Quantitative Classification and Environmental Interpretation of Secondary Forests 18 Years After the Invasion of Pine Forests by Bursaphelenchus xylophilus (Nematoda: Aphelenchoididae) in China

Abstract

With growing concerns over the serious ecological problems in pine forests (Pinus massoniana, P. thunbergii) caused by the invasion of Bursaphelenchus xylophilus (the pine wood nematode), a particular challenge is to determine the succession and restoration of damaged pine forests in Asia. We used two-way indicator species analysis and canonical correspondence analysis for the hierarchical classification of existing secondary forests that have been restored since the invasion of B. xylophilus 18 years ago. Biserial correlation analysis was used to relate the spatial distribution of species to environmental factors. After 18 years of natural recovery, the original pine forest had evolved into seven types of secondary forest. Seven environmental factors, namely soil depth, humus depth, soil pH, aspect, slope position, bare rock ratio, and distance to the sea, were significantly correlated with species distribution. Furthermore, we proposed specific reform measures and suggestions for the different types of secondary forest formed after the damage and identified the factors driving the various forms of restoration. These results suggest that it is possible to predict the restoration paths of damaged pine forests, which would reduce the negative impact of B. xylophilus invasions.

Many scholars believe that the pine wood nematode Bursaphelenchus xylophilus (Steiner and Buhrer) Nickle, which causes pine wilt disease, may have originated in North America (Dropkin et al. 1981, Bergdahl 1988). In 1982, the first occurrence of pine wilt disease in mainland China was found on Pinus thunbergii in the area of the Sun Yat-sen Mausoleum, Nanjing (Eastern China). At that time, only 256 dead trees were found (Sun 1982, Chen et al. 1983). Just 30 years later, the epidemic had rapidly expanded and spread to 16 provinces, including Anhui, Chongqing, Fujian, Guangdong, Guangxi, Guizhou, Henan, Hubei, Hunan, Jiangsu, Jiangxi, Shandong, Shaanxi, Sichuan, Yunnan, and Zhejiang (Zhang and Luo 2003, Wu 2004, State Forestry Administration 2014). The epidemic has caused, both directly and indirectly, an accumulated loss of hundreds of billions of yuan. At present, the epidemic is moving closer to famous scenic spots, such as Huangshan, which is a World Natural Heritage Site (Zhang et al. 2004). In Japan, P. thunbergii forests have been the most heavily affected by B. xylophilus; many pine trees have died since 1905, when the first dead tree was found in Kyushu, and pine wilt disease has become a national concern (Mamiya 1972, 1988; Maehara and Futai 2000; Kanzaki and Futai 2006). B. xylophilus was also found for the first time in 1988 in Busan, South Korea, in 1999 in Portugal, and in 2011 in Spain (Enda 1997, Mota et al. 1999, Mota and Vieira 2008, Robertson et al. 2011). At present, pine wilt disease caused by B. xylophilus poses a large threat to the pine forests of Asia and Europe (Evans et al. 1996, Kulinich and Orlinskii 1998, Sousa et al. 2002). In China, forests of Pinus massoniana and P. thunbergii have sustained the most serious damage, and pine wilt disease of these species is of major concern (Chai and Jiang 2003; Zhao et al. 2003, 2007; Shi et al. 2007a,b). Of these two species, P. massoniana is more widely distributed and accounts for a large proportion of the pine trees in China.
It is the primary coniferous species in subtropical regions. Under natural conditions, it is usually distributed in poor geological environments, such as hills, steep slopes, and sites with poor, dry soils. The long-term geological and environmental conditions in these environments have shaped its site-specific ecological characteristics, namely high seed germination, strong seedlings, tolerance of infertile soils, attraction to sunlight, and strong reproductive ability. In places where native evergreen broadleaf forests are under intensive human disturbance, the evergreen broadleaf species have had difficulty in self-renewing and recovering in a short period of time; however, the introduction of P. massoniana seeds has been very successful for reforestation. These areas are called pioneer pine forests or pioneer communities (Anonymous 1991, Tian 2005). P. thunbergii originated from the east coast of Japan and the Korean Peninsula and is widely distributed in the coastal provinces of China, such as Shandong, Jiangsu, Anhui, Zhejiang, and Fujian. P. thunbergii is light-demanding, is resistant to drought and infertile soils, is susceptible to waterlogging and cold temperatures, and grows well in areas with a warm and moist maritime climate. It grows best in deep and loose sandy soil layers containing humus. Because of its resistance to sea fog and wind, it can also grow on beach areas with saline soil. Due to the high vulnerability of P. thunbergii to pine wilt disease, B. xylophilus has devastated P. thunbergii forests in Asian countries, such as Japan and China, from the 1980s to the present (Kuroda 2004). Zhejiang Province has 2.6 × 10⁶ hm² of P. thunbergii forest, which accounts for 49.7% of the forest area and 59.2% of the stock volume of the province. It is the primary landscape resource for many scenic areas in China (Anonymous 1980). The first incidence of pine wilt disease in Zhejiang Province was discovered in August 1991. In recent years, the degree of damage in Zhejiang Province has been increasing, and the damaged area has expanded to 2.7 × 10⁴ hm², which accounts for >60% of the total damaged area in China. The main forests on Zhoushan Island, which is part of Zhejiang Province, were primarily pure P. massoniana and P. thunbergii forests before 1990 and were seriously damaged by B. xylophilus around 1993. Therefore, Zhoushan City, Zhejiang, was an ideal location for the present study. There are many reports on the evolution and restoration of P. massoniana and P. thunbergii forest systems (Jing and Cai 2005, Ou et al. 2005). However, there has been no specific study of pine forest restoration after the invasion of B. xylophilus. This knowledge deficit is due to the lack of preinvasion plant community composition data (remote-sensing imagery can acquire data only for the canopy and not for the understory layers). To solve this problem, we analyzed resource inventory data from a few years before and after the B. xylophilus invasion. The Zhoushan Forestry Institute had conducted plant community analyses of pine forests in 1992, before the B. xylophilus invasion. We selected 24 typical land samples, based on distribution area and site, that were close to the 1992 land samples, and conducted vegetation surveys between July and September 2010 to determine the plant community structure. This study targeted pure P. massoniana or P. thunbergii forests, mixed forests of P. thunbergii and P. massoniana, and mixed forests of either P. massoniana or P. thunbergii and broadleaf trees.
We revisited the forest land samples studied by the Zhoushan Forestry Institute in 1992 and compared the new data of 2010 with the historical data. Land samples containing pine trees after the B. xylophilus invasion were selected for this investigation, which consisted of an analysis of the forest community structure after 18 years of natural restoration, considering the characteristics of the local plant species. In the early stages of this study, we also found that Zhoushan City had adopted physical control measures for pine wilt disease, including the cutting and removal of damaged trees, before and after 1995. After repeated cutting and removal of trees at different stages, the forests were allowed to regenerate naturally. At present, the restored secondary forests in areas where trees had been cleared due to the B. xylophilus invasion have a high plant density, many small-diameter trees, and a low regeneration capacity. These results indicate that after a B. xylophilus invasion, the qualities of P. massoniana and P. thunbergii differ significantly among communities. Regardless of the outcome of restoration, to promote the rapid recovery of pine forests damaged by an invasion, the key factors that affect the restoration direction and tree growth within all types of plant communities must first be identified. These factors (e.g., soil and light) are the main controlling factors that determine the direction of pine forest regeneration and restoration after an invasion by B. xylophilus. Therefore, based on the analysis of the restoration trends in P. massoniana and P. thunbergii communities after B. xylophilus invasions, this article targets the biological characteristics of the restoration types and regeneration species to further analyze the 'driving factors' that lead to the various types of restoration. Considering those factors together with the actual conditions of the study region, we propose specific system recovery and transformation strategies.

Materials and Methods

Overview of the Study Region. The Zhoushan Islands are located in the southern part of the mouth of the Yangtze River, between the East China Sea and the outer edge of Hangzhou Bay, at 29°32′-31°04′ N, 121°31′-123°25′ E. The islands comprise a total land area of 1,440.2 km², made up of 1,390 islands. The majority of the hills on the islands lie at altitudes of <250 m, and they encompass an area on the northern edge of the subtropical monsoon climate zone that is subject to marine influence. Zhoushan Island is the largest of these islands and is also China's fourth largest island, with an area of 502 km². Its highest peak is Huangyangjian, which has an elevation of 503.6 m. The annual average temperature is 16.5°C. The average temperature in the hottest month (August) is 27.3°C, and the average accumulated temperature ≥10°C is 5,120.8°C. The frost-free period is 251 d, the average annual precipitation is 1,351.3 mm, the average annual evaporation is 1,470.4 mm, and the average annual relative humidity is 79%. The mountain soils within this area are red soil and skeletal soil (Wang et al. 2011).

Data Collection. According to the 1992 Zhoushan Island vegetation survey data provided by the Zhoushan Forestry Institute, before the damage, most of the pine forests could be classified as one of four types: pure P. massoniana forest, pure P. thunbergii forest, mixed P. massoniana and broadleaf (generally Liquidambar formosana and Quercus fabri) forest, and mixed P. thunbergii and broadleaf forest.
The regeneration layer of the tree species in these forests consists mostly of Q. fabri. Based on the existing vegetation and topography of the islands, avoiding the local residential areas, and considering the distribution area and site, 24 typical 20 m × 20 m land samples (summarized in Table 1) were chosen for analysis in communities near the land samples studied before the invasion (1992). From July to September 2010, the species name, diameter at breast height (DBH), tree height, and crown width of the trees with DBH ≥ 1 cm were recorded. At the same time, 11 environmental factors were measured: seven terrain factors (elevation, slope steepness, slope aspect [AS], slope position [SP], bare rock ratio [BRR], distance from the coast in kilometers, and relative humidity of the air in the forest) and four soil factors (soil depth [SD], soil humus depth [HD], soil pH, and soil moisture [SM]).

Data Analysis. Using key values, such as the dominance index of the tree species in the land samples, the importance value (IV) can be calculated according to the following formula (Fang et al. 2009):

IV = (relative abundance + relative frequency + relative basal area at breast height)/300

Data from the 42 major tree species in the land samples with the greatest IVs were used to establish a species matrix. Following the method of Song et al. (2010), we assigned AS values on a scale of 1-8: 1 = north slope, 2 = northeast slope, 3 = northwest slope, 4 = east slope, 5 = west slope, 6 = southeast slope, 7 = southwest slope, and 8 = south slope, where higher values correspond to more sunlight and hotter and drier conditions. The SP values are as follows: 0 = low-lying land, 1 = bottom of a slope, 2 = below the middle of the slope, 3 = middle of the slope, 4 = above the middle of the slope, 5 = top of the slope, and 6 = peak. The 11 environmental factors described above were used to establish an environmental matrix. We used the two-way indicator species analysis method (Bowman and Fensham 1991, Vermeerscha et al. 2003, Zhang 2011) and canonical correspondence analysis (CCA) (Chen 1992, Jiang et al. 1994, Gao and Zhang 2010) to classify the communities. In ecological studies with several samples, biserial correlation analysis is often used to study the correlation between the presence of a species and environmental factors (Brogden 1949). For each sample, the presence of a tree species (present as 1, absent as 0) and the values of the environmental factors are recorded; the environmental observations are thereby divided into two groups according to the presence or absence of the species, and the biserial correlation coefficient is used to describe the correlation between the species and the environmental factor. The coefficient is calculated as follows:

r_p = [(M_p − M_q)/S_x] √(pq)

where r_p is the biserial correlation coefficient, M_p and M_q are the average values of the two groups p and q, S_x is the standard deviation, and p and q represent the proportions of observations in the two groups. A t-test can be used to test the significance of the coefficient. The CANOCO for Windows 4.5 software was used for the CCA (Leps and Smilauer 2003, Peres-Neto et al. 2006). The remaining data analysis was performed using the R software 'vegan' package.
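For concreteness, the three quantitative devices just described can be sketched in a few lines of Python. The plot data below are invented for illustration (they are not survey values), and scipy's pointbiserialr implements the same r_p statistic written above; this is a sketch of the calculations, not the software actually used in the study.

import numpy as np
from scipy.stats import pointbiserialr

# (1) Importance value per species (Fang et al. 2009): each relative term is a
# percentage, so the three terms sum to 300 and the IVs sum to 1.
def importance_value(abundance, frequency, basal_area):
    rel = lambda x: 100.0 * np.asarray(x, float) / np.sum(x)
    return (rel(abundance) + rel(frequency) + rel(basal_area)) / 300.0

iv = importance_value([120, 40, 15], [20, 12, 5], [3.2, 1.1, 0.4])
print(iv, iv.sum())

# (2) TWINSPAN-style pseudo-species coding: one presence/absence indicator per
# (species, cut level) turns the quantitative IV matrix into the binary data
# that two-way indicator species analysis operates on. The last cut level of
# the text, 1.0, would give an all-zero column for IVs <= 1 and is omitted.
cut_levels = [0.0, 0.1, 0.2, 0.4, 0.6]
iv_matrix = np.array([[0.05, 0.35], [0.62, 0.00]])   # plots x species, invented
pseudo = np.hstack([(iv_matrix > c).astype(int) for c in cut_levels])
print(pseudo)

# (3) Point-biserial correlation between species presence (1/0) and one
# continuous factor, with its t-test p-value.
presence = np.array([1, 0, 1, 1, 0, 0, 1, 0])              # species in 8 plots
humus_cm = np.array([12., 3., 9., 15., 2., 4., 11., 5.])   # hypothetical HD
r, p = pointbiserialr(presence, humus_cm)
print(f"r_p = {r:.3f}, P = {p:.3f}")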
Results

Quantitative Classification of the Communities. To determine the tree species classification on Zhoushan Island based on the IV matrix of woody plants with DBH ≥ 1 cm in all of the land samples, two-way indicator species analysis (TWINSPAN) was used for the hierarchical classification of the 24 land samples. Values of 0, 0.1, 0.2, 0.4, 0.6, and 1.0 were selected as the cut levels of the species. TWINSPAN divided the 24 land samples into nine groups (Fig. 1). At the second level, the present forest was divided into coniferous and broadleaf forests. The third level reflects the overall characteristics of the secondary forest formed on Zhoushan Island after damage by B. xylophilus. However, according to the actual vegetation data obtained from the typically investigated community conditions, and taking into account spatial continuity, group 8 was merged with group 7. Considering the recovery characteristics after pine wood nematode interference, a fourth level was chosen to divide the 24 typical land samples into seven groups, as follows.

1. Pure P. massoniana forests (land samples 5 and 6). This type of community originally consisted of P. massoniana forests and transformed into secondary pine forests after being damaged by B. xylophilus. It is mostly distributed in the relatively poor soil of mountain or hilltop areas along the coast. The average DBH of P. massoniana in these forests is 5.6 cm, and broadleaf trees, such as Q. fabri, P. thunbergii, and Platycarya strobilacea, are sparsely scattered. The young trees and seedlings of P. massoniana form the majority of the regeneration layer. Loropetalum chinensis is very dense in the understory. Overall, this group is a typical subtropical pine forest with P. massoniana as the pioneer species.

2. P. massoniana and Q. fabri forests (land samples 4, 11, and 13). This type of community originated from pure P. massoniana forests that received less damage from B. xylophilus. After the selective cutting and removal of damaged trees, the original P. massoniana forests were partially preserved, but no seedlings were distributed. The average DBH of P. massoniana in this type of forest is 8.7 cm. This type of forest is mostly located on the lower slopes at lower altitudes and has a thicker soil layer, allowing the understory Q. fabri seedlings to grow rapidly and potentially become the primary canopy species. However, P. massoniana still has a dominant position. The shrubs in this forest type are mainly L. chinensis and Rhododendron simsii. This is the main pine forest type on Zhoushan Island.

3. Miscellaneous hardwood and P. massoniana forests (land samples 16, 17, and 24). This type of community originated as pure P. massoniana forest, but P. massoniana has lost its dominant position in the canopy and has been replaced by local broadleaf trees, such as L. formosana, Ilex purpurea, and Albizia julibrissin, as well as dominant shrubs. This type of forest is mainly distributed in mountain areas far from the coast. It receives ample sunlight and has a thick humus soil layer.

4. Q. fabri and miscellaneous wood forests (land samples 7, 8, 9, 10, and 18). This type of community originated as pure P. massoniana or pure P. thunbergii forest. When P. massoniana or P. thunbergii became sparse, Q. fabri became the dominant species in the community and formed mixed broadleaf forests with dominant accompanying species, such as A. julibrissin, Ficus erecta var. beecheyana, and P. chinensis. The shrubs in the understory are mostly R. simsii and L. chinensis.
5. Q. fabri and L. formosana forests (land samples 1, 12, 14, and 15). This type of community no longer includes coniferous tree species and evolved from the original pine forest after the repeated clearance of damaged trees. Because the local common species L. formosana and Q. fabri grew more quickly, they have become the main canopy species and are sometimes mixed with deciduous broadleaf species such as Cyclobalanopsis glauca, Symplocos setchuenensis, and Styrax confusus. The tree layer in this community also includes scattered evergreen species, such as Castanopsis sclerophylla and I. purpurea. It is a transition type between deciduous broadleaf forests and evergreen broadleaf forests.

6. Quercus variabilis and L. formosana forests (land samples 2, 3, 21, 22, and 23). The original forests associated with this type of community included P. massoniana-broadleaf mixed forests or P. thunbergii-broadleaf mixed forests. After the conifers were damaged and removed, the original mixed broadleaf tree species, such as Q. variabilis, L. formosana, and Q. acutissima, rapidly grew into large trees with heights of 8-12 m. The major accompanying species are Q. fabri, A. julibrissin, and Dalbergia hupeana. The shrubs in the understory are mostly Symplocos paniculata, R. simsii, and L. chinensis.

7. Celtis sinensis and A. julibrissin forests (land samples 19 and 20). The original forest of this community was pure P. thunbergii forest and was mostly located along the coast of the northern island, with characteristics such as poor light, high relative humidity, and an abundance of shrubs and grasses. After the P. thunbergii were damaged and removed, the original forest evolved into a deciduous broadleaf forest with C. sinensis, A. julibrissin, and F. erecta var. beecheyana as the main tree species and broadleaf trees such as Ulmus parvifolia, Cudrania tricuspidata, and Zanthoxylum ailanthoides as the accompanying species. The shrubs in the understory are mostly Elaeagnus umbellata and Camellia japonica.

Community Ordination and Environmental Interpretation. To study the correlation between the spatial distribution of species and environmental factors, a 24 × 42 species matrix and a 24 × 11 environmental matrix were used for CCA ordination (Fig. 2). This diagram directly reflects the impact of all of the environmental factors on species distribution and the correlations among the environmental factors, which are represented in the figure as line segments with arrows. The quadrant of an arrow's location indicates the positive or negative correlation between the environmental factor and the ordination axis. The length of the connecting line represents the degree of correlation between the environmental factor and the species distribution; the longer the connecting line, the greater the correlation. The angle between the connecting line and an ordination axis represents the correlation between that environmental factor and the axis; the smaller the angle, the greater the correlation. The closer a species is to a land sample, the greater the weight the species has in that quadrant (Lai and Mi 2010). It can be observed from the figure that HD has the longest line segment and forms the smallest angle with the second axis, indicating that HD is the most important factor affecting species distribution.
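These biplot reading rules can be made concrete with a small numerical sketch; the arrow coordinates below are invented and are not the Fig. 2 scores.

import numpy as np

arrows = {"HD": (0.15, 0.92), "SD": (0.55, 0.48), "D-sea": (-0.60, 0.22)}
for name, (x, y) in arrows.items():
    length = float(np.hypot(x, y))                        # strength of factor
    angle2 = float(np.degrees(np.arccos(abs(y) / length)))  # angle with CCA2
    print(f"{name}: length = {length:.2f}, angle with CCA2 = {angle2:.1f} deg")

A long arrow nearly parallel to an axis (small angle), as for HD here, identifies the factor most strongly correlated with that axis.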
Other factors, namely SD, AS, distance to the sea in kilometers (D-sea), soil pH (PH), and air humidity (AH), have the next greatest impact on species distribution, followed by altitude (AI) and SM. The slope (SI) and SP have minimal impact. The relative importance of these factors is related to the fact that the majority of Zhoushan Island is made up of low hills. In addition, the collinearity of two pairs of factors, namely SD and AS, and BRR and SM, was found to be relatively strong. The results showed that the environmental factors explain 56.83% of the species data. This result, along with the results of the CCA, was used to conduct a Monte Carlo permutation test (999 permutations) to determine whether the explanatory level of the 11 environmental factors with respect to the species distribution was significant. The test yielded a significance of P = 0.008, indicating that the ordination results are statistically meaningful. Furthermore, correlation coefficient and significance tests were conducted for the environmental factors on the first two ordination axes (CCA1, CCA2). The results (Table 2) showed that HD is a very significant impact factor, while PH, SD, AS, and D-sea are significant impact factors. These results could be explained by the fact that the community is still in the initial stages of restoration by the pioneer species. The total eigenvalue of the corresponding species matrix analysis was 4.531, and the canonical eigenvalue of the species under the constraints of the 11 environmental variables was 2.575. The individual explanatory power of the soil factors on the spatial distribution of the species was 18.54%, and the individual explanatory power of the geological factors was 33.43%. The common explainable portion arising from the interaction between the two was 4.95%, and the unexplained portion was 43.17%. The results are shown using a Venn diagram (Fig. 3).

Analysis of the Effects of Environmental Factors on Species Distribution. To better explain the level of influence of environmental factors on the spatial distribution of the tree-layer woody plants on Zhoushan Island and to identify the factors that play the largest role in determining the different types of restoration, a biserial correlation analysis of species and environmental factors was conducted. Based on integrated consideration of the occurrence frequency and the IVs of species in the 24 land samples, 12 major tree species out of the 42 species of woody plants were selected and analyzed against the 11 environmental factors using biserial correlation analysis. The results (Table 3) show that SP (P = 0.049) and the spatial distribution of P. thunbergii had a significant negative correlation; SD (P = 0.018) and HD (P = 0.012) had significant positive correlations with the spatial distribution of F. erecta var. beecheyana, and the impact of AS on its spatial distribution was also significant (P = 0.007); AS (P = 0.050) and the spatial distribution of D. hupeana had a significant positive correlation; PH (P = 0.044) and the spatial distribution of I. purpurea had a significant positive correlation; BRR (P = 0.032) and the spatial distribution of L. chinensis had a significant positive correlation; BRR (P = 0.032) and the spatial distribution of C. glauca had a significant positive correlation; and PH (P = 0.039) and the spatial distribution of S. setchuenensis had a significant positive correlation.
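The variance partitioning quoted above (Fig. 3) amounts to the following arithmetic on the reported numbers; small rounding differences against the quoted 4.95% remain because the inputs are given to limited precision.

total_inertia = 4.531     # total eigenvalue of the species matrix analysis
canonical = 2.575         # canonical eigenvalue under all 11 variables
explained = canonical / total_inertia          # -> 0.5683, i.e. 56.83%
soil_pure, terrain_pure = 0.1854, 0.3343       # partial CCA components
shared = explained - soil_pure - terrain_pure  # joint soil-terrain portion
print(f"explained {explained:.2%}, shared {shared:.2%}, "
      f"unexplained {1 - explained:.2%}")      # ~56.83%, ~4.9%, ~43.17%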
Another five environmental factors, namely SM, AH, AI, SI, and D-sea, were correlated with the spatial distribution of species, but the tests showed that these correlations were not significant. From the biserial correlation analysis of the tree species and the environmental factors (Table 3), it can be seen that the spatial distributions of tree species such as P. massoniana, Q. fabri, and L. formosana, which have pioneer characteristics, including attraction to sunlight and drought resistance, were negatively correlated with environmental factors such as SD, HD, and AI, while tree species that place higher demands on the soil environment, such as F. erecta var. beecheyana, A. julibrissin, P. thunbergii, and D. hupeana, showed positive correlations with HD. Most of the tree species showed a negative correlation with D-sea, which may be related to the heavy winds along the coast and the relative infertility of the soil there, which is not suitable for the growth of the majority of broadleaf tree species. Overall, the trends of the current species distributions along the environmental factors are related to the fact that the secondary communities are still in the pioneer stage of restoration.

Discussion and Conclusion

By the completion of the analysis phase of this study, the original pine forests on Zhoushan Island had been restored to the following seven types of secondary forest groups: pure P. massoniana forest, P. massoniana and Q. fabri forest, miscellaneous wood and P. massoniana forest, Q. fabri and miscellaneous wood forest, Q. fabri and L. formosana forest, Q. variabilis and L. formosana forest, and A. julibrissin and C. sinensis forest. These results show that the recovery of pine forest communities with different origins, in different geographic environments, and after sustaining different degrees of damage follows certain predictable paths. In the future, the direction of pine forest restoration can be forecast to some extent based on this information. After being affected by B. xylophilus and undergoing 18 years of natural recovery, the original pine forests on Zhoushan Island have developed from pure conifer forests or conifer-broadleaf mixed forests into multiple complex forest types, which at present include pure conifer forests, conifer-broadleaf mixed forests, deciduous broadleaf mixed forests, and evergreen and deciduous broadleaf mixed forests. The community composition and structure tend to become more complicated, and this can mostly be considered as indicating progressive, or forward, restoration (Lin 1986). However, some of the land samples, such as S5 and S6, also show backward restoration. From the viewpoint of subtropical natural restoration, the B. xylophilus interference has accelerated the restoration of the Zhoushan Island vegetation communities from coniferous forests to evergreen broadleaf forests (communities typical of a subtropical climate), so the local plant communities have gained complexity and stability. From another perspective, however, B. xylophilus is indeed a destructive pathogen. A B. xylophilus invasion can lead to the collapse of fragile ecosystems, which might even restore 'backwards' to weeds and shrubs (Shi 2005).
Our survey also found that many human factors have interfered with the natural recovery of the damaged pine forests, such as the use of forest land to plant Myrica rubra and the construction of electrical communication transmission equipment on mountaintops, which has had a significant impact on the surrounding plant growth. In the 2010 investigation of the Zhoushan Island pine forests, researchers found that the secondary P. massoniana and P. thunbergii forests formed after the previous B. xylophilus invasion were already suffering varying degrees of B. xylophilus damage, indicating that the secondary pine forests face a 'second round' of damage from B. xylophilus and that the recovery of the pine forest vegetation communities faces considerable volatility and uncertainty. After the invasion of B. xylophilus and during the restoration of the pine forest, many of the forests sustained damage and degradation. Different restoration schemes should be developed to target the different communities formed after an invasion. For the existing vegetation communities at various phases of degradation, different restoration strategies are required because of the differences in species composition, structure, propagule bank, and soil matrix condition compared with the control community (Shen et al. 2005). According to the results of this study, when targeting different communities for restoration, we must take measures suited to the local conditions, propose different restoration strategies for pine forests with different degrees of damage, and reduce the negative impact brought by the invasion of B. xylophilus to the P. massoniana forest system. Specifically, for the pine forests in areas with a thinner soil layer but thicker humus (which is related to the growth characteristics of P. massoniana and P. thunbergii), the restoration measures should be as follows: regularly cut off the damaged pine trees and L. chinensis in the forests and replant the forest gaps with barren-soil-resistant tree species, such as Q. fabri, which frequently accompany the local pine trees (as in groups 1 and 2). For the pine forests located in areas with a relatively thick soil layer, species such as Q. fabri and L. formosana should be replanted in the forest. The presence of these local broadleaf tree species alongside P. massoniana gradually improves the forest soil environment, and such human-assisted conversion toward conifer-broadleaf mixed forests reduces the risk of destruction by B. xylophilus from generation to generation (as in group 3). For group 4 and 5 forests, the physical control measure of clear-cutting was taken after the infestation because of the severe damage present. High levels of germination in the existing forests resulted in a higher plant density in these forests. From the CCA ordination graph, we can see that these land samples, which are located mostly on the sunward side of low-altitude hills within the islands, had a high soil pH and a thick soil layer. The recommended recovery measure for this type of pine forest is based on the appropriate thinning of trees to optimize the forest structure and the selective planting of tree species matched to the land. Because those forests are far from the sea, their restoration could be focused on landscape ecology by planting evergreen tree species such as Schima superba, C. sclerophylla, and I. purpurea.
Economically productive forests can also be developed in the forest areas that have a low average age, contain a higher proportion of shrubs, and are close to residential areas (such as S12 and S24). After thinning the forests, the main local economic tree species (Shen 2009), such as M. rubra and Citrus reticulata, could be planted. For group 6 and 7 forests, the original communities were pine-broadleaf mixed forests with only a small proportion of pine trees. After the damaged pine trees were removed, the original broadleaf tree species in the mixture, such as Q. variabilis and L. formosana, rapidly occupied the ecological niches left by the removed pine trees, quickly growing into tall trees. The communities evolved into a stable forest structure consisting of tall hardwood trees, dominant accompanying tree species, tree seedlings, and saplings. We suggest that this type of pine forest does not require special recovery and reform measures; closing the forest to harvesting and simply allowing the trees to grow, while monitoring pest infestations, are sufficient to support oriented forest recovery. Another important aspect of this study is that the CCA of the 11 environmental factors showed that the explanatory level of the environmental factors with respect to the spatial distribution of species was 56.83%, while the unexplained portion accounted for 43.17%. In addition to unknown environmental factors that were not considered, the spatial distribution of species during the natural recovery of secondary bare land is not only subject to environmental constraints but is also related to the original species in the communities, the surrounding species sources, and human intervention. For example, we found that both sides of the village roads 1 km from S16 and S17 were planted with L. formosana and that the abundance of L. formosana on the mountainsides flanking the roads was significantly inversely proportional to the distance from the road. Therefore, in light of the CCA ordination results, identifying the driving factors of the different restoration types could be important for assessing the recovery potential of the damaged pine forest vegetation in the future. With this knowledge, we will be able to help the damaged pine forests recover as soon as possible. For the pine forests in better geographical locations (groups 1 and 2), the main environmental driving factors of restoration were HD and SD. For the pine forests in other geographical locations (group 3), the main driving factors of restoration were the original species of the communities, the surrounding species, and human intervention. The broadleaf forests with a thick soil layer, sufficient sun, and sufficient water (groups 4 and 5) faced fewer environmental constraints on plant growth and therefore developed a higher forest density; in these forests, the main driving factors of restoration are competition for space and resources. For the broadleaf forest restored from conifer-broadleaf mixed forest (group 6), the forest already had tall hardwood trees and a good soil environment. The main environmental driving factors of restoration in this case were SD, PH, and BRR.
Because of the higher mountainous terrain to its north, the broadleaf forest located along the coast of the northern island (group 7) has a high canopy density that blocks the strong sea breeze occurring throughout the year, resulting in a lack of sunlight and high AH; the main environmental driving factors for its restoration were AH and AI. Furthermore, ecological issues such as the slow natural recovery of vegetation, large fluctuations in community structure, and forest landscape fragmentation have occurred. External factors, such as the large bridge built in December 2009 to connect Zhoushan Island to mainland China and the establishment of the Zhoushan Island District, approved in July 2011, will strengthen the domestic and international trade of Zhoushan Island. However, the currently fragile and unstable ecosystem, with its low resistance to potential invasive organisms, is worrying. Therefore, the secondary forest that has been developing since the B. xylophilus invasion on Zhoushan Island urgently needs to be artificially cultivated and regenerated, and the local ecosystem should be stabilized by classified reformation according to local conditions, based on the principle that protection and landscaping should be the major focuses in the management of Zhoushan Island's forest resources.
Investigating Lorentz Invariance Violation with the long baseline experiment P2O

Abstract

One of the basic propositions of quantum field theory is Lorentz invariance. The spontaneous breaking of Lorentz symmetry at a high energy scale can be studied in low energy extensions of the Standard Model in a model-independent way through effective field theory (EFT). Present and future long-baseline neutrino experiments offer a scope to observe such Planck-suppressed physics of Lorentz invariance violation (LIV). A proposed long baseline experiment, Protvino to ORCA (dubbed "P2O"), with a baseline of 2595 km, is expected to provide good sensitivities to unresolved issues, especially the neutrino mass ordering. P2O can offer good statistics even with a moderate beam power and runtime, owing to the very large (∼6 Mt) detector volume of KM3NeT/ORCA. Here we discuss in detail how the individual LIV parameters affect neutrino oscillations at the P2O and DUNE baselines at the level of probability, and derive analytical expressions to understand interesting degeneracies and other features. We estimate ∆χ² sensitivities to the LIV parameters, analyzing their correlations among each other and with the standard oscillation parameters. We calculate these results for P2O alone and also carry out a combined analysis of P2O with DUNE. We point out crucial features in the sensitivity contours and explain them qualitatively with the help of the relevant probability expressions derived here. Finally, we estimate constraints on the individual LIV parameters at 95% confidence level (C.L.) from the combined analysis of the P2O and DUNE datasets, and highlight the improvement over the existing constraints. We also find that the additional degeneracy induced by the LIV parameter a_ee around −22 × 10⁻²³ GeV is lifted by the combined analysis at 95% C.L.

Introduction

The phenomenon of neutrino oscillation, first experimentally established more than twenty years ago from observations of atmospheric and solar neutrinos [1, 2], is one of the most transparent currently available portals into the rich physics beyond the Standard Model (BSM) of particle physics. In the standard scenario, neutrino oscillation is governed by six parameters: the three mixing angles (θ12, θ13, θ23), one Dirac CP phase (δ13), and two mass-squared differences (∆m²21, ∆m²31). So far θ12, θ13, ∆m²21, and the magnitude |∆m²31| have been measured with good precision by various neutrino experiments. One of the principal focuses of the neutrino oscillation community is now the measurement and implications of the remaining parameters: the leptonic (Dirac) CP phase δ13, the sign of ∆m²31 (denoting the correct neutrino mass ordering), and the octant of the mixing angle θ23. A value of δ13 not equal to zero or π would indicate CP violation in the lepton sector. This, in turn, can potentially shed light on another fundamental puzzle, namely the baryon asymmetry of the universe [3]. Resolution of the correct mass ordering and octant can help narrow down the plausible set of models explaining neutrino mass generation. The presently running long-baseline neutrino oscillation experiments Tokai to Kamioka (T2K) [4] and NuMI Off-axis νe Appearance (NOνA) [5] are already giving us glimpses of the resolution of the issues mentioned above. T2K data [6] have ruled out CP conservation (δ13 = 0, π) at the 95% confidence level (C.L.). Irrespective of the mass ordering, at 99.73% C.L.
(3σ), T2K excludes 42% of the entire parameter space for δ13 (mostly around +π/2), restricting the allowed region to roughly δ13 ∈ [−π, 0.04π] ∪ [0.89π, π]. NOνA data [7], on the other hand, indicate a slight preference for θ23 lying in the higher octant (HO) at a C.L. of 1.6σ. They also exclude most of the choices near δ13 = π/2 at a C.L. of 3σ for inverted mass ordering (IO). These measurements are expected to become more accurate as more data pour in. Though the global analyses of neutrino data [8-11] show an indication towards NO, with θ23 possibly lying in the higher octant, the CP phase still has a large uncertainty. In the near future, various other next-generation neutrino experiments with more sophisticated detection technologies are expected to start taking data. These experiments include, among others, the Deep Underground Neutrino Experiment (DUNE) [12, 13], Tokai to Hyper-Kamiokande (T2HK) [14], Tokai to Hyper-Kamiokande with a second detector in Korea (T2HKK) [15], the European Spallation Source ν Super Beam (ESSνSB) [16], the Jiangmen Underground Neutrino Observatory (JUNO) [17], and Protvino to ORCA (P2O) [18]. These experiments are expected to reach an unprecedented (∼ a few percent) level of precision in measuring the oscillation parameters and hence are also sensitive to various possible new physics effects. CPT symmetry, one of the most sacred foundations of local relativistic quantum field theory, is based on the assumptions of hermiticity of the Hamiltonian, Lorentz invariance, and local commutativity. Since an interacting theory with CPT violation also breaks Lorentz invariance [19], one widely used strategy to probe CPT violation is to analyze the associated Lorentz invariance violation (LIV). Spontaneous breakdown of Lorentz invariance may occur in theories of quantum gravity (for example, in string theory) at the Planck scale (M_P ∼ 10¹⁹ GeV), forcing a Lorentz tensor field to acquire a non-zero vacuum expectation value and thus selecting a preferred spacetime direction [20-24]. It has been shown in the literature that the Standard Model (SM) of particle physics can be extended to construct a low energy effective field theory (EFT), namely the Standard Model Extension (SME) [25-27], that includes such Lorentz invariance violating effects, suppressed by M_P. Neutrino oscillation, by virtue of its interferometric nature, can probe such LIV effects within the SME, thereby offering a probe of Planck-scale physics. The proposed P2O experiment [18, 56-58] will have a baseline extending approximately 2595 km from the Protvino accelerator complex to the ORCA/KM3NeT detector in the Mediterranean, both of which already exist. The P2O baseline is most sensitive to the first νµ → νe oscillation maximum around 4-5 GeV. Neutrino interactions around this energy are dominated by deep inelastic scattering, which is relatively well described theoretically compared to, e.g., 2-2.5 GeV (relevant for DUNE), where resonant interactions and nuclear effects can potentially impact the measurements more significantly [59-64]. Such a very long baseline and the relatively higher energy of the oscillation maximum give P2O an excellent level of sensitivity, especially towards the neutrino mass ordering. As illustrated in reference [65], the P2O baseline is also favourable for determining the mass hierarchy because of the much weaker interference from the hierarchy-CP phase degeneracy.
The very large detector volume of 6 Mt at ORCA will allow the detection of thousands of neutrino events per year even with a very long baseline and a moderate beam power, offering sensitivities to the neutrino mass ordering, CP violation, and the θ23 octant that are competitive with the current and upcoming long-baseline neutrino experiments [18, 66]. Recently it has been proposed that it is also possible to reach unprecedented sensitivity to leptonic CP violation at P2O using tagged neutrino beams, by utilizing the kinematics of neutrino production in accelerators and recent advances in silicon particle detector technology [67]. In recent years, there has been some interest in estimating the new physics capabilities of P2O. Reference [68] discussed the sensitivity reach of P2O to non-unitarity of the leptonic mixing matrix and also estimated how it would affect the standard physics searches. The authors of [65] discussed the possible optimization of P2O for exploring non-standard neutrino interactions. In the present work, we analyze the capability of P2O to probe violations of Lorentz invariance and CPT symmetry and estimate the constraints that can be put on these new physics parameters. The present manuscript is organised as follows. In Sec. 2 we briefly describe the formalism of LIV. In Sec. 3 we discuss in detail the probability expressions in the presence of LIV parameters and provide a thorough analysis of the changes induced by each LIV parameter by means of heatplots. Sec. 4 describes the simulation procedures followed in this work. Secs. 5 and 6 illustrate the ∆χ² sensitivity results, showing the correlations of the LIV parameters among themselves and with the standard oscillation parameters δ13 and θ23. Sec. 7 presents our final results as constraints on the LIV parameters, followed by the summary and conclusion in Sec. 8.

Theoretical background

We follow the widely used formalism of introducing Planck-suppressed CPT/Lorentz invariance violating effects into the Lagrangian of the Standard Model Extension (SME), as developed in [25-27, 69-72]. The Lagrangian relevant for neutrino propagation in the SME is then given by

  L = Ψ̄ (i γ^µ ∂_µ − M + Q̂) Ψ,   (2.1)

where Ψ is the spinor containing the neutrino fields. The first two terms inside the parentheses are the usual kinetic and mass terms of the SM Lagrangian, while the LIV effect is incorporated through the operator Q̂. The Lorentz invariance violating term, which is suppressed by the Planck mass scale M_P, can be written in the basis of the usual gamma-matrix algebra. Considering only the renormalizable and CPT-violating LIV terms, one can write the LIV Lagrangian from Eq. 2.1 in terms of vectors and pseudovectors [25] as

  L_LIV = −Ψ̄ (a_µ γ^µ + b_µ γ₅ γ^µ) Ψ,   (2.2)

where a_µ and b_µ are constant Hermitian matrices, in general combinations of tensor expectation values, mass parameters, and coefficients arising from the decomposition of the gamma matrices. We focus on the following CPT-violating LIV parameter, relevant for the propagation of left-handed neutrinos:

  (a_L)^µ_αβ = (a + b)^µ_αβ.   (2.3)

Since our focus is on the isotropic component of the LIV terms, we set the Lorentz indices to zero. To further simplify the notation, we will henceforth denote the parameter (a_L)⁰_αβ as a_αβ. (The presence of LIV makes it necessary to report LIV bounds in a specific frame so that the results of various experiments can be conveniently compared. Following the widely used practice in the literature, the LIV coefficients used in our analysis are defined in the Sun-centered celestial equatorial frame: the Z direction points north along the earth's rotational axis, the X direction points towards the vernal equinox, and the Y direction completes the right-handed coordinate system [70]. Observations performed in any other inertial frame of reference can be related to this Sun-centered frame via Lorentz transformations; we refer the reader to reference [73] for more details on LIV measurements performed in other frames of reference.)
Using spinor redefinitions to remove the non-trivial time derivatives in the Lorentz invariance violating Lagrangian of Eq. 2.1, and carrying out some lengthy algebra with the resulting modified Dirac equation, one can derive the LIV effective Hamiltonian relevant for ultrarelativistic, left-handed neutrino propagation through matter [71, 72, 74]:

  H = (1/2E) U diag(m₁², m₂², m₃²) U† + diag(√2 G_F N_e, 0, 0) + [a_αβ]
    = (1/2E) U diag(0, ∆m²21, ∆m²31) U† + diag(√2 G_F N_e, 0, 0) + [ã_αβ],   (2.4)

where [a_αβ] denotes the 3 × 3 matrix of LIV coefficients. The first term, containing the usual leptonic mixing matrix U and the neutrino masses m_i (i = 1, 2, 3), is the standard vacuum Hamiltonian. The second term, proportional to the Fermi constant G_F and the electron density N_e along the neutrino propagation path, originates from standard charged-current coherent forward scattering of neutrinos with electrons in earth matter. The third term, containing the LIV parameters a_αβ (α, β = e, µ, τ), incorporates the effect of LIV (and also of CPT violation). The off-diagonal a_αβ (α ≠ β) are complex, with a phase ϕ_αβ associated with each, while the diagonal parameters are real. As per convention, in the second line of Eq. 2.4 a common term (m₁²) has been subtracted from the diagonal elements of H_vac, and another common term (a_ττ) has been subtracted from the diagonal elements of H_LIV. Both of these subtractions amount to removing an overall phase factor, which has no impact on the oscillation probabilities. This implies that neutrino oscillation can effectively probe only two of the three diagonal parameters in H_LIV. In our analysis, those two parameters are ã_ee = a_ee − a_ττ and ã_µµ = a_µµ − a_ττ, while the individual value of a_ττ cannot be probed by an oscillation experiment. Thus for simplicity we take a_ττ to be zero, so that ã_ee = a_ee and ã_µµ = a_µµ. Note that any one of the three diagonal LIV parameters can be chosen to be removed from the analysis in this way. It is worthwhile to mention here that the physics of neutral-current (NC) non-standard interactions (NSI) (usually denoted by ε_αβ), which arise in neutrino mass models and introduce couplings between the neutrinos and the first-generation fermions e, u, d, has an apparent similarity with the form of the LIV Hamiltonian, thereby suggesting the mathematical mapping ε_αβ ↔ a_αβ/(√2 G_F N_e). But there is a crucial difference between these two physics scenarios, as discussed in detail in reference [75]: NC NSI is proportional to the density along the neutrino trajectory and is thus very tiny for short-baseline neutrino experiments, whereas LIV is an intrinsic effect that is present even in vacuum.
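As a concrete illustration of the structure of Eq. 2.4, the following minimal Python sketch builds the effective Hamiltonian at a fixed energy and evolves it over a constant-density baseline. It stands in for, and does not reproduce, the GLoBES + snu.c machinery used in this work; the oscillation parameter values (θ12 = 33.44°, θ13 = 8.57°, θ23 = 48.8°, δ13 = −0.68π, ∆m²21 = 7.42 × 10⁻⁵ eV², ∆m²31 = 2.514 × 10⁻³ eV²) are illustrative global-fit-like numbers, and the matter density is the quoted P2O average of 3.4 g/cc.

import numpy as np
from scipy.linalg import expm

KM = 5.0677e9                      # 1 km expressed in 1/eV (hbar = c = 1)

def pmns(t12, t13, t23, dcp):
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep, em = np.exp(1j * dcp), np.exp(-1j * dcp)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
          c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

def prob_mue(E_GeV, L_km, a_liv=None, dcp=-0.68 * np.pi, rho=3.4, Ye=0.5):
    """P(nu_mu -> nu_e); a_liv is the hermitian 3x3 matrix of a_alphabeta in eV."""
    E = E_GeV * 1e9                                   # energy in eV
    U = pmns(0.5836, 0.1496, np.radians(48.8), dcp)
    H = U @ np.diag([0.0, 7.42e-5, 2.514e-3]) @ U.conj().T / (2.0 * E)
    H[0, 0] += 7.63e-14 * Ye * rho                    # sqrt(2) G_F N_e in eV
    if a_liv is not None:
        H = H + a_liv                                 # H_LIV adds directly to H
    S = expm(-1j * H * (L_km * KM))                   # flavour evolution matrix
    return float(abs(S[0, 1]) ** 2)                   # |<nu_e| S |nu_mu>|^2

a = np.zeros((3, 3), dtype=complex)
a[0, 1] = a[1, 0] = 5e-23 * 1e9                       # a_emu = 5e-23 GeV, in eV
print(prob_mue(5.0, 2595.0))                          # standard interactions only
print(prob_mue(5.0, 2595.0, a_liv=a))                 # with a_emu switched on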
Impact of LIV parameters on probability

In this work we focus on the a_αβ, and we will now describe how they affect the oscillation probability expressions in the various channels. Since the main contribution comes from the νµ → νe oscillation channel, we discuss how P(νµ → νe) is affected by LIV. The most important LIV parameters impacting this channel are a_eµ and a_eτ, and, to a lesser extent, a_ee. Following an approach similar to references [43, 76-78], we can approximately write the νµ → νe oscillation probability as the sum of three terms,

  P(νµ → νe) ≃ P_SI + P_LIV(a_eµ) + P_LIV(a_eτ),   (3.1)

where the first term on the right hand side is the probability term corresponding to standard interactions (SI) with earth matter, while the other two terms arise from the presence of the LIV parameters a_eµ and a_eτ. The explicit forms of the three terms on the right hand side are given by Eqs. 3.2, 3.3, and 3.4. In the presence of a_ee, the replacement Â → Â[1 + a_ee/(√2 G_F N_e)] ≡ Â + 2E a_ee/∆m²31 has to be made. In order to understand the impact of the LIV parameters, we first look at the oscillation probability at the P2O baseline of 2595 km, estimated numerically using the widely used General Long Baseline Experiment Simulator (GLoBES) [79, 80] and the associated package snu.c [81, 82] with the necessary modifications. We consider normal mass ordering (NO) and take the best fit values of the oscillation parameters from the global fit [8] (summarized in Table 1). We take one LIV parameter a_αβ to be non-zero at a time (fixed at the same numerical value of 5 × 10⁻²³ GeV, with the associated CP phase ϕ_αβ = 0) to assess the role of the individual LIV parameters at the probability level, and show the results in Fig. 1. As expected, the appearance channel is most affected by the LIV parameters a_eµ and a_eτ. Compared to the standard case (black dashed curve), a_eµ increases the magnitude of P(νµ → νe), while the presence of a_eτ produces a depletion around the oscillation maximum of 4-5 GeV. This is due to the fact that both the sin δ13 and cos δ13 terms within the square brackets of Eq. 3.4 have the same sign (negative, thus decreasing P_µe), while there is a relative sign between the two such terms in Eq. 3.3, thus leading to a smaller enhancement of P_µe. The effects of a_eµ and a_eτ become qualitatively opposite in the ν̄µ → ν̄e channel. We also observe that a_ee increases or decreases the probabilities only mildly. The disappearance channel, on the other hand, is impacted only by the parameters a_µµ and a_µτ, the changes induced by them being in opposite directions for the ν and ν̄ modes. The sensitivity to the LIV parameters depends on the change in probability due to the presence of LIV, ∆P = P_LIV − P_SI. In order to have an approximate idea of the physics behind the sensitivity estimates, we focus on the dominant channel, i.e., the νµ → νe channel, and the most relevant LIV parameters a_eµ, a_eτ, and a_ee. In Fig. 2 we show, by means of a heatplot, how the absolute difference |∆P_µe| evolves with variation in the LIV parameters and in the standard CP phase δ13, for a fixed baseline and energy. In the top (bottom) row, we consider the baseline of 2595 (1300) km and the approximate first-oscillation-maximum energy of 5 (2.5) GeV for the P2O (DUNE) experiment. The light yellow end of the colour spectrum corresponds to lower |∆P_µe| (i.e., more degeneracy between SI and LIV), while the darker shades indicate a higher impact of the corresponding LIV parameter, resulting in a higher value of |∆P_µe|. In all the heatplots we see that there is little to no change in the probability for very small values of the LIV parameter, which is consistent with our expectation.
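The degenerate structures discussed next can be previewed numerically with the prob_mue sketch given above: scanning δ13 at a fixed |a_eµ| and locating the minima of |∆P_µe| reproduces the degenerate branches of the heatplots. The values found this way come from the illustrative parameters of the sketch, not from the GLoBES scan used for Fig. 2; the code below assumes prob_mue is already defined.

import numpy as np

a = np.zeros((3, 3), dtype=complex)
a[0, 1] = a[1, 0] = 5e-23 * 1e9                    # |a_emu| = 5e-23 GeV, phase 0

deltas = np.linspace(-np.pi, np.pi, 721)
dP = np.array([abs(prob_mue(5.0, 2595.0, a_liv=a, dcp=d)
                   - prob_mue(5.0, 2595.0, dcp=d)) for d in deltas])

# interior local minima of |Delta P_mue| mark the degenerate branches
mins = [np.degrees(deltas[j]) for j in range(1, len(dP) - 1)
        if dP[j] <= dP[j - 1] and dP[j] <= dP[j + 1]]
print(mins)   # expected in the neighbourhood of ~ +39 and ~ -141 degrees

# leading-order analytic estimate of Eq. 3.9 below, tan(d13) = (2/pi) tan^2(t23)
print(np.degrees(np.arctan((2 / np.pi) * np.tan(np.radians(48.8)) ** 2)))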
In the presence of a_eµ (a_eτ), we note a set of two degenerate (yellow) branches appearing at two different values of δ13. Interestingly, these degeneracies remain present irrespective of the value of |a_eµ| or |a_eτ|, and the degenerate regions are almost parallel to the LIV parameter axis. These features are more prominent for the P2O baseline than for the DUNE baseline. On the other hand, in the presence of a_ee, the P2O baseline shows an additional degeneracy approximately around a_ee ∼ 22 × 10⁻²³ GeV, but curiously this is absent for DUNE. For an analytical understanding of these features, we use Eqs. 3.1-3.4 to express |∆P_µe| in the presence of a_eµ or a_eτ as (up to overall positive prefactors)

  ∆P_µe(|a_eµ|) ∝ |a_eµ| E s13 sin 2θ23 c23 [sin δ13 − (2/π)(s23/c23)² cos δ13],   (3.7)
  ∆P_µe(|a_eτ|) ∝ |a_eτ| E s13 sin 2θ23 s23 [sin δ13 + (2/π) cos δ13].   (3.8)

Note that, since all the heatplots are generated at the first oscillation maximum, we put ∆ ≡ ∆m²31 L/(4E) ≈ π/2 in deriving Eqs. 3.7 and 3.8. ∆P_µe(|a_eµ|) (or ∆P_µe(|a_eτ|)) is directly proportional to |a_eµ| (or |a_eτ|), which clearly shows why the deviation grows linearly with the LIV parameter. Setting the square bracket of Eq. 3.7 to zero gives the degeneracy condition

  tan δ13 = (2/π)(s23/c23)².   (3.9)

Putting θ23 = 48.8°, the solutions are δ13 ≈ 39°, −141°. It is clear from the (s23/c23)² factor that for θ23 lying in the higher octant, the first solution for δ13 will move (mildly) closer to π/4, making the second solution move towards −3π/4. For the case of |a_eτ|, using Eq. 3.8 the degeneracy condition translates to

  tan δ13 = −2/π,   (3.10)

the solutions of which are roughly δ13 ≈ −33°, 147°. We note that the δ13 solutions of Eqs. 3.9 and 3.10 for the locations of the degeneracies approximately differ by a sign (as long as θ23 does not lie too far from the maximal value of π/4), or equivalently they differ by a ±π/2 phase shift. These locations of the degeneracies and the shift between the solutions for |a_eµ| and |a_eτ| are consistent with Fig. 2. The slight slanting of the degenerate branches with increasing |a_eµ| originates from subdominant higher-order terms, which we have not considered in our simplified analysis. In Fig. 2, we also note that the deviation from the standard case grows more quickly when the CP phase δ13 ∈ [−π/2, 0] (for |a_eµ|) or δ13 ∈ [0, π/2] (for |a_eτ|), manifested by the presence of darker patches around |a_eµ| or |a_eτ| ∼ 2 × 10⁻²³ GeV. These two separate quadrants for δ13 originate from the relative sign between the sin δ13 and cos δ13 terms inside the square brackets of Eqs. 3.7 and 3.8. The proportionality of Eqs. 3.7 and 3.8 to the energy suggests that the features are quantitatively more prominent for P2O than for DUNE, since the peak energy of the former is twice that of the latter (5 GeV, as compared to 2.5 GeV for DUNE). To understand the features induced by the presence of a_ee, we deduce the corresponding probability difference ∆P_µe(a_ee) (Eq. 3.11), using Eq. 3.2 with the replacement Â → Â[1 + a_ee/(√2 G_F N_e)] to account for a_ee. The cos δ13 term containing Y from Eq. 3.2 is suppressed by a factor α (= ∆m²21/∆m²31 ∼ 10⁻²) compared to the first term in Eq. 3.11, and we neglect it for simplicity. Thus the degeneracy condition (∆P_µe(a_ee) ≈ 0) in the presence of a_ee can be simplified to the vanishing of a product of two factors, I₊ and I₋ (Eq. 3.12), where â_ee = a_ee/(√2 G_F N_e). It is easy to see that I₊ cannot be zero, and for I₋ to vanish we can immediately identify a_ee = 0 as the trivial solution. To examine the possibility of further degeneracies, we note that ∆ ≈ π/2 at the peak energies of both P2O and DUNE. To find other solutions of I₋ = 0, we plot the two terms of I₋ for both DUNE and P2O as a function of the parameter â_ee in Fig. 3.
The first term is an oscillating function of $\hat{a}_{ee}$, while the second term is a constant. For DUNE, with its lower baseline and energy, the sine function (red solid) oscillates slowly and has only the trivial solution in the range shown. The corresponding sine function for P2O (blue solid) oscillates faster, given the larger baseline and energy, and thus can have a second (non-trivial) solution at a reasonably small positive value $\hat{a}_{ee} \simeq 2.2$, which translates to $a_{ee} = 2.2\sqrt{2}G_F N_e \simeq 24.8 \times 10^{-23}$ GeV. This is almost exactly the location of the second degeneracy in the top right panel of Fig. 2. The mild dependence of this degeneracy branch on the CP phase $\delta_{13}$ arises from the $\cos\delta_{13}$-term in Eq. 3.11, which we have neglected for simplicity.

In Fig. 4, we show the heatplot for $|\Delta P_{\mu e}|$ in the parameter space of $\theta_{23}$ and one LIV parameter ($|a_{e\mu}|$, $|a_{e\tau}|$, $a_{ee}$), for a fixed CP phase $\delta_{13} = -0.68\pi$. Comparing the first and middle columns, we see that $a_{e\mu}$ has a slightly bigger impact than $a_{e\tau}$. Moreover, the presence of $a_{e\mu}$ induces more deviation in the lower octant (LO), while that of $a_{e\tau}$ is apparent in the higher octant (HO). Looking at the analytical expressions for $\Delta P_{\mu e}$ in Eqs. 3.7 and 3.8, this octant dependence originates from the overall factor $c_{23}$ in the presence of $a_{e\mu}$ and $s_{23}$ in the presence of $a_{e\tau}$ (note that the factor $\sin 2\theta_{23}$ in those equations is octant-independent). In the third column of Fig. 4, $a_{ee}$ again gives rise to the additional degeneracy for P2O, which has already been explained above with regard to Fig. 2.

Simulation details

We simulate the long-baseline neutrino experiments DUNE and P2O using GLoBES [79, 80] and use the add-on snu.c [81, 82] to implement the physics of LIV. DUNE is a 1300 km long-baseline experiment running from the accelerator at the Fermilab site to a liquid argon far detector (FD) of 40 kt fiducial mass in South Dakota. The experiment is capable of using a proton beam of power 1.07 MW and of running 3.5 years each in $\nu$ and $\bar\nu$ mode (resulting in a total exposure of roughly 300 kt MW yr, corresponding to a total of $1.47 \times 10^{21}$ protons on target, or POT). The fluxes, cross-sections, migration matrices for energy reconstruction, efficiencies, etc. were implemented according to the official configuration files [83] provided by the DUNE collaboration for its simulation.

P2O (Protvino to ORCA) is a proposed long-baseline neutrino experiment with a baseline of nearly 2595 km from the Protvino accelerator complex, situated 100 km south of Moscow, to the site of ORCA (Oscillation Research with Cosmics in the Abyss), which hosts a 6 Mt Cherenkov detector located 40 km off the coast of southern France, at a mooring depth of 2450 m in the Mediterranean Sea. ORCA is the low-energy component of the KM3NeT Consortium [84], with a primary goal of studying atmospheric neutrino oscillations in the energy range of 3 to 100 GeV in order to determine the neutrino mass ordering. Currently, 10 lines (i.e., detection units) of the ORCA detector are live and taking data. The full ORCA detector is expected to have 115 lines and foresees completion in subsequent phases around 2025 [85]. Construction of the neutrino beamline and the relevant upgrades of the accelerator for the P2O experiment are expected to be completed in a few years. Assuming a favorable geopolitical situation and available funding, the P2O project in its nominal configuration might be realised during the next decade [86].
We simulate the nominal configuration of the P2O experiment using a 90 kW proton beam with a runtime of 3 yr in $\nu$ and 3 yr in $\bar\nu$ mode, corresponding to a total of $4.8 \times 10^{20}$ POT. (There are also proposals for an upgraded proton beam with 450 kW power and for the Super-ORCA detector, with a denser geometry, lower energy thresholds, and better flavour-identification capabilities [18].) The baseline mostly passes through the upper mantle of the Earth, with an average density of 3.4 g/cc and the deepest point along the beam being 134 km [87]. The fluxes, detector response parameters, detection efficiencies, signal and background systematics, etc., corresponding to our nominal P2O configuration were taken from [18, 84].

To estimate the sensitivity of the LBL experiments to the LIV parameters, we carry out a $\Delta\chi^2$ analysis using GLoBES. The analytical form of the $\Delta\chi^2$ (the Poissonian definition, which in the limit of large sample size reduces to the Gaussian form) can be expressed as

$$\Delta\chi^2 = \min_{p_{\rm test},\,\eta}\left\{ 2\sum_{i,j,k}\left[ N^{\rm test}_{ijk} - N^{\rm true}_{ijk} + N^{\rm true}_{ijk}\,\ln\frac{N^{\rm true}_{ijk}}{N^{\rm test}_{ijk}} \right] + \sum_l \frac{\left(p_l^{\rm test} - p_l^{\rm true}\right)^2}{\sigma_{p_l}^2} + \sum_m \frac{\eta_m^2}{\sigma_{\eta_m}^2} \right\}. \qquad (4.1)$$

$N^{\rm true}$ corresponds to the simulated set of event spectra for the true set of oscillation parameters $p_{\rm true}$, where only the standard scenario is assumed, with all the LIV parameters $a_{\alpha\beta}$ ($\alpha, \beta = e, \mu, \tau$) kept fixed to zero and all the standard oscillation parameters kept fixed to their best-fit values. $N^{\rm test}$ denotes the events simulated in the presence of LIV, where the LIV parameters, as well as some of the less well-measured standard oscillation parameters, are allowed to vary. The total set of standard and LIV parameters that generate $N^{\rm test}$ is denoted by $p_{\rm test}$. Table 1 summarizes the values of the standard and LIV oscillation parameters used in our analysis. Note that in generating $N^{\rm test}$ we have kept the three well-measured standard parameters $\theta_{12}$, $\theta_{13}$, $\Delta m^2_{21}$ fixed to their best-fit values. We have checked that varying these three parameters in the fit produces negligible changes to the result. We varied the other three less well-measured standard parameters $\theta_{23}$, $\Delta m^2_{31}$, $\delta_{13}$, as well as the LIV parameters $|a_{\alpha\beta}|$, $\varphi_{\alpha\beta}$ ($\alpha, \beta = e, \mu, \tau$). Throughout the analysis we assume the true mass hierarchy to be normal and vary the test value of $\Delta m^2_{31}$ over both the normal and inverted hierarchies.

Table 1. The values of the standard and LIV parameters used in our study. The first column indicates whether the parameters were kept fixed or varied around their true values. The third column shows the true values used (taken from the global-fit analysis in [8]), while the next column shows the range of variation (taken to be the current 3σ interval). The rightmost column shows the prior uncertainties used while varying the corresponding parameters in the analysis. If the 3σ upper and lower limits of a parameter are $x_u$ and $x_l$ respectively, the 1σ relative uncertainty is $(x_u - x_l)/[3(x_u + x_l)]$, expressed as a percentage [13].

The sums over the three indices $i$, $j$, $k$ signify the summations over the energy bins, the oscillation channels ($\nu_e$ appearance and $\nu_\mu$ disappearance), and the running modes (neutrino and antineutrino), respectively. For DUNE we take a total of 71 energy bins in the range 0-20 GeV, with 64 bins of uniform width 0.125 GeV in the energy range 0 to 8 GeV and 7 bins of varying widths beyond 8 GeV [83]. For P2O, we take 40 uniform bins up to 12 GeV. The first term ($N^{\rm test} - N^{\rm true}$) inside the curly braces accounts for the algebraic difference between the two sets of data, whereas the log-term gives a kind of fractional difference between them.
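A minimal sketch of the statistical machinery of Eq. 4.1 is shown below: a Poissonian $\Delta\chi^2$ between a true and a test spectrum, with a Gaussian prior on the test parameters and a single normalization nuisance parameter profiled out in the spirit of the method of pulls. The event model and all numbers are toy placeholders; GLoBES handles the real fluxes, cross-sections and detector responses.

```python
import numpy as np
from scipy.optimize import minimize

def delta_chi2(n_true, n_test_fn, p_test, p_true, sigma_p, sigma_eta):
    """Poissonian chi^2 with Gaussian priors and one normalization pull
    (method of pulls), following the structure of Eq. 4.1."""
    def cost(eta):
        n_test = n_test_fn(p_test) * (1 + eta[0])     # pulled prediction
        stat = 2*np.sum(n_test - n_true + n_true*np.log(n_true/n_test))
        prior = np.sum(((p_test - p_true)/sigma_p)**2)
        pull  = (eta[0]/sigma_eta)**2
        return stat + prior + pull
    return minimize(cost, x0=[0.0]).fun               # profile over eta

# toy usage: 40 bins, a 5% signal-normalization systematic
rng = np.random.default_rng(1)
truth = rng.uniform(5, 50, size=40)
model = lambda p: truth * (1 + 0.1*p[0])              # p[0]: toy parameter
print(delta_chi2(truth, model, p_test=np.array([0.3]),
                 p_true=np.array([0.0]), sigma_p=np.array([0.2]),
                 sigma_eta=0.05))
```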
The entire expression in the curly brackets, with summations over $i$, $j$, $k$, constitutes the statistical part of the $\Delta\chi^2$. Uncertainties in the prior measurement of the $l$-th oscillation parameter are given by the parameters $\sigma_{p_l}$. As indicated in Table 1, for the variation of $\theta_{23}$ and $\Delta m^2_{31}$ we have used prior uncertainties of 3.5% and 2.4%, respectively. For $\Delta m^2_{31}$ we have also varied the sign, to take care of possible fake solutions in the opposite mass hierarchy. We have allowed the rest of the parameters $\delta_{13}$, $a_{ee}$, $a_{\mu\mu}$, $|a_{e\mu}|$, $|a_{e\tau}|$, $|a_{\mu\tau}|$, $\varphi_{e\mu}$, $\varphi_{e\tau}$, $\varphi_{\mu\tau}$ to vary in an unrestricted manner, without any prior uncertainties. $\eta_m$ are the nuisance (systematics) parameters and $\sigma_{\eta_m}$ are the corresponding uncertainties, which arise from the detector properties. Table 2 summarizes the overall normalization uncertainties of these systematic parameters for the various signals and backgrounds used in our analysis. We assume the various signal and background systematic parameters are distributed in a Gaussian way, with mean value 0 and standard deviation $\sigma_{\eta_m}$, indicated in the second and third columns of Table 2 for DUNE and P2O, respectively.

Table 2. Overall normalization uncertainties of the systematic/nuisance parameters for the various signals and backgrounds.

  Systematics/nuisance parameter ($\eta$)       $\sigma_\eta$ (DUNE)   $\sigma_\eta$ (P2O)
  $\nu_e$ signal normalization                  2%                     5%
  $\bar\nu_e$ signal normalization              2%                     5%
  $\nu_\mu$ signal normalization                5%                     5%
  $\bar\nu_\mu$ signal normalization            5%                     10%
  $\nu_e$ background normalization              5%                     10%
  $\bar\nu_e$ background normalization          5%                     10%
  $\nu_\mu$ background normalization            5%                     10%
  $\bar\nu_\mu$ background normalization        5%                     10%
  Neutral current background normalization      10%                    10%
  $\nu_\tau$ background normalization           20%                    20%
  $\bar\nu_\tau$ background normalization       20%                    20%
  Density                                       10%                    10%

This way of treating the systematics in the $\Delta\chi^2$ calculation is known as the method of pulls [88-91]. The background normalization uncertainties include correlations among the various sources of background (contamination of $\nu_e/\bar\nu_e$ in the incident beam, flavour misidentification, neutral current, and $\nu_\tau$). We further include a 10% prior uncertainty on both the baseline densities of DUNE (2.95 g/cc) and P2O (3.4 g/cc). The final estimate of the (minimum) $\Delta\chi^2$ is obtained after varying the relevant oscillation parameters (as mentioned earlier and summarized in Table 1) and the systematic parameters along with the densities (see Table 2), and reporting the minimum value of the $\Delta\chi^2$. The procedure is known as marginalization over the relevant oscillation parameters, so that the final result gives a conservative estimate of the $\Delta\chi^2$. The $\Delta\chi^2$ thus estimated follows the frequentist method of hypothesis testing [89, 92].

Correlations among the LIV parameters

In Fig. 5, we show the 95% confidence level (C.L.) regions in the parameter space spanned by one off-diagonal LIV parameter $|a_{\alpha\beta}|$ ($|a_{e\mu}|$, $|a_{e\tau}|$ or $|a_{\mu\tau}|$) and one diagonal LIV parameter $a_{\alpha\alpha}$ ($a_{ee}$ or $a_{\mu\mu}$). Thus we assume the presence of two LIV parameters at a time in the fit. The three less well-measured standard parameters as well as the relevant LIV phases are then varied (see Table 1) to obtain the minimum $\Delta\chi^2$ (i.e., we marginalize over the three standard parameters and the LIV CP phases). For instance, for the analysis in the parameter space of $a_{ee}$ and $|a_{e\mu}|$, we vary $\theta_{23}$ and $\Delta m^2_{31}$ (sign and magnitude) with priors of 3.5% and 2.4%, respectively, and $\delta_{13}$, $\varphi_{e\mu}$ without priors in an unrestricted manner, and obtain the minimum $\Delta\chi^2$ as a function of $a_{ee}$ and $|a_{e\mu}|$. We repeat the procedure for many sets of ($a_{ee}$, $|a_{e\mu}|$) to plot the iso-$\Delta\chi^2$ contours at a C.L.
of 95% (which corresponds to a $\Delta\chi^2$ value of 5.99 for 2 d.o.f.). The blue contours show the sensitivity reach of P2O alone, while the red ones illustrate the results of combining the projected data of P2O and DUNE (which we write as (P2O + DUNE) hereafter).

Figure 5. Exclusion regions in the parameter space consisting of one diagonal (horizontal axis) and one off-diagonal (vertical axis) LIV parameter for P2O only (blue contours), DUNE only (green contours), and P2O combined with DUNE (red contours). The results are shown at the 95% confidence level (C.L.). The triplet of numbers (%) in each panel indicates the area lying (excluded) outside the 95% C.L. contours, expressed as a percentage of the total area of the parameter space considered. The numbers are shown for the three cases, blue for P2O only, green for DUNE only, and red for the combined case of (P2O + DUNE), and thus offer a measure of the exclusion capability of each experimental configuration in each relevant parameter space.

For completeness we have also shown the analysis for the DUNE-only case, although that is not the main focus of our work. We refer the interested reader to [41] for a more comprehensive analysis of the capabilities of DUNE to probe the LIV parameter space (our DUNE-only results are qualitatively consistent with [41], with some minor differences due to different choices of the oscillation-parameter values, ranges of marginalization, minimization techniques, etc.). For each of the three experimental configurations, namely P2O, DUNE, and (P2O+DUNE), in each panel we estimate the region excluded by the 95% C.L. contour (i.e., the area outside the contour with a blue, green, or red boundary, respectively), and express it as a percentage of the total area of the parameter space considered. The three numbers thus give us a quantitative measure of the improvement of the combination (P2O + DUNE) over P2O only or DUNE only in excluding the relevant parameter space at 95% C.L.; similar estimates were used in reference [78] to quantify the improvement of one experimental configuration over another in the context of non-standard neutrino interactions. The improvement is remarkable in almost all cases, covering more than 90% of the parameter space considered.

In the presence of $a_{ee}$, we observe the additional (fake) degenerate region around $a_{ee} \simeq -22 \times 10^{-23}$ GeV, which arises due to marginalization over the opposite mass hierarchy. Note that the location of this fake solution is approximately opposite in sign to the degeneracies in the corresponding probability heatplots (Figs. 2 and 4: third column, top row), where the additional degeneracies were found around $a_{ee} \simeq 22 \times 10^{-23}$ GeV. This can be qualitatively understood as follows. Without considering fluxes and cross-sections, for simplicity, the dominant statistical contribution to the sensitivity in the LIV (test) scenario involving the parameter $a_{ee}$ and another parameter, say $c$, roughly follows the corresponding probability deviation from the true standard case (in a similar spirit to the discussion in Sec. 3):

$$\Delta\chi^2(a_{ee}, c) \sim \Delta P_{\mu e}(a_{ee}) + \Delta P_{\mu e}(c) + \text{(other terms)}, \qquad (5.1)$$

where the other terms contain contributions from the $\nu_\mu \to \nu_\mu$ disappearance channel, antineutrinos, priors and systematics, which we have neglected in order to have a simple qualitative understanding. Using our previous discussion concerning Eqs. 3.11 and 3.12, we can write $\Delta P_{\mu e}(a_{ee}) \propto I_+ I_-$. Within the same mass hierarchy for the true and test scenarios, the minimum of $\Delta\chi^2(a_{ee}, c)$ is obtained at the true solution $a_{ee} \simeq 0$, making $I_-$ vanish.
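The percentage labels quoted in each panel of Fig. 5 amount to a simple area integral over the $\Delta\chi^2$ grid. A sketch of that bookkeeping (with a placeholder sensitivity surface) is given below; 5.99 is the 95% C.L. threshold for 2 d.o.f.

```python
import numpy as np

def excluded_fraction(chi2_grid, threshold=5.99):
    """Fraction of a 2-D parameter grid excluded at a given Delta-chi^2
    threshold (5.99 <-> 95% C.L. for 2 d.o.f.), as quoted in each panel."""
    return np.mean(chi2_grid > threshold)

# toy grid: quadratic chi^2 bowl over (diagonal, off-diagonal) LIV plane
a_diag = np.linspace(-25, 25, 201)      # units of 1e-23 GeV
a_off  = np.linspace(0, 25, 201)
D, O = np.meshgrid(a_diag, a_off)
chi2 = (D/8.0)**2 + (O/5.0)**2          # placeholder sensitivity surface
print(f"{100*excluded_fraction(chi2):.1f}% excluded at 95% C.L.")
```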
But while marginalizing over the opposite mass hierarchy in the test scenario, $\hat{A}$ and $\Delta$ change sign in the term $\sin[(1 - \hat{A}(1 + \hat{a}_{ee}))\Delta]/[1 - \hat{A}(1 + \hat{a}_{ee})]$ containing $a_{ee}$, and we obtain the corresponding expressions for $I_\pm$ with the signs flipped. Because of the relative changes of sign, $I_-$ can now no longer vanish, and the minimum solution is obtained when $I_+$ goes to zero instead. That occurs when $\hat{a}_{ee} = -2$, and thus $a_{ee} \simeq -22 \times 10^{-23}$ GeV. Such a degeneracy can also be observed for the DUNE-only case (green contours), which is consistent with previous LIV analyses for DUNE [41]. Although a combination with DUNE significantly constrains this additional degeneracy at 95% C.L., it still does not go away completely (we have checked that it remains at 99% C.L.). For the parameter $a_{\mu\mu}$ we see that the contours are roughly symmetric around the true solution $a_{\mu\mu} = 0$. If $a_{\mu\tau}$ is present, a combination with DUNE can probe more than 92% of the entire parameter space considered at a C.L. of 95%. This sensitivity to $|a_{\mu\tau}|$ mainly comes from the $\nu_\mu \to \nu_\mu$ disappearance channel.

Fig. 6 shows the $\Delta\chi^2$ correlations among the off-diagonal LIV parameters themselves ($|a_{e\mu}|$, $|a_{e\tau}|$, $|a_{\mu\tau}|$) and also between the two diagonal parameters $a_{ee}$ and $a_{\mu\mu}$, for P2O, DUNE and (P2O+DUNE). The improvement from the combined analysis is especially prominent for the most impactful parameter space, $a_{e\mu}$ versus $a_{e\tau}$ (top left panel of Fig. 6). At a C.L. of 95%, the (P2O+DUNE) combination can exclude 89% of the parameter ranges considered, compared to 62% for P2O alone.

In Fig. 7 (and Fig. 8), we demonstrate how efficiently the projected data from P2O, DUNE and the combined case of (P2O+DUNE) can reconstruct the standard CP phase $\delta_{13}$ and the mixing angle $\theta_{23}$, in correlation with the LIV parameters present. Here we assume the presence of one LIV parameter at a time and marginalize over the relevant LIV phase, as well as over the standard parameter not shown along the axes (see Table 1 for the ranges and priors). For instance, for the analysis in the $|a_{e\mu}|$ - $\delta_{13}$ plane, the minimum $\Delta\chi^2$ is obtained after varying $\Delta m^2_{31}$ (both magnitude and sign) and $\theta_{23}$ with priors of 2.4% and 3.5%, respectively, and $\varphi_{e\mu}$ without a prior in an unrestricted manner. Similarly, for the $|a_{e\mu}|$ - $\theta_{23}$ plane, the marginalization is carried out over $\delta_{13}$ and $\varphi_{e\mu}$ without priors in an unrestricted way, and over $\Delta m^2_{31}$ with a 2.4% prior. At the C.L. of 95%, the presence of any LIV parameter at P2O can give rise to allowed regions covering a large part of the $\delta_{13}$ space (the entire $\delta_{13}$ space for $a_{ee}$ and $a_{e\tau}$). But the combination (P2O + DUNE) significantly shrinks the allowed regions to lie around the true solution of $\delta_{13} = -122.4^\circ$. A similar observation holds in Fig. 8 for the parameter space containing $\theta_{23}$. Concerning the combined analysis of (P2O+DUNE) (i.e., the red contours in Fig. 8), although maximal mixing ($\theta_{23} = 45^\circ$) is excluded for all the LIV parameters, in the case of $a_{e\mu}$ and $a_{e\tau}$ (bottom row, first and second columns of Fig. 8) we note that allowed regions still appear in the opposite (lower) octant.
We refer the reader to [43] for a more in-depth discussion of the impacts of $a_{e\mu}$ and $a_{e\tau}$ on the $\theta_{23}$ octant. It is clear that for P2O alone, the exclusion region in the presence of $a_{e\mu}$ is greater than in the presence of $a_{e\tau}$ (67% versus 54% of the total parameter space considered, at 95% C.L.). This can be connected to the higher impact of $a_{e\mu}$ on the probability deviation $|\Delta P_{\mu e}|$ in Fig. 4.

Table 3. Bounds on the LIV parameters as obtained from the simulations of LBL data (DUNE, P2O and (P2O+DUNE) in the second, third, and fourth columns, respectively) at 95% C.L.

Fig. 9 shows the one-dimensional projection (after marginalizing away all other parameters, including the CP phases, $\theta_{23}$ and $\Delta m^2_{31}$) of the $\Delta\chi^2$ as a function of each LIV parameter individually. The results are illustrated for P2O alone (blue), DUNE alone (green) and for the combined analysis of (P2O + DUNE) (red). The $\Delta\chi^2$ values corresponding to 95% and 99% C.L. are marked with horizontal black lines. The significant increase in the steepness of the red sensitivity curves is indicative of the crucial impact of the combined analysis in constraining the LIV parameters. For $a_{ee}$, we note the lifting of the troublesome degeneracy at $a_{ee} \simeq -22 \times 10^{-23}$ GeV by the combined analysis above 95% C.L. This was not possible with the analysis done with DUNE alone (see also [41]) or with P2O alone.

Table 3 shows our final result: the constraints obtained on the five LIV parameters at 95% C.L. with the combined (P2O+DUNE) analysis, compared with the numbers obtained from DUNE or P2O alone. We note that the constraints on the diagonal parameters can be tightened significantly with the combined analysis. This is especially noticeable for $a_{ee}$, since the fake solution can be ruled out at 95% C.L., as mentioned before. For $|a_{e\mu}|$, $|a_{e\tau}|$ and $|a_{\mu\tau}|$ the bounds also improve moderately.

Some comments are in order regarding the bounds on the LIV parameters achieved by existing atmospheric neutrino data. Atmospheric neutrino experiments are sensitive to a much wider range of baselines and energies, and hence can obtain strong constraints on LIV parameters. For instance, the SK data have placed bounds on the LIV parameters at 95% C.L. [33]. Analyses of high-energy astrophysical and atmospheric neutrinos at IceCube have placed still stronger constraints, at an even higher statistical significance of 99% C.L. [35]. The next-generation IceCube-Gen2 [93] is expected to reach much tighter bounds on the LIV parameter space. On the other hand, if LBL experiments such as DUNE are also used to collect and analyse atmospheric neutrino data (in addition to beam neutrinos), they become sensitive to a much wider range of baselines and energies, and the constraints on the LIV parameters can then improve by several orders of magnitude [13]. We would also like to mention that, in comparison to LBL data, high-energy astrophysical and atmospheric neutrino experiments can be more sensitive to higher-order, energy-dependent LIV parameters, which we have not considered in the present analysis. As more neutrino data become available in the near future, it will be possible to strategically combine LBL and atmospheric neutrino data and search for the presence of LIV with an unprecedented sensitivity reach. However, in the present analysis we have focused on the capability of LBL experiments alone to probe the LIV parameters. A full combined analysis of both LBL and atmospheric (simulated or real) data is beyond the scope of the current work, and we leave it as a future project.
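The one-dimensional projections of Fig. 9 and the bounds in Table 3 follow from minimizing the $\Delta\chi^2$ grid over all axes but one and reading off the threshold crossings. The following sketch illustrates this marginalization on a toy three-parameter grid; the $\chi^2$ surface itself is a placeholder.

```python
import numpy as np

def project_1d(chi2, axis):
    """One-dimensional Delta-chi^2 projection: marginalize a chi^2 grid by
    minimizing over every axis except `axis` (cf. Fig. 9)."""
    other = tuple(i for i in range(chi2.ndim) if i != axis)
    return chi2.min(axis=other)

def bounds_at(values, chi2_1d, threshold=3.84):
    """Parameter range allowed below a Delta-chi^2 threshold
    (3.84 <-> 95% C.L. for 1 d.o.f.)."""
    ok = values[chi2_1d < threshold]
    return ok.min(), ok.max()

# toy 3-D grid over (a_ee, theta_23, delta_13); placeholder chi^2 surface
a_ee = np.linspace(-30, 30, 121)        # units of 1e-23 GeV
t23  = np.linspace(40, 52, 25)          # degrees
d13  = np.linspace(-180, 180, 73)       # degrees
A, T, D = np.meshgrid(a_ee, t23, d13, indexing="ij")
chi2 = (A/7.0)**2 + ((T - 48.8)/3.0)**2 * np.cos(np.radians(D))**2
print(bounds_at(a_ee, project_1d(chi2, axis=0)))
```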
Summary and conclusion

In this work we consider the proposed long-baseline experiment P2O, with a 2595 km baseline from the already existing accelerator complex at Protvino to the far detector situated at the site of KM3NeT/ORCA, with a fiducial mass of approximately 6 Mt and a beam spectrum peaking at around 4-5 GeV. Such a long baseline offers very high sensitivity to the neutrino mass hierarchy, and the massive far detector provides very high statistics even with a relatively moderate 90 kW proton beam. In this work, we have discussed the capability of an LBL experiment to probe fundamental theories of quantum gravity that can potentially manifest themselves in the form of Lorentz invariance violation (LIV) in this energy range.

We first discuss how the probabilities can deviate from the standard interaction (SI) scenario for the different Lorentz invariance violating parameters (which are also CPT-violating) at the P2O baseline. We then analytically derive the approximate changes in the appearance probability, $\Delta P_{\mu e}\ (= P_{\mu e}(\mathrm{SI{+}LIV}) - P_{\mu e}(\mathrm{SI}))$, induced by the individual LIV parameters. We illustrate, by means of heatplots of $\Delta P_{\mu e}$, how the LIV parameters impact the P2O baseline at its peak energy, and compare with the DUNE experiment. In the presence of $a_{e\mu}$ and $a_{e\tau}$, we find interesting degenerate branches (persisting even for larger values of the LIV parameters) at specific values of the standard CP phase $\delta_{13}$. As a function of $\theta_{23}$, we observe that the impact of $a_{e\mu}$ on $\Delta P_{\mu e}$ is slightly higher than that of $a_{e\tau}$. These features are also explained with the help of the probability expressions in the presence of these two parameters. We further find two degenerate regions at the level of the probability, at $a_{ee} \simeq 0$ and $22 \times 10^{-23}$ GeV, whereas for DUNE we find only the trivial one at $a_{ee} = 0$. We explain this by breaking down the corresponding $\Delta P_{\mu e}$ and showing that the relevant sine-term oscillates faster for P2O (due to its higher peak energy and slightly higher average baseline density), forcing a second, non-trivial solution.

We then proceed to estimate the $\Delta\chi^2$ sensitivities to the LIV parameters for P2O alone, and also discuss how significantly the results improve when the simulated data of P2O are combined with those of DUNE. For completeness, we have also compared the results with a similar analysis using the simulated data of DUNE only. The sensitivity analyses were carried out by showing correlations of the LIV parameters ($a_{ee}$, $a_{\mu\mu}$, $a_{e\mu}$, $a_{e\tau}$, $a_{\mu\tau}$) among themselves and also with the two standard oscillation parameters $\delta_{13}$ and $\theta_{23}$. For $a_{ee}$ we discuss in detail, analytically, how a crucial change of sign due to marginalization over the opposite mass hierarchy produces a fake $\Delta\chi^2$ minimum around $a_{ee} \simeq -22 \times 10^{-23}$ GeV. For all the parameter spaces we numerically estimate the area of the regions that are excluded (at 95% C.L.) by P2O (or DUNE) individually and compare it to that of the combined analysis of (P2O + DUNE). The significant quantitative increase in the excluded area for (P2O+DUNE) shows the overwhelming advantage of the combined analysis in all cases. Finally, we calculate the one-dimensional $\Delta\chi^2$ projections as a function of all five individual LIV parameters, after marginalizing over all other parameters, and estimate the 95% C.L. constraints. We find that for the diagonal LIV parameters there is a significant improvement in the constraints estimated with the combined (P2O+DUNE) analysis.
Especially noteworthy is the lifting of the degeneracy around $a_{ee} \simeq -22 \times 10^{-23}$ GeV, which is not possible with a P2O-only or DUNE-only analysis. For the off-diagonal LIV parameters, our estimated bounds also improve moderately.
The physical characteristics of the gas in the disk of Centaurus A using the Herschel Space Observatory

We search for variations in the disk of Centaurus A of the emission from atomic fine structure lines using Herschel PACS and SPIRE spectroscopy. In particular we observe the [C II](158 $\mu$m), [N II](122 and 205 $\mu$m), [O I](63 and 145 $\mu$m) and [O III](88 $\mu$m) lines, which all play an important role in cooling the gas in photo-ionized and photodissociation regions. We determine that the ([C II]+[O I]$_{63}$)/$F_{TIR}$ line ratio, a proxy for the heating efficiency of the gas, shows no significant radial trend across the observed region, in contrast to observations of other nearby galaxies. We determine that 10 - 20% of the observed [C II] emission originates in ionized gas. Comparison between our observations and a PDR model shows that the strength of the far-ultraviolet radiation field, $G_0$, varies between $10^{1.75}$ and $10^{2.75}$ and the hydrogen nucleus density varies between $10^{2.75}$ and $10^{3.75}$ cm$^{-3}$, with no significant radial trend in either property. In the context of the emission line properties of the grand-design spiral galaxy M51 and the elliptical galaxy NGC 4125, the gas in Cen A appears more characteristic of that in typical disk galaxies rather than elliptical galaxies.

Introduction

Nearby galaxies are excellent laboratories in which to study the properties of the cold interstellar medium (ISM), as the current capabilities of infrared and submillimeter observatories allow us to study them on sub-kiloparsec (kpc) scales. In particular, we can investigate the origin of key far-infrared atomic fine-structure lines on these physical scales using the Herschel Space Observatory (Pilbratt et al. 2010; Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA). Centaurus A (Cen A; NGC 5128), located only 3.8 ± 0.1 Mpc away (Harris et al. 2010), is resolved at scales of a few hundred parsecs, thus giving us the opportunity to search for variations within the interstellar gas throughout the galaxy. Cen A (13h25m27.6s, -43°01'09") has an unusual morphology: it is a giant elliptical that appears to have swallowed a smaller disk galaxy, and estimates place this merger around 380 Myr ago (Tubbs 1980). The disk provides a prominent dust lane through the center, and shows a strong warp, giving it an 'S'-like shape at infrared wavelengths (Leeuw et al. 2002; Quillen et al. 2006; Weiß et al. 2008; Parkin et al. 2012). Cen A is the closest galaxy with an active galactic nucleus (AGN) and associated radio jets, extending approximately 4° in either direction (e.g. Combi & Romero 1997; Israel 1998). It is also rich in gas, both atomic (H i) and molecular (H$_2$) hydrogen (Morganti et al. 2008; Struve et al. 2010), as well as carbon monoxide (CO), as observed in various rotational transitions (Phillips et al. 1987; Eckart et al. 1990; Quillen et al. 1992; Rydbeck et al. 1993; Parkin et al. 2012). For a detailed summary of the physical properties of the galaxy see Israel (1998) and Morganti (2010). Recently, Parkin et al. (2012) presented new photometric observations at 70, 160, 250, 350 and 500 µm using the Photodetector Array Camera and Spectrometer (PACS; Poglitsch et al. 2010) and the Spectral and Photometric Imaging Receiver (SPIRE; Griffin et al. 2010) on Herschel.
Through dust spectral energy distribution (SED) modelling they found a radially decreasing trend in dust temperature, from about 30 to 20 K. They then combined the resulting dust map with a gas map, created from CO(J = 3-2) observations from the James Clerk Maxwell Telescope (JCMT) and an H i map (Struve et al. 2010), to produce a gas-to-dust mass ratio map. This ratio also shows a radial trend, from 275 near the center of the galaxy, decreasing to Galactic values of roughly 100 in the outer disk. The high ratio in the center is attributed to local effects on the ISM from the AGN.

Here, we extend the investigation of the disk of Cen A by combining the Herschel PACS photometry with new PACS spectroscopic observations of important atomic fine structure lines to characterize the neutral and ionized gas. Fine structure lines such as [C ii](158 µm), [N ii](122 and 205 µm), [O i](63 and 145 µm) and [O iii](88 µm) (hereafter [C ii], [N ii]$_{122}$, [N ii]$_{205}$, [O i]$_{63}$, [O i]$_{145}$ and [O iii], respectively) play a crucial role in the thermal balance of the gas in the ISM. These lines provide a means of gas cooling by de-excitation via photon emission, rather than collisional de-excitation, which does not result in photon emission and thus inhibits gas cooling. The [C ii] line is a tracer of both neutral and ionized gas, as C$^+$ is produced by far-ultraviolet (FUV) photons with energy greater than 11.26 eV, and it is one of the dominant coolants among the aforementioned lines, with a luminosity of roughly 0.1-1% of the far-infrared (FIR) luminosity in typical star-forming galaxies (e.g. Stacey et al. 1985, 1993; Malhotra et al. 2001; Graciá-Carpio et al. 2011; Parkin et al. 2013), much of it arising in photodissociation regions (PDRs; Tielens & Hollenbach 1985). Observations show that as infrared color increases (thus increasing dust temperature), the heating efficiency decreases, because the dust grains and polycyclic aromatic hydrocarbons (PAHs) that provide free electrons for gas heating via the photoelectric effect become too positively charged to free electrons efficiently (Malhotra et al. 2001; Brauher et al. 2008; Graciá-Carpio et al. 2011; Croxall et al. 2012; Braine et al. 2012; Lebouteiller et al. 2012; Contursi et al. 2013; Parkin et al. 2013).

To determine the physical properties of the gas we need to compare ratios of our observed fine structure lines to those predicted by a PDR model. There are a number of models that explore the characteristics of PDRs, such as van Dishoeck & Black (1986), Sternberg & Dalgarno (1989, 1995), Luhman et al. (1997), Störzer et al. (2000), Le Petit et al. (2006) and Röllig et al. (2006), but one of the most commonly used models was first developed by Tielens & Hollenbach (1985), consisting of a plane-parallel, semi-infinite slab PDR. The gas is characterized by two free parameters: the hydrogen nucleus density, n, and the strength of the FUV radiation field, $G_0$, normalized to the Habing field, 1.6 × 10$^{-3}$ erg cm$^{-2}$ s$^{-1}$ (Habing 1968). This model has since been updated by Wolfire et al. (1990), Hollenbach et al. (1991), and Kaufman et al. (1999, 2006).

Investigations of PDRs and cooling lines in Cen A have previously been carried out by Unger et al. (2000) and Negishi et al. (2001) using the Long Wavelength Spectrometer (LWS) on the Infrared Space Observatory (ISO). Unger et al. (2000) observed Cen A at four pointings along the dust lane and found $G_0$ ~ 10$^2$ and n ~ 10$^3$ cm$^{-3}$. Using the same observations, Negishi et al. (2001) found $G_0$ = 10$^{2.7}$ and n ~ 10$^{3.1}$ cm$^{-3}$. In samples of normal star-forming galaxies, as well as samples including starburst, AGN, and star-forming galaxies, such as those of Malhotra et al. (2001) and Negishi et al. (2001), respectively, global values for $G_0$ range from 10$^2$ to 10$^{4.5}$ and n ranges between 10$^2$ and 10$^{4.5}$ cm$^{-3}$.
In this paper, we look at the PDR characteristics of Cen A on smaller scales (roughly 260 pc at the 14" resolution of the JCMT) in search of radial variations. The paper is organized as follows. We describe our data processing for the spectroscopic observations in Section 2, and discuss the general morphology of the various lines in Section 3. In Section 4 we compare our observations to theoretical models and discuss their implications. We compare the gas characteristics of Cen A with M51 in Section 5 and summarize this work in Section 6.

PACS spectroscopy

The data for the five fine structure lines observed with the PACS instrument were taken on 2011 July 9 using the unchopped grating scan mode. They were taken as part of a Herschel Guaranteed Time Key Project, the Very Nearby Galaxies Survey (VNGS; PI: C. D. Wilson). Each observation consists of a set of 7 × 1 footprints extending eastward along an orientation angle of 115° east of north. One footprint covers a field-of-view of 47" per side and the footprints are separated by 30". The PACS instrument consists of 25 spatial pixels ('spaxels'), each covering roughly 10" on the sky, and thus we obtain 25 individual spectra per footprint. The basic observational details for each line are summarized in Table 1, while the outlines of our observations are shown overlaid on a map of the total infrared intensity (see below for details) in Figure 1. We note that our observations do not cover the nucleus of Cen A, as the nucleus was observed as part of another Herschel Guaranteed Time project.

From Level 0 to Level 2 the PACS spectroscopic observations are processed with the standard pipeline for unchopped scans using the Herschel Interactive Processing Environment (HIPE; Ott 2010) version 9.2 with calibration files FM,41. For details of the pipeline see Parkin et al. (2013) or the PACS Data Reduction Guide. Level 2 cubes are exported to PACSman v3.52 (Lebouteiller et al. 2012), where each individual spectrum is fit with a second-order polynomial and a Gaussian function for the baseline and line, respectively. Lastly, we create a map by projecting the rasters onto a common grid with a pixel scale of 3.133"; the final maps are shown in Figure 2.

SPIRE spectroscopy

The SPIRE Fourier Transform Spectrometer (FTS) observation of Cen A consists of a fully sampled map at high spectral resolution. The [N ii]$_{205}$ line comes from observations using the SPIRE short wavelength (SSW) bolometer array, consisting of 37 hexagonally arranged bolometers with a combined total field-of-view of 2.6' (although only bolometers within the central ~2.0' are well calibrated). We processed the observation using HIPE v11.0 developer's build 2652 and calibration set v10.1 using the standard pipeline (see Parkin et al. (2013) for details). Next, we built a spectral cube using the HIPE function "spireProjection()" with the naive projection option, then we fit the spectral line in each pixel of the cube with a sinc function. Finally, a map is produced by integrating over each line at a resolution of ~16" and with a 12" pixel scale. We chose this pixel size to match the common pixel size we adopted for all of the maps. The final [N ii]$_{205}$ map at its native resolution is shown in Figure 2. We note that additional spectral features are present in the FTS data (see Figure 3 in Israel et al. 2014, which displays the full FTS spectrum from the central pixel).
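As an illustration of the per-spaxel fitting step performed by PACSman, the sketch below fits a second-order polynomial baseline plus a Gaussian line to a synthetic spectrum and converts the best-fit amplitude and width into an integrated line intensity. The spectrum, wavelengths and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(x, a0, a1, a2, amp, mu, sigma):
    """Second-order polynomial baseline plus a Gaussian emission line,
    mirroring the per-spaxel fit applied to the PACS spectra."""
    return a0 + a1*x + a2*x**2 + amp*np.exp(-0.5*((x - mu)/sigma)**2)

# toy spectrum: wavelength axis (um) around a [C II]-like line at 158 um
rng = np.random.default_rng(0)
wave = np.linspace(157.0, 159.0, 120)
spec = 0.5 + 0.02*(wave - 158) + np.exp(-0.5*((wave - 158.05)/0.08)**2)
spec += rng.normal(0, 0.05, wave.size)

p0 = [0.5, 0.0, 0.0, 1.0, 158.0, 0.1]           # initial guesses
popt, pcov = curve_fit(line_model, wave, spec, p0=p0)
amp, mu, sigma = popt[3:]
line_flux = amp * sigma * np.sqrt(2*np.pi)      # integrated line intensity
print(mu, line_flux)
```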
However, a full FTS spectral analysis is beyond the scope of this paper and will be presented in a later paper.

Ancillary Data

We also use previously published PACS photometry at 70 and 160 µm (Parkin et al. 2012). These data have been reprocessed up to Level 1 using the PACS photometer pipeline (Wieprecht et al. 2009) in HIPE v9.0 (calibration file set FM,41), and then passed into Scanamorphos v21 (Roussel 2013), which was used to produce the final maps. These maps were set to final pixel scales of 1.4" and 2.85" at 70 and 160 µm, respectively. From the same paper we also make use of the CO(J = 3-2) observations taken at the JCMT, and we refer the reader to that paper for details on how the CO(J = 3-2) map was constructed. Lastly, we make use of the Spitzer MIPS 24 µm data, reprocessed as described in Bendo et al. (2012).

Convolution and Re-sampling

The PACS spectroscopic maps were convolved to a common resolution matching that of our CO(J = 3-2) observations from the JCMT (14") using Gaussian kernels. The MIPS 24 µm and PACS 70 and 160 µm maps were convolved to the same resolution using the appropriate kernels from Aniano et al. (2011). All of our maps were resampled onto a map with a pixel scale of 12", such that each pixel is mostly independent. Lastly, we mask out all detections below 5σ in our spectroscopic maps to ensure robust line ratios for our analysis. Calibration uncertainties are 4% for the MIPS 24 µm photometry (MIPS Instrument Handbook, available at http://irsa.ipac.caltech.edu/data/SPITZER/docs/mips/mipsinstrumenthandbook/home/) and 5% for the PACS 70 and 160 µm maps (PACS OM). The PACS spectroscopic maps have 30% calibration uncertainties (PACS OM), while the SPIRE FTS map has a 7% calibration uncertainty (SPIRE OM). We note that calibration uncertainties are included in the reported errors unless otherwise stated.

Morphological Properties of the Line Emission

Figure 2 shows our PACS and SPIRE spectroscopic maps at their native resolution and pixel scale. We note that these maps are in units of integrated intensity and only have a 3σ cut applied, for display purposes; however, all flux measurements and analyses are carried out on the 5σ-cut maps. The [C ii] emission, tracing both neutral and ionized gas, shows a smooth decrease from near the center of the galaxy to the edge of our map. Peaks in the [C ii] emission correspond to peaks in the warm dust emission as traced by the 70 µm emission, overlaid as contours in Figure 2. The strongest emission is roughly a factor of 100 higher than in the outer part of the map, and we denote this peak the 'SE tip' in Figure 2. The peak at the SE tip has also been seen previously in the Herschel PACS 160 µm band as well as the three SPIRE photometric bands at 250, 350, and 500 µm, and in CO(J = 3-2) emission (Parkin et al. 2012). Furthermore, Quillen et al. (2006) presented Spitzer Infrared Array Camera (IRAC) photometry that demonstrates a parallelogram-shaped ring, coincident with our [C ii] observations. The total flux in our [C ii] map is (4.27 ± 1.28) × 10$^{-14}$ W m$^{-2}$ over an area of approximately 11200 square arcseconds. Our value is in fairly good agreement with Unger et al. (2000), who found a total flux for their center and south-east pointings (those which overlap our observations) of 3.83 × 10$^{-14}$ W m$^{-2}$, covering a total area of 11100 square arcseconds given ISO's 70" beam. Any disagreement is likely due to the fact that our observations are not entirely co-spatial with theirs.
Similar behaviour was seen by Parkin et al. (2013) in the nucleus of M51, where it was attributed to shocks produced by the Seyfert 2 nucleus. Cen A has a strong central active galactic nucleus (AGN); thus, it is possible we see the same type of behaviour in its center as in M51.

Figure 2 caption: North is up and east is to the left in all panels. We have applied a 3σ cut to these maps to highlight robust detections; however, we note that in our analysis we use a 5σ cut to ensure robust line ratios. Units in all images are W m$^{-2}$ sr$^{-1}$. Contours from the Herschel PACS 70 µm map tracing warm dust are overlaid on top, with levels corresponding to 3 × 10$^{-6}$, 1.5 × 10$^{-5}$, 3.0 × 10$^{-5}$, 6.0 × 10$^{-5}$, and 7.5 × 10$^{-5}$ W m$^{-2}$ sr$^{-1}$. The beam size for each line is shown as a black filled circle in the lower left corner. The 'SE tip' referred to in the text is denoted by an arrow.

Table 2. Total integrated flux for cooling lines and infrared continuum in the eastern disk of Cen A. (Notes: a - Total integrated flux of each atomic fine structure line we observed for Cen A. b - The area in square arcseconds over which each total is calculated; the variations reflect the number of good pixels included in the sum. c - The total integrated flux over the area shown in Figure 1.)

The area covered by our [N ii]$_{205}$ map is different from that of the other five lines we present here, as the observations are centered on the nucleus of the galaxy and do not extend as far east as the PACS maps. We see that there is a strong detection across the disk, with an emission peak that is roughly a factor of 40 larger than the emission detected above and below the plane. A comparison between the 70 µm contours and the [N ii]$_{205}$ line shows that the northwest extra-nuclear peak is also detected in the ionized gas. The total flux in this map is (6.4 ± 0.5) × 10$^{-15}$ W m$^{-2}$.

[C ii], [O i]$_{63}$, and $F_{TIR}$ Emission

The line ratio ([C ii]+[O i]$_{63}$)/$F_{TIR}$ gives us an indication of the heating efficiency in Cen A. The [C ii] and [O i]$_{63}$ lines are the dominant coolants in the neutral gas of PDRs. Thus, their strength tells us how many FUV photons contribute to gas heating, assuming every free electron produced via the photoelectric effect that goes into gas heating eventually results in the emission of one or more [C ii] or [O i]$_{63}$ photons. This value is then divided by the total infrared flux, which indicates how many FUV photons result in dust heating, if we assume all dust grains irradiated by FUV flux eventually re-emit infrared continuum emission. We calculated the total infrared intensity of Cen A using the Spitzer MIPS 24 µm photometry (Bendo et al. 2012), the PACS 70 and 160 µm photometry (Parkin et al. 2012), and the empirically determined equation for the total infrared intensity (or luminosity) from Galametz et al. (2013). We show a map of the total infrared intensity in Figure 1, which has a resolution of 14". This map covers the entire disk of Cen A; however, we only use the region overlapping with our spectroscopic maps for our analysis. Furthermore, we converted the intensity to flux for comparison with the fine structure line observations. We achieved this by multiplying the intensity map (in units of W m$^{-2}$ sr$^{-1}$) by the number of steradians per pixel, to obtain a flux map in units of W m$^{-2}$ pixel$^{-1}$. We note that in some studies the far-infrared flux, $F_{FIR}$, which spans 42-122 µm (e.g. Graciá-Carpio et al. 2008), is used in lieu of the total infrared flux, $F_{TIR}$ (3-1100 µm; Galametz et al. 2013).
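Schematically, the construction of the heating-efficiency map reduces to a weighted sum of the 24, 70 and 160 µm maps followed by a masked ratio. In the sketch below, the combination coefficients C24, C70 and C160 are placeholders only; the actual values must be taken from Galametz et al. (2013), whose equation is not reproduced here.

```python
import numpy as np

# Placeholder calibration coefficients -- the actual values must be taken
# from Galametz et al. (2013); they are assumptions in this sketch.
C24, C70, C160 = 2.0, 0.5, 1.3

def tir_intensity(i24, i70, i160):
    """Total infrared intensity as a linear combination of the 24, 70 and
    160 um maps (all in W m^-2 sr^-1), following the form of the
    Galametz et al. (2013) calibration."""
    return C24*i24 + C70*i70 + C160*i160

def heating_efficiency(cii, oi63, i24, i70, i160, snr_mask):
    """([C II] + [O I]63) / F_TIR, with a 5-sigma detection mask applied."""
    return np.where(snr_mask,
                    (cii + oi63) / tir_intensity(i24, i70, i160), np.nan)

# toy 10x10 maps with uniform values
shape = (10, 10)
i24, i70, i160, cii, oi63 = [np.full(shape, v)
                             for v in (1e-6, 3e-5, 2e-5, 5e-8, 2e-8)]
print(np.nanmean(heating_efficiency(cii, oi63, i24, i70, i160,
                                    np.ones(shape, bool))))
```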
The two quantities are related via $F_{FIR}$ ~ $F_{TIR}$/2 (Dale et al. 2001). The heating efficiency map is displayed in Figure 3 and does not show a significant change with increasing radius, with values ranging from 4 × 10$^{-3}$ to 8 × 10$^{-3}$ and an average of (6 ± 2) × 10$^{-3}$. Our value for this ratio is consistent with previous measurements in Cen A, as Unger et al. (2000) find a value of 6 × 10$^{-3}$ in the center and southeast regions. In other galaxies this ratio typically varies between 10$^{-3}$ and 10$^{-2}$ on global scales, as found by Malhotra et al. (2001), who studied 60 normal, star-forming galaxies, and Brauher et al. (2008), who conducted an analysis of 227 AGN, starburst, and normal star-forming galaxies observed with ISO.

We also look at the heating efficiency as a function of infrared color, 70 µm/160 µm (often used as a proxy for dust temperature; e.g. Croxall et al. 2012). For this analysis, we have divided the strip of observations used for our analyses here, and in Section 4, into eight radial bins, as shown by the color-coded schematic in Figure 4 (see also the binning sketch following this section). Looking at the data in this manner allows us to search for radial variations within these line ratios. A plot of the heating efficiency as a function of the 70 µm/160 µm color is shown in the top panel of Figure 5. Each point represents the average value in each bin, while uncertainties are estimated from the standard deviations of the quantities in each bin. The innermost bin (shown in red) has a value of ~5 × 10$^{-3}$; we see an increase in the middle bins up to almost 8 × 10$^{-3}$ (shown in blue), then a decrease in the outermost bins. In the bottom panel of Figure 5, we show a plot of the heating efficiency as a function of dust temperature, determined by Parkin et al. (2012). The innermost bins show the warmest dust. In addition, we do not see a significant trend with increasing infrared color, within uncertainties, in either parameter space. This is an interesting result because, on global scales, Malhotra et al. (2001) observe a reduced heating efficiency in galaxies with the warmest 60 µm/100 µm colors, and Brauher et al. (2008) found no overall decreasing trend in heating efficiency with increasing dust temperature. However, a decrease in heating efficiency with increasing infrared color (and thus dust temperature) has been previously observed within individual galaxies: by Lebouteiller et al. (2012) in an H ii region within the Large Magellanic Cloud, by Croxall et al. (2012) in NGC 1097 and NGC 4559, and by Parkin et al. (2013) in M51. A suppression in heating efficiency has been attributed to dust grains becoming too positively charged for the photoelectric effect to efficiently free electrons (e.g. Malhotra et al. 2001). Thus, for the observed region of Cen A's disk, the dust grains are largely unaffected by the impinging radiation field.

Molecular Gas Cooling

CO also contributes to the cooling via its rotational lines, although its contribution is small in comparison to the [C ii] and [O i]$_{63}$ cooling lines. Utilizing the CO(J = 1-0) integrated intensities reported at three positions in the disk of Cen A by Eckart et al. (1990), and measuring the corresponding CO(J = 3-2) integrated intensities in our map, we find an average CO(J = 3-2)/CO(J = 1-0) ratio of 0.42 ± 0.04 (11 ± 1) when the CO integrated intensities are in units of K km s$^{-1}$ (W m$^{-2}$). Dividing the CO(J = 3-2) map by the average CO(J = 3-2)/CO(J = 1-0) ratio, we obtain an estimate of the CO(J = 1-0) distribution.
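The eight-bin radial averaging used for Figures 5 and 6 can be written compactly as follows; the bin geometry here (circular annuli about a chosen center, with an assumed outer radius) is a simplification of the color-coded bins of Figure 4.

```python
import numpy as np

def radial_bin_profile(ratio_map, x0, y0, pix_arcsec, nbins=8, rmax=160.0):
    """Average a line-ratio map in radial bins about (x0, y0), returning
    the mean and standard deviation per bin, in the spirit of the eight
    color-coded bins of Figure 4."""
    ny, nx = ratio_map.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - x0, y - y0) * pix_arcsec        # radius in arcsec
    edges = np.linspace(0.0, rmax, nbins + 1)
    means, stds = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi) & np.isfinite(ratio_map)
        means.append(np.nanmean(ratio_map[sel]) if sel.any() else np.nan)
        stds.append(np.nanstd(ratio_map[sel]) if sel.any() else np.nan)
    return edges, np.array(means), np.array(stds)

# toy usage on a synthetic 40x40 map with 12" pixels
rng = np.random.default_rng(2)
toy = 6e-3 + 1e-3*rng.standard_normal((40, 40))
edges, mu, sd = radial_bin_profile(toy, 20, 20, pix_arcsec=12.0)
print(mu)
```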
In the top panel of Figure 6 we show the estimated line ratio CO(J = 1-0)/$F_{TIR}$ for Cen A. Similar to the ([C ii]+[O i]$_{63}$)/$F_{TIR}$ line ratio, we do not detect a significant trend with increasing radius in this line ratio. Over the area outlined in Figure 4 (for comparison with other line ratios) we find an average CO(J = 1-0)/$F_{TIR}$ ratio of (1.9 ± 0.2) × 10$^{-6}$. The average value of the [C ii]/CO(J = 3-2) line ratio (shown in the bottom panel of Figure 6) is (2.7 ± 1.7) × 10$^2$, while the corresponding average value of the [C ii]/CO(J = 1-0) line ratio is (3 ± 2) × 10$^3$ across the strip. This second value agrees within uncertainties with the average found for a sample of starburst galaxies and Galactic star-forming regions including Cen A, which is 4200 (after dividing their CO integrated intensities by the main beam efficiency) (Stacey et al. 1991). They find [C ii]/CO(J = 1-0) to be roughly 4070 for Cen A in particular.

Converting the CO flux to a molecular hydrogen mass, we can compare the line/$F_{FIR}$ ratios with recent results from Graciá-Carpio et al. (2011). They investigated the parameter space of line/$F_{FIR}$ vs. $L_{FIR}/M_{H_2}$ for a subset of the SHINING sample of galaxies. The ratio $L_{FIR}/M_{H_2}$ represents the number of stars formed per unit mass of molecular gas per unit time. We convert our CO(J = 3-2) integrated intensity to an H$_2$ mass assuming an $X_{CO}$ factor of (2 ± 1) × 10$^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, typical for the Milky Way (Strong et al. 1988), and the CO(J = 3-2)/CO(J = 1-0) ratio of 0.42 ± 0.04, calculated as described above. We do not have global measurements for the various fine structure lines, so we opt instead to measure the average $L_{FIR}/M_{H_2}$ ratio over the area shown in Figure 4.

Ionized Gas

We compare our observed [O iii]/[N ii]$_{122}$ line ratio with predictions from the H ii region model of Rubin (1985) (Figure 7), where the red, green, blue and purple dashed lines represent gas densities of 10$^2$, 10$^3$, 10$^4$ and 10$^5$ cm$^{-3}$, respectively. The [O iii]/[N ii]$_{122}$ line ratio we derive falls within a range of stellar effective temperature of approximately 3.45 × 10$^4$ to 3.62 × 10$^4$ K, which corresponds to stellar classifications of O9.5 or O9 (Vacca et al. 1996). However, we note that if the AGN contributes in part to the observed emission, the stellar classifications will shift to later types. We have chosen not to probe how [O iii]/[N ii]$_{122}$ varies as a function of U, because we are investigating the disk of Cen A rather than the nucleus. However, if the AGN were to contribute partially to the observed emission in the center, it might explain why our observed [O iii]/[N ii]$_{122}$ line ratio is higher (and thus the stellar classification earlier) than that observed in M51 (Parkin et al. 2013).

Correcting the [C II] Emission for the Ionized Gas Contribution

The emission in the [C ii] line comes from three sources: dense neutral gas, ionized gas, and diffuse neutral gas. For us to properly utilize the photodissociation region model in Section 4.3 to interpret our diagnostic far-infrared spectral lines, we need to isolate the [C ii] emission associated with the dense, neutral gas found in PDRs. The ionized gas contribution can be determined by comparing two observed line ratios, namely the [N ii]$_{122}$/[N ii]$_{205}$ and [C ii]/[N ii]$_{205}$ ratios, to a theoretical prediction for each as a function of electron density in an H ii region (e.g. Oberst et al. 2006; Parkin et al. 2013).
The low critical density of the [N ii]$_{205}$ line (approximately 44 cm$^{-3}$ at $T_e$ = 8000 K) implies that emission via this transition can originate in diffuse ionized gas as well as in higher-density ionized gas such as is typically seen in H ii regions. However, the critical density of the [C ii] line is almost identical (46 cm$^{-3}$) to that of the [N ii]$_{205}$ line; thus both lines probe ionized gas of the same density, which is key for using this method to remove the ionized gas contribution from the observed [C ii] emission. To calculate the level populations (and thus the predicted fluxes) for the two [N ii] transitions, we employ the Einstein coefficients from Galavis et al. (1997) and the collision strengths for collisions with electrons from Hudson & Bell (2004). For the [C ii] line level populations we use the Einstein coefficients of Galavis et al. (1998) and the collision strengths of Blum & Pradhan (1992). Due to the lack of accurate measurements of the gas-phase abundances of C or N, as well as of the metallicity in Cen A, we adopt Solar gas-phase abundances and assume no metallicity gradient within the region we are investigating. The abundances we choose are from Savage & Sembach (1996): C/H = 1.4 × 10$^{-4}$ and N/H = 7.9 × 10$^{-5}$.

Our observed [N ii]$_{122}$/[N ii]$_{205}$ line ratio is initially calculated for the small region of overlap between the observations of the two lines. We convolve the [N ii]$_{122}$ map to the resolution of the [N ii]$_{205}$ map (17") using a Gaussian kernel, then align them to a common pixel grid. Next, we convert the units of the [N ii]$_{122}$ map to match those of the [N ii]$_{205}$ map (Jy Hz beam$^{-1}$) and then calculate the line ratio in each of the overlapping pixels. Using a theoretical curve of the [N ii]$_{122}$/[N ii]$_{205}$ line ratio as a function of electron density for an H ii region, we determine the electron density at which our observed [N ii]$_{122}$/[N ii]$_{205}$ ratio matches the theoretical prediction, for each pixel in our line ratio map (see the inversion sketch following this section). We find a mean electron density of 6.3 cm$^{-3}$, with lower and upper limits of 0.8 and 12.3 cm$^{-3}$. Given that there is little overlap between our [N ii]$_{205}$ and [N ii]$_{122}$ maps, we choose to take the mean observed ratio and its standard deviation as the adopted [N ii]$_{122}$/[N ii]$_{205}$ measurement for the full area of our PACS observations.

To confirm the low gas density that we find using the [C ii], [N ii]$_{122}$, and [N ii]$_{205}$ lines, we have also looked at the [S iii](18.71 µm)/[S iii](33.48 µm) line ratio, which is sensitive primarily to gas densities between 10$^2$ cm$^{-3}$ and 10$^4$ cm$^{-3}$ (Snijders et al. 2007). Through the Spitzer Heritage Archive we obtained observations of Cen A taken with Spitzer's Infrared Spectrograph (IRS) from three different AORs (4939776, 4939264, and 8767488), two of which are centered on the nucleus and one of which covers a small region in the disk. The data were processed with the SMART package (Lebouteiller et al. 2010), then projected using the CUBISM package (Smith et al. 2007). In all three cases we find that the [S iii](18.71 µm)/[S iii](33.48 µm) ratio is less than roughly 0.5, which implies an ionized gas density of less than 10$^2$ cm$^{-3}$ (Snijders et al. 2007; Smith et al. 2009). However, this line ratio becomes insensitive to densities below approximately 10$^2$ cm$^{-3}$, so we cannot be more specific about the ionized gas density using just the [S iii] lines.
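The per-pixel density matching described above is, in practice, an inversion of the theoretical [N ii]$_{122}$/[N ii]$_{205}$ curve. The sketch below does this by interpolation; the tabulated curve is a placeholder shape and must in reality be computed from the cited Einstein coefficients and collision strengths.

```python
import numpy as np

# Theoretical [N II]122/[N II]205 ratio vs electron density. The grid
# values below are illustrative placeholders; the real curve must be
# computed from the atomic data cited in the text (Galavis et al. 1997;
# Hudson & Bell 2004).
ne_grid    = np.logspace(-1, 3, 200)                 # cm^-3
ratio_grid = 0.7 + 2.5*np.log10(1.0 + ne_grid/44.0)  # placeholder shape

def electron_density(obs_ratio):
    """Invert the theoretical curve: find n_e where the predicted
    [N II]122/[N II]205 ratio equals the observed one (log interpolation)."""
    return 10**np.interp(obs_ratio, ratio_grid, np.log10(ne_grid))

print(electron_density(1.0))   # toy observed ratio -> n_e in cm^-3
```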
Nonetheless, these results are consistent with the densities derived above using the [C ii], [N ii]$_{122}$, and [N ii]$_{205}$ lines, thus providing confidence in the ionized gas density we find. Based on our estimate of the electron density, we then determine the theoretical prediction for the [C ii]/[N ii]$_{205}$ ratio in ionized gas. A map of the [C ii] emission originating in ionized gas is then generated from the predicted [C ii]/[N ii]$_{205}$ ratio, our [N ii]$_{122}$ map, and our assumed constant [N ii]$_{122}$/[N ii]$_{205}$ line ratio. For comparison with the PDR model in Section 4.3, we remove the fraction of [C ii] emission originating in ionized gas from the total observed emission; in general this fraction is quite low. The majority of our map shows a contribution of roughly 10 to 20%, with the pixels showing the highest ionized gas contribution falling at the edge of the map farthest from the center of the galaxy.

Potential Non-PDR Contributions to the Observed [C II] and $F_{TIR}$ Emission

We note that there is a possibility that, of the [C ii] emission stemming from neutral gas, some may come from the diffuse ISM rather than from PDRs. However, Unger et al. (2000) looked into this possibility and concluded that less than 5% of the total [C ii] emission originated in non-PDR, diffuse gas within their ISO observations of Cen A. Furthermore, we calculate the H$_2$/H i ratio using the maps from Parkin et al. (2012) and find an average value of roughly 5 throughout the area covered by our spectroscopic strips, suggesting that the gas is H$_2$-dominated. In addition, it is possible that not all of the observed $F_{TIR}$ emission stems from PDRs; for example, a fraction could come from H ii regions. However, in the context of the PDR model considered here (see below), we find it unlikely that a significant fraction of the observed $F_{TIR}$ emission in Cen A originates in low-intensity PDRs or diffuse gas. We do not take these contributions into account in the following analysis.

The Model

A comparison between observed line ratios and a PDR model allows us to diagnose the physical properties of the PDRs from which the fine structure line emission originates. Here we choose to use the PDR model of Kaufman et al. (1999, 2006), which has been updated and expanded from the model of Tielens & Hollenbach (1985). This particular model assumes the PDR is a plane-parallel, semi-infinite slab and is parameterized by only two free variables: the hydrogen gas density, n, and the strength of the impinging far-ultraviolet (FUV) radiation field, $G_0$, normalized to the Habing field (1.6 × 10$^{-3}$ erg cm$^{-2}$ s$^{-1}$; Habing 1968). The model simultaneously treats the thermal balance, chemical network and radiative transfer, and produces a grid of predicted fine structure line strengths as a function of n and $G_0$. By comparing observed line ratios to the predicted ones we can extract the corresponding n and $G_0$. For our investigation we choose to utilize the line ratio parameter space of [C ii]/[O i]$_{63}$ versus ([C ii]+[O i]$_{63}$)/$F_{TIR}$.

In order to search for radial variations in the disk of Cen A we have again divided our observed line ratio maps into the eight bins shown in Figure 4 to measure n and $G_0$. The unweighted average observed values in each bin are overlaid on the PDR model grid in the top panel of Figure 8, with the error bars incorporating both the measurement uncertainties of the observations and the standard deviation of the data in each bin.
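Extracting n and $G_0$ from the binned ratios amounts to a $\chi^2$ comparison against the model grid. The following sketch shows that comparison; the grid extents and the analytic 'model' surfaces are placeholders standing in for the Kaufman et al. (1999, 2006) tables.

```python
import numpy as np

def fit_pdr(obs, err, grids, logn, logG):
    """Chi-square match of observed diagnostic ratios to a PDR model grid.
    `grids` maps ratio name -> 2-D array of model predictions over
    (log n, log G0); `obs`/`err` map name -> measured value and error."""
    chi2 = np.zeros((logn.size, logG.size))
    for name, model in grids.items():
        chi2 += ((model - obs[name]) / err[name])**2
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return logn[i], logG[j], chi2.min()

# placeholder model grids standing in for the Kaufman et al. tables
logn = np.linspace(1.0, 7.0, 61)      # assumed grid extent in log(n/cm^-3)
logG = np.linspace(-0.5, 6.5, 71)     # assumed grid extent in log G0
N, G = np.meshgrid(logn, logG, indexing="ij")
grids = {"cii_oi":   10**(0.5*(4 - N) + 0.1*G),
         "line_tir": 10**(-2 - 0.3*(G - 2))}
obs = {"cii_oi": 2.0, "line_tir": 6e-3}
err = {"cii_oi": 0.6, "line_tir": 2e-3}
print(fit_pdr(obs, err, grids, logn, logG))
```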
Although the observed total infrared flux is emitted from both the near and far sides of the cloud(s), because the clouds are optically thin to dust continuum infrared photons, the model assumes emission only from the side exposed to the source of FUV flux. Thus, we divide the total infrared flux, $F_{TIR}$, by a factor of two, as recommended by Kaufman et al. (1999). The resulting values of n, $G_0$ and the temperature at the surface of the PDR, T, are presented in Table 4 under the "Uncorrected" heading.

The PDR model assumes the [C ii] emission originates only in neutral gas, but as described in Section 4.1, [C ii] emission can be produced in both neutral and ionized gas. Thus, to properly compare our observations to the model we need to remove the 10-20% contribution from the ionized gas. We also need to make a correction to the [O i]$_{63}$ observations that stems from the geometry of having many PDRs within a given observation of an extragalactic source. We see PDRs at all orientations with respect to our line of sight, but when a spectral line is optically thick and the cloud is lit from behind, we will not observe emission from that line, as is the case for the [O i]$_{63}$ line. Kaufman et al. (1999) state that, as a result of the optically thick line and the various PDR orientations, we only observe about half of the total [O i]$_{63}$ emission produced, while the remaining half radiates away from the line of sight. Following the recommendation of Kaufman et al. (1999), we increase our observed [O i]$_{63}$ flux by a factor of 2, as we have previously done for M51 (Parkin et al. 2013). We caution the reader that there is some uncertainty in this correction factor that should be kept in mind when interpreting the following results. We show the fully corrected line ratios compared to the PDR model in the middle panel of Figure 8 and tabulate the results in Table 4. We see that with these changes the data points shift down and slightly to the right, corresponding to increases in both $G_0$ and n. We find that this parameter space gives solutions for n and $G_0$ consistent with those derived from the plot in the bottom left panel of Figure 8. This further suggests that our assumption that the [O i]$_{63}$ line is optically thick is valid.

Results

In Parkin et al. (2012) a radially decreasing trend in both the dust temperature and the gas-to-dust mass ratio was reported, implying some influence on the surrounding ISM by the central AGN in Cen A. Interestingly, we do not see a radial trend in the density of hydrogen nuclei in PDRs, n, nor in the strength of the interstellar radiation field impinging onto the PDR surfaces, $G_0$. Even within one standard deviation of the mean value in each bin, there is little trend with increasing radius from the center (only the outermost bin shows a significant deviation from the other bins; however, this may be due to the small number of pixels in that bin). Correspondingly, the surface temperature of the PDRs also does not show a radial trend, in contrast to the dust temperature, which decreases from 26.5 K in the innermost regions of the area outlined in Figure 4 to 20.5 K in the outermost region. These results suggest that the physical properties of the molecular clouds nearest the center in our observations are not being strongly affected by the AGN. The discrepancy between the PDR temperature and the dust temperature may be explained in part by estimating the 'observed' value of $G_0$ using the total infrared intensity.
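For concreteness, the three corrections described above (removal of the ionized [C ii] fraction, doubling of [O i]$_{63}$, halving of $F_{TIR}$) can be bundled as below before the model comparison. The input fluxes in the usage line are arbitrary toy values.

```python
import numpy as np

def corrected_ratios(cii, oi63, tir, f_ion=0.15):
    """Apply the corrections described in the text before comparing with
    the PDR model: remove the ionized-gas fraction of [C II] (10-20%),
    double [O I]63 for optical-depth/geometry effects, and halve F_TIR."""
    cii_pdr  = (1.0 - f_ion) * cii    # PDR-only [C II]
    oi_corr  = 2.0 * oi63             # Kaufman et al. (1999) factor of 2
    tir_corr = 0.5 * tir              # one-sided slab emission
    return cii_pdr / oi_corr, (cii_pdr + oi_corr) / tir_corr

print(corrected_ratios(cii=4e-8, oi63=2e-8, tir=1e-5))
```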
Following the method presented by Kramer et al. (2005), we calculate the average observed value of G 0 in each bin across the eastern disk of Cen A and present the results in Table 5.

(Notes to Table 4: The values reported for log G 0, log(n/cm^-3), and T show the best-fitting range from the model grid in brackets. The lower limits on these values are calculated by subtracting the lower uncertainty from the lower end of the best-fitting range, while the upper limits are calculated by adding the upper uncertainty to the upper end of the best-fitting range. (a) Angular distance from the center of the galaxy to the approximate center of each bin, along an orientation angle of ~116° east of north. We remind the reader that the bins are shown schematically in Figure 4.)

Interestingly, the FUV radiation field predicted by the dust emission is consistent within uncertainties with that determined by the model for all bins except the outermost bin. This likely indicates that PDRs dominate the total infrared emission in most, if not all, of the eastern disk. The value of G 0 calculated using the observed TIR integrated intensity is lower than that determined by the PDR model in the outermost bin. This could be due to a lower beam filling factor for the dust emission associated with PDRs, compared to the colder, more diffuse dust component.

One possible explanation for this might be the high inclination of Cen A with respect to the line of sight. Cen A has an inclination of roughly 75° (Quillen et al. 2006), so it is nearly edge-on. If we were only diagnosing the characteristics of clouds on the nearest side of the galaxy, we might not observe any effects of the AGN on the surrounding clouds. However, while we believe the [O i] 63 line is optically thick, [O i] 145 is not, and it is unlikely that the [C ii] and F TIR emission are optically thick either. Thus, it is more likely that any effects the AGN might have on the surrounding gas are diluted because we are integrating emission over clouds through the arm and interarm regions, as well as any clouds in the vicinity of the AGN, along our line of sight. Another possibility to explain why the dust is affected by the AGN but not the PDR gas is that a significant amount of dust may reside in the diffuse ISM. This dust could be more strongly affected by the X-rays produced by the AGN than the gas located within PDRs.

Inferred Physical Conditions from PDR Modelling

In Section 4.3 we found that the average values of G 0 and n across the disk ranged from ~10^1.75 to 10^2.75 and ~10^2.75 to 10^3.75 cm^-3, respectively. These results are consistent with those previously published for Cen A by Unger et al. (2000), who found G 0 ~ 10^2 and n ~ 10^3 cm^-3, as well as by Negishi et al. (2001), who found G 0 = 10^2.7 and n = 10^3.1 cm^-3. The properties of the molecular clouds are also consistent with those found by large surveys on global scales. The 60 galaxies in the Malhotra et al. (2001) sample show 10^2 ≤ G 0 ≤ 10^4.5 and 10^2 cm^-3 ≤ n ≤ 10^4.5 cm^-3, while the full sample of Negishi et al. (2001) shows a range of 10^2 to 10^4 for both n and G 0, where n is in units of cm^-3. We also compare our results to those found for other individual galaxies. In Figure 9 we plot the locations of Cen A, M51, and several other galaxies for which PDR characteristics are available in the literature in the G 0 -n parameter space. The PDRs in Cen A are consistent, within uncertainties, with those of the Seyfert 1 galaxy NGC 1097 (Croxall et al. 2012), the spiral galaxy NGC 4559 (Croxall et al. 2012), NGC 6946 and NGC 1313 (Contursi et al. 2002), and M33 (Mookerjea et al. 2011).
We also see that Cen A has a lower value of G 0 than the starburst galaxy M82 (Contursi et al. 2013), but a higher value of G 0 than M33. Overall, Cen A is in fairly good agreement with the range of values of G 0 and n found in other sources.

Given that Cen A is an elliptical galaxy that has merged with a disk galaxy, it is useful to also compare it to other early-type galaxies. In contrast to the values of G 0 and n found in normal or starbursting galaxies, Wilson et al. (2013) found conditions in the elliptical galaxy NGC 4125 that suggest its fine structure line emission may arise largely outside PDRs. We choose to compare these line ratios for NGC 4125 with our uncorrected results for Cen A because the [C ii] emission from NGC 4125 was not corrected for ionized gas. In fact, it is likely that NGC 4125 is ionized-gas dominated, given that only an upper limit for [O i] 63 is determined while there are significant detections in [N ii] 122 and [C ii]. Furthermore, Welch et al. (2010) find only an upper limit on CO emission for NGC 4125. A study of 12 early-type galaxies by Crocker et al. (2011), and a study by Crocker et al. (2012) focusing on a subsample of early-type galaxies from the ATLAS 3D survey, find that the star-forming properties and diagnostic CO line ratios of early-type galaxies are similar to those of normal star-forming spiral galaxies. Thus, Cen A seems to present a "more classical" ISM than NGC 4125 when compared to samples of elliptical and lenticular galaxies, as well as spiral galaxies.

(Figure 9 comparison data: M51 (Parkin et al. 2013), M82 (Contursi et al. 2013), NGC 1097 and NGC 4559 (Croxall et al. 2012), NGC 6946 and NGC 1313 (Contursi et al. 2002), M33 (Mookerjea et al. 2011), and the surveys of Malhotra et al. (2001) and Negishi et al. (2001).)

Comparison to M51

Parkin et al. (2013) investigated the same atomic fine structure lines in the central ~2.5′ of M51 by dividing the galaxy into four distinct regions: the nucleus, center, arm, and interarm regions. They discovered a radial trend in both the fraction of ionized gas (from about 80% in the central region of the galaxy down to 50% in the spiral arm and interarm regions) as well as in the properties of the molecular clouds, n, G 0, and T. However, they also discovered that, in addition to the radial trend, the molecular clouds in the arm and interarm regions displayed the same physical characteristics despite differing star formation rate surface densities. We now discuss the similarities and differences between the properties of the gas in M51 and Cen A.

To give any meaning to this comparison we first need to consider the star formation rate (SFR) and star formation rate surface density (SFRD). We estimate the SFR of Cen A by using the equation derived empirically by Li et al. (2013) from the luminosity of the Herschel PACS 70 µm map (their equation (4), with the calibration constant determined for their combined dataset as listed in their Table 5), where the SFR is given in M⊙ yr^-1 and the luminosity at 70 µm is given in erg s^-1. With this equation, we obtain a total SFR in the region outlined in Figure 4 of approximately 0.29 M⊙ yr^-1. Li et al. (2013) caution that the SFR determined with this equation may be overestimated by up to 50%. Indeed, Equation 2 assumes that L(70) is entirely associated with recent star formation, while up to half of the observed L(70) could be associated with the older stellar population. Thus, we conservatively assume that half of the 70 µm emission is not associated with current star formation. This leads to an observed SFR of ~0.14 M⊙ yr^-1.
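The sketch below illustrates the two bookkeeping steps described here and in the next paragraph: a linear 70 µm SFR calibration with a 50% old-population correction, followed by an inclination deprojection to obtain the SFRD. The calibration constant C70 is a placeholder, not the published Li et al. (2013) value, so the printed numbers are illustrative only.

```python
import numpy as np

L70 = 3.0e42   # hypothetical 70 micron luminosity of the region, erg/s
C70 = 1.0e43   # placeholder calibration constant, erg/s per (Msun/yr)

sfr = L70 / C70          # SFR assuming all of L(70) traces recent star formation
sfr_corr = 0.5 * sfr     # conservatively assign half of L(70) to older stars

area_proj = 12.3                       # projected area of the region, kpc^2
incl = np.deg2rad(75.0)                # inclination of Cen A
area_phys = area_proj / np.cos(incl)   # deprojected physical area, kpc^2

sfrd = sfr_corr / area_phys            # SFR surface density, Msun/yr/kpc^2
print(f"SFR = {sfr_corr:.3f} Msun/yr, SFRD = {sfrd:.4f} Msun/yr/kpc^2")
```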
Next we need to estimate the SFRD for Cen A. The inclination angle of Cen A is approximately i = 75° (Quillen et al. 2006); thus, we divide the observed area in Figure 4 (~12.3 kpc^2) by cos i to estimate the physical area of the region covered by the fine structure lines. This gives an SFRD of Σ(70) = 0.01 M⊙ yr^-1 kpc^-2. For consistency, we apply the same equation to M51 using only the area covered by the fine structure line observations in Parkin et al. (2013), which is roughly 49 kpc^2. We obtain a value for the SFR in this region of M51 of 4.85 M⊙ yr^-1, and thus an SFRD of ~0.05 M⊙ yr^-1 kpc^-2 (again assuming 50% of the 70 µm emission is from recent star formation). In comparison, Kennicutt (1998) reports a global mean SFR density of approximately 0.02 M⊙ yr^-1 kpc^-2 for M51, while Kennicutt et al. (2007) find a range of SFRDs between 0.001 and 0.4 M⊙ yr^-1 kpc^-2 for 257 apertures centred on H ii regions within M51. Thus, the SFRD in Cen A is lower than that of the region of M51 mapped in the atomic fine structure lines; however, we caution that there are large uncertainties in these estimates, so the difference may not be as significant as it appears.

With the SFRDs of both galaxies in mind, we can first compare the heating efficiency as a function of the 70 µm/160 µm ratio between the two galaxies. In Parkin et al. (2013) it was shown that the average value of the heating efficiency was about 5 × 10^-3 in the arm and interarm regions of M51, decreasing to 3 × 10^-3 in the nucleus. In Figure 5 we see that this ratio is slightly higher in Cen A than in M51, with a value of 5 × 10^-3 in the bins closest to the AGN, a peak of 7.5 × 10^-3 in the middle of the strip, and a value of 6 × 10^-3 in the outermost bins. Next, we compare the two galaxies in the G 0 -n parameter space in Figure 10. The values of n and G 0 in Cen A are generally consistent with those of the arm and interarm regions of M51 within uncertainties, even for the innermost bins in our observations. However, the nuclear and center regions of M51 have slightly higher values of n and G 0 (10^3-10^4 cm^-3 and 10^2.75-10^3.75, respectively) and a higher ionized gas fraction (by up to a factor of 4) than observed in the disk of Cen A (see Figure 9). This is an interesting result because both galaxies have active centers, with M51 containing a Seyfert 2 nucleus (Ho et al. 1997); thus, we might expect similar properties in their central regions. We note that we do not have observations directly of the nucleus of Cen A; however, the result stands even if we ignore the nucleus region of M51, because the center region contains molecular clouds with higher density and is exposed to a stronger radiation field than in Cen A. This might also be due to the higher fraction of ionized gas in M51 than in Cen A. If M51 has a larger population of massive young stars, they would produce more FUV radiation and thus more H ii regions. Alternatively, the difference could be another consequence of the high inclination of Cen A: if there is a stronger radiation field affecting clouds near the center of the galaxy, it may be diluted by the weaker fields contributing along the line of sight. Investigating additional galaxies with active nuclei could confirm which is the more likely scenario.

The [O iii]/[N ii] 122 line ratio indicates that the youngest stars in Cen A are hotter (O9.5 or O9) than in M51 (B0; Parkin et al. 2013), based on the stellar classifications from Vacca et al. (1996).
However, this apparent discrepancy may be due to lower signal-to-noise in the M51 observations of the [O iii] line than in Cen A, suggesting that the observed [O iii]/[N ii] 122 ratio in M51 from Parkin et al. (2013) should be considered a lower limit. It is also possible that the ratio observed in Cen A is a combined effect of the different stellar populations and of a stronger AGN, thus making the apparent stellar classification earlier than it really is. We conclude that the physical characteristics of the PDRs in the molecular clouds of Cen A are reasonably similar to those found in normal, star-forming galaxies, although there seem to be a few noticeable differences between it and M51 based on our two data sets.

Conclusions

We have presented new spectroscopic observations of the unusual elliptical galaxy Centaurus A from the Herschel PACS instrument. These observations focus on important atomic cooling lines originating in both neutral ([C ii] (158 µm) and [O i] (63 and 145 µm)) and ionized ([O iii] (88 µm) and [N ii] (122 µm)) gas.

We find that the heating efficiency in the disk, represented by the ([C ii]+[O i] 63)/F TIR line ratio, ranges from 4 × 10^-3 to 8 × 10^-3, consistent with values determined in galaxies on global scales, as well as on resolved scales in other individual galaxies. However, we do not observe a significant decrease in the heating efficiency with increasing dust temperature, as represented by the 70 µm/160 µm color, nor do we observe a suppression of the heating efficiency in the vicinity of the nucleus. We also find that the heating efficiency is slightly higher in Cen A than in the grand-design spiral galaxy M51, suggesting the dust grains and PAHs in the PDRs of Cen A are more neutral than those in M51. Furthermore, the line ratio [O iii]/[N ii] 122 reveals that the youngest stars are of a slightly earlier stellar type than those in M51, thus producing a harder radiation field in the disk of Cen A. However, there is a possibility that the AGN is partially contributing to the observed emission, resulting in an earlier stellar type classification than is actually present.

A comparison between a PDR model and our observations reveals that the strength of the FUV radiation field incident on the PDR surfaces ranges from ~10^1.75 to 10^2.75 and the hydrogen gas density ranges from ~10^2.75 to 10^3.75 cm^-3, in agreement with typical values in other star-forming galaxies, including M82, which has a central starburst. However, the conditions producing the fine structure lines in Cen A are distinctly different from those in the elliptical galaxy NGC 4125, where the gas may be completely ionized. Contrary to M51, we do not see a significant radial trend in either n or G 0. Furthermore, while the results from the PDR modelling for Cen A agree with those for the arm and interarm regions in M51, the central region of M51 shows higher values of n and G 0. Observations of the nucleus of Cen A in the important fine structure lines may reveal a similar trend; however, we point out that in the central region of M51 up to 70% of the [C ii] emission originates in diffuse ionized gas, while in Cen A this fraction is only 10-20%, which may also explain the differences between the two galaxies. We conclude that the disk of Cen A exhibits properties in its PDRs that are similar to those of other normal disk galaxies, despite its unusual morphological characteristics.

T. J. P. thanks the anonymous referee for his/her comments, which have been beneficial to the research presented in this paper.
The research of C. D. W. is supported by the Natural Sciences and Engineering Research Council of Canada and the Canadian Space Agency. I. D. L. is a postdoctoral researcher of the FWO-Vlaanderen (Belgium). PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC and UKSA (UK); and NASA (USA). HIPE is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. The James Clerk Maxwell Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. This work is based, in part, on observations made with the Spitzer Space Telescope, obtained from the NASA/IPAC Infrared Science Archive, both of which are operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com.
Vocal-Tract Resonances as Indexical Cues in Rhesus Monkeys

Summary

Vocal-tract resonances (or formants) are acoustic signatures in the voice and are related to the shape and length of the vocal tract. Formants play an important role in human communication, helping us not only to distinguish several different speech sounds [1], but also to extract important information related to the physical characteristics of the speaker, so-called indexical cues. How did formants come to play such an important role in human vocal communication? One hypothesis suggests that the ancestral role of formant perception (a role that might be present in extant nonhuman primates) was to provide indexical cues [2-5]. Although formants are present in the acoustic structure of vowel-like calls of monkeys [3-8] and implicated in the discrimination of call types [8-10], it is not known whether they use this feature to extract indexical cues. Here, we investigate whether rhesus monkeys can use the formant structure in their "coo" calls to assess the age-related body size of conspecifics. Using a preferential-looking paradigm [11, 12] and synthetic coo calls in which formant structure simulated an adult/large- or juvenile/small-sounding individual, we demonstrate that untrained monkeys attend to formant cues and link large-sounding coos to large faces and small-sounding coos to small faces; in essence, they can, like humans [13], use formants as indicators of age-related body size.

Results

Though the whole acoustic spectrum of vowel sounds is ideal for our categorization of speech [14], the lowest-dimensional representations rely on vocal-tract resonances, or formants [15]. Formants are not only important phonetic elements of speech, allowing us to distinguish different vowel sounds, but also carry important information related to the physical characteristics of the particular speaker. In humans, both statistical pattern recognition [16, 17] and psychophysics [13, 18-23] have suggested that formants are significant contributors to these indexical cues. It is likely, then, that detecting formants could have provided ancestral primates with indexical cues necessary for navigating the complex social interactions that are the essence of primate societies. One important indexical cue is body size. Formant cues related to body size could be used by monkeys to determine the sex (in sexually dimorphic species), degree of potential threat (e.g., whether a competitor is larger or smaller), and/or age of an individual, as such cues do for human listeners [13, 18, 20, 21].

Formants are the result of acoustic filtering by the supralaryngeal vocal tract: the nasal and oral cavities above the vocal folds. During vocal production, pulses of air generated by the rapid movement of the vocal folds produce an acoustic signal. The frequency of these pulses, the glottal-pulse rate, determines the fundamental frequency of the signal, which in turn is perceived as pitch. As the signal passes through the supralaryngeal vocal tract, it excites resonances, resulting in the enhancement of particular frequency bands; these are the formants. The length of the vocal tract determines, in part, which frequency bands are enhanced [2, 15]: the frequency of, and the spacing between, successive formants decreases with increasing vocal-tract length. Because vocal-tract length scales with body size in humans [24], formants are often reliable cues to this physical feature [13, 18, 20, 21].
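The inverse relation between vocal-tract length and formant frequency and spacing can be made concrete with an idealized uniform-tube model (closed at the glottis, open at the lips), for which F_n = (2n - 1)c/4L and the average spacing between adjacent formants is c/2L. This is only a textbook approximation, not the synthesis procedure used in the study; the two lengths below match the simulated vocal tracts described in the next section.

```python
import numpy as np

C = 35000.0  # approximate speed of sound in warm, moist air, cm/s

def tube_formants(length_cm, n_formants=4):
    """Resonances of an idealized uniform tube closed at the glottis
    and open at the lips: F_n = (2n - 1) * c / (4 L)."""
    n = np.arange(1, n_formants + 1)
    return (2 * n - 1) * C / (4.0 * length_cm)

for L in (10.0, 5.5):  # the two simulated vocal-tract lengths, cm
    f = tube_formants(L)
    spacing = C / (2.0 * L)  # average spacing between adjacent formants
    print(f"L = {L:4.1f} cm: formants ~ {np.round(f).astype(int)} Hz, "
          f"spacing ~ {spacing:.0f} Hz")
```

Running this shows the longer (10 cm) tract producing lower, more closely spaced formants (875, 2625, 4375, 6125 Hz) than the shorter (5.5 cm) tract, exactly the qualitative pattern the study exploits.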
Acoustic analyses of rhesus monkey vocalizations reveal that these calls also have prominent formant structure [3, 5, 25] and that this spectral structure could, in theory, provide monkeys with indexical cues about their conspecifics, including information about their body size [3]. Here, we explicitly test the hypothesis that rhesus monkeys use formants as salient acoustic cues to assess the age-related body-size differences of conspecifics.

A direct, experimental approach for assessing the role of formants includes the use of vocal synthesis methods in which the formant frequencies of a call can be manipulated independently of other acoustic cues (e.g., the fundamental frequency, or glottal-pulse rate) [26]. Only recently have such synthetic vocalizations been used successfully in animal playback experiments (whooping cranes [27], red deer [28], and rhesus monkeys [29]). Along similar lines, we used naturally produced rhesus monkey "coo" calls as models and a speech vocoder [30] to synthesize versions of these calls in which the glottal-pulse rate and all other acoustic variables (e.g., duration and amplitude envelope) were held constant while the formant frequencies were shifted up or down. Figure 1A shows the spectrograms of a single coo synthesized with two different vocal-tract lengths (10 cm and 5.5 cm). Note how the formants shift down and become more concentrated for long vocal-tract lengths, and shift up and spread out for short vocal-tract lengths, whereas the overall shape of the amplitude envelope remains unchanged. The shift in formant spacing is also evident in the power and linear prediction spectra for the two vocal-tract lengths (Figure 1B).

To determine whether rhesus monkeys use formant cues to assess age-related differences in conspecific body sizes, we adopted a preferential-looking paradigm. Previous work has established that, like human infants (e.g., [31, 32]), rhesus monkeys naturally prefer to look at a visual stimulus that corresponds to the auditory stimulus that they hear [11, 12]. In the present context, we tested whether our monkey subjects would preferentially attend to a video display showing a large, older monkey (sexually mature, 13 yr old) versus a small, younger monkey (juvenile, 6 yr old) producing a coo vocalization (Figure 1C) when they heard a coo produced from a simulated long vocal tract, and vice versa (Figures 1A and 1B). Monkeys were seated in front of two LCD monitors and a hidden speaker located between them and at the same height. One monitor displayed a video of the face of the large monkey producing a coo call, and the other monitor displayed the face of the small monkey producing a coo call. We counterbalanced all pertinent variables in the experiment. Both videos were played synchronously in a continuous loop for 60 s. Videos were edited such that the onset and offset of each monkey's mouth movements were synchronous. Synchronously with the videos, the subjects heard a coo that was from a long vocal tract (10 cm) or a short vocal tract (5.5 cm) and was based on a call from a third individual (Figures 1A and 1B). The call of this third individual was based on one of two coo calls from other individuals of different ages (a sexually mature, 11-yr-old adult and a 6-yr-old juvenile), to eliminate any chance that the subjects could match the call with the dynamic faces by using some other individual-specific articulatory cue or some age-related acoustic cue(s) independent of formants.
Although only the heads were visible in these videos, subjects could putatively assess overall size by features of the face (their size or the relative positions of facial features) or by comparing the head size relative to parts of the chair in which the vocalizing monkeys were seated (Figure 1C). Head size can be used as a proxy for overall body size because there is a strong correlation between skull size and body size (as measured by either weight or length), and thus with vocal-tract length and formant spacing [3]. Because all visual and auditory components were synchronized and identical in both duration and overall amplitude, amodal cues could not be used to make a match. Two sets of such audiovisual stimuli were generated and used in these experiments; that is, there were two coo calls from two differently aged and sized individuals, each manipulated to sound large and small and then paired with the videos. Thus, our paradigm addressed whether monkeys would preferentially attend to the dynamic face that was approximately matched in size to the coo call that simulated that body size.

Monkeys looked at the matching screen for 58.4% of the total time they spent looking at either screen (match: 13.08 ± 1.45 s; nonmatch: 10.26 ± 1.49 s); this proportion differed significantly from chance [one-sample t test, t(23) = 2.67, p = 0.014] (Figure 2A). This ~3 s difference, although seemingly small, is robust in the context of the preferential-looking method and is similar to differences reported for similar experiments in both humans [31, 32] and monkeys [11, 12]. With the percentage of total looking time to the match screen used as the dependent variable, an ANOVA was conducted to explore any possible interactions among four primary variables (side of screen [left versus right], vocalizer [acoustic signal of monkey 1 versus monkey 2], face [visual signal of monkey 1 versus monkey 2], and vocal-tract length [long versus short]). All main effects and interactions were nonsignificant. Thus, there were no response biases toward the left or right screen, the stimulus exemplars (the calls or the faces used), or the size of the monkey on the matching screen (e.g., monkeys did not look longer overall when the matching screen showed a large monkey). Nineteen out of twenty-four monkeys in the present experiment preferentially attended to the dynamic face that best matched the body size simulated by the coo vocalization played through the speaker (Figure 2B; sign test, p = 0.003). These results demonstrate that rhesus monkeys can, without any training whatsoever, use formant structure to assess the age-related body size of conspecific individuals.

(Figure 1C caption: Still frames extracted from the videos used in the preferential-looking experiments. The top row shows frames from the large monkey. Videos were synchronized and edited so that the monkeys appeared to be synchronously producing the coo vocalization shown in (A).)

Discussion

Previous behavioral studies demonstrated that trained baboons [33] and macaques [34-36] can discriminate different human vowel sounds, presumably on the basis of formant-frequency differences. Recently, Fitch and Fritz [29] significantly extended these findings by showing that rhesus monkeys can, without training, discriminate differences in the formant structure of their own conspecific calls.
However, a demonstration that particular sorts of features appear in species-typical vocalizations, or that animals can attend to such features, is (though of great importance) not equivalent to showing that they are functionally significant to the animals in question. The functional significance of formants in monkey vocalizations was first suggested by the studies of Owren [9, 10], who showed that trained vervet monkeys could use formants to distinguish between their alarm calls (akin to the way in which humans may discriminate speech sounds). The results of our experiments suggest that rhesus monkeys can not only spontaneously discriminate changes in formant structure within a call type (à la [29]), but can also use these differences in formant structure as indexical cues, that is, to assess the age-related size of a conspecific individual. Although body size is just one indexical cue among many that may be encoded in the formant frequencies of monkeys, our data show that, as in humans [13, 18, 20, 21], acoustic cues that are the product of vocal-tract length can be used to estimate body size. These data are the first direct evidence for the hypothesis that formants embedded in the acoustic structure of nonhuman primate calls provide cues to the physical characteristics of the vocalizer [3-7].

Rhesus monkeys and humans are not alone in this regard. One other nonhuman species perceives a link between formant structure and body size: red deer, Cervus elaphus. Recent studies of red deer males during their mating season show not only that red deer roars contain formant structures that are indicators of a male's body size and fitness [37], but also that male red deer are more attentive and, in some cases, will reply with more roars when they hear synthetic male roars with lower formant frequencies (simulating a large stag) [28]. Indeed, red deer are able to "exaggerate" their apparent size by actively lowering their larynx during vocal production, thereby creating a longer vocal tract (and thus lower formant frequencies) [38]. Nonhuman primates are not known to be able to actively lower their larynx in this manner during vocal production.

Taken together, the fact that rhesus monkeys and red deer can both use formant cues to assess body size raises the question: is their common perceptual ability the result of convergent evolution (i.e., did they evolve independently) or common ancestry (i.e., do all or most mammals share this capacity)? If all mammals were endowed with the capacity to assess body-size cues (age-related or otherwise) via formant frequencies, it would suggest that even in mammals whose own vocalizations lack formant structure, formant discrimination would still be evident. For example, in small mammals (including small primates, such as New World marmosets or squirrel monkeys) that have short vocal tracts and high-frequency calls, formant structure is simply not present in their vocalizations (see [29] for details regarding why this is so), and thus formant perception in these animals would exist without purpose, perhaps as the nonadaptive by-product of other auditory mechanisms. The alternative evolutionary scenario would suggest that the link between formant perception and indexical cuing arose in parallel, possibly multiple times, during the course of mammalian evolution. Indeed, the divergent vocal production apparatuses of primates and red deer suggest that the evolution of vocal communication among mammals did not take a linear, unbranching path.
Naturally, a direct test of either of these hypotheses would entail exploring formant perception in untrained animals that lack formant structure in their own vocalizations. Regardless of the evolutionary origins of acoustic body-size perception via formants, the link between rhesus monkey perception and human perception is likely to be direct, because they are closely related species. However, in human speech perception, indexical cues are coupled with phonetic cues. Humans are able to identify vowel sounds across a wide range of speaker body sizes and ages (and thus different formant-frequency positions), though this is not a feature we consistently attend to. Nevertheless, recent human psychophysical studies revealed that humans, when asked, can make accurate judgments of a speaker's body size by using the formant structure embedded in speech sounds [13, 21], and can recognize vowel sounds even when the simulated vocal-tract length is extended beyond the species-typical range [21]. Thus, assessing speaker size through formants may be an automatic, unconscious process that the human auditory system performs in everyday speech communication. Even more pertinent to the current findings with rhesus monkeys, humans can use formant frequencies to determine the age category of speakers (juvenile versus adult) [13], and when fundamental frequency is put into conflict with formant information, human listeners rely on the formants to make age judgments [13].

A question that remains open is whether monkeys and/or humans can use formant cues to assess body size within the category of adults. Theoretically, such an assessment could be useful in male-male competition or mate attraction (as in the red deer, described above). Behavioral and acoustic evidence for either scenario, however, remains somewhat ambiguous. For example, in humans, some acoustic measurements reveal a relationship only between adult-male height and formant spacing [17, 22], whereas others find a significant correlation only between female height and formant spacing [23, 39]. At the behavioral level, these cues may not be sufficient for assessing speaker size [22]. The reasons for these apparent inconsistencies across studies are multifarious and possibly include differences in the body-size variables measured and the speech tokens used, and/or large variation in vocal-tract morphology. Similarly, acoustic measures of formant spacing in the grunt calls of adult female baboons reveal that it is not reliably correlated with many different measures of body size [6], and no behavioral tests of adult body-size perception via formants in monkeys have been forthcoming. Thus, although formant spacing may be a reliable perceptual cue to body size across age classes (as in the present study), this may not be true within an age class.

Given that neither the vocal apparatuses nor the brains of human ancestors fossilize, the comparative method is the only way to investigate the evolution of primate communication [40, 41]. By comparing the vocal behavior of extant primates with human communication, one can deduce the behavioral and neural capacities of extinct common ancestors, allowing the identification of homologies and providing clues as to the adaptive functions of such behaviors. The close relationship between Old World macaques and humans allows putative homologous brain mechanisms related to formant perception to be explored and compared between these species.
Our data show that rhesus monkeys can intermodally match the size information embedded in their coo calls with the appropriately sized visual image of a vocalizing monkey's face; this ability is independent of the identity of the seen and heard monkey. Thus, monkeys are extracting a size cue from auditory structure alone and subsequently matching it to an appropriately sized visual signal. Could auditory cortex integrate such "high-level" bimodal signals [42], perhaps on the basis of implicit multisensory associations formed during everyday social interactions [43]? A first step would be to demonstrate that particular regions of auditory cortex are sensitive to formant structure relative to other acoustic parameters. A recent human neuroimaging paper revealed that regions adjacent to, but not within, Heschl's gyrus are sensitive to formant differences related to speaker size [44], and we have preliminary neurophysiological data showing that some cortical sites in the lateral belt (putatively a homologous area) of monkeys are also sensitive to vocal-tract-length-related changes in formant spacing relative to changes in fundamental frequencies (C.F. Chandrasekaran, R.V.D., R.D.P., N.K.L., and A.A.G., unpublished data). It is not known whether neurons in these areas integrate auditory and visual size information; if so, it would be strong evidence that these neurons encode ethologically relevant size information.

It is a long trajectory from body-size perception to speech perception via formant cues. There are many aspects of vocal production that are unique to humans and allow us to produce a broader range of sounds with greater complexity [15]. Our data suggest that the use of formant cues in the perception of vowel sounds by humans in a linguistic context emerged gradually, perhaps for other functional reasons, over the course of human evolution. Perception of indexical cues, such as age-related body size, via formants in vocalizations may be one functional link between the vocalizations of human and nonhuman primates.

Subjects

We tested male rhesus macaques (n = 24; age range 4-14 yr) from a large colony housed at the Max Planck Institute for Biological Cybernetics. Animals are socially housed and provided with enrichment objects (toys, hammocks, ropes, etc.). All experimental procedures were in accordance with the local authorities (Regierungspraesidium) and the European Community (EUVD 86/609/EEC) standards for the care and use of laboratory animals. For the purposes of the current experiments, subjects were free-fed food and water.

Stimuli

The stimuli were digital-video recordings of seated rhesus monkeys spontaneously producing coo vocalizations in a sound-attenuated room (Figures 1A and 1B). The stimulus set was based on 3-yr-old digital videos of now-deceased male monkeys from the Max Planck Institute for Biological Cybernetics. These videos were acquired onto a computer and manipulated as needed in Adobe Premiere 6.0 (www.adobe.com). We extracted the audio track from the digital-video samples. Calls were acquired at 32 kHz and then upsampled to 44.1 kHz to allow playback on our hardware. To generate synthetic rhesus monkey coo calls, we used computational algorithms previously used in similar studies with human speech sounds [13, 21]. The stimuli used in the present experiments were based on natural rhesus monkey coo calls that had been scaled with STRAIGHT, a speech-processing routine that dissects and analyzes an utterance with glottal-cycle resolution.
STRAIGHT produces a pitch-independent spectral envelope that represents the vocal-tract information independently of the source (the glottal pulses, or vocal-fold vibrations) [30]. Once STRAIGHT has segregated a coo call into source (the glottal-pulse rate component) and vocal-tract information (the spectral envelope), the coo can be resynthesized with the spectral envelope contracted or expanded (simulating increases or decreases in vocal-tract length, respectively) or with the source information expanded or contracted. The two operations are largely independent. Thus, coo calls produced by a small monkey can be transformed to sound like those of a large monkey, and vice versa, by manipulating the apparent size of the vocal tract while keeping the source constant.

For the experimental paradigms described below, we used two different coo-call exemplars from two differently aged monkeys: a 6-yr-old juvenile monkey, weighing 5.8 kg, and a sexually mature 11-yr-old adult monkey, weighing 10.0 kg. These were our base stimuli. This was done to control for any cues that may be related to body size beyond the resonance frequencies of different vocal-tract lengths. For both calls, we then normalized the glottal-pulse rate to 420 Hz. This was done to control for any acoustic cues to body size that may be related to vocal-fold thickness and glottal-pulse rate. For each of the two vocalizations, we then manipulated the spectral envelope to create two synthetic versions of each call. One version simulated a large monkey with a vocal-tract length of 10 cm, and the other simulated a small monkey with a vocal-tract length of 5.5 cm. These vocal-tract lengths are within the species-typical range for rhesus monkeys [3]. All vocal stimuli were calibrated to the same average root-mean-square (RMS) power with Adobe Audition.
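The core manipulation, stretching or compressing the spectral envelope along the frequency axis while leaving the source untouched, can be sketched as below. This is a toy stand-in for the STRAIGHT resynthesis step, not the actual STRAIGHT code, and the envelope here is a made-up single-peak curve.

```python
import numpy as np

def rescale_envelope(env, freqs, vtl_ref, vtl_new):
    """Warp a spectral envelope to simulate a different vocal-tract length.

    Resonant frequencies scale roughly as 1/L, so simulating a tract of
    length vtl_new from one of length vtl_ref moves every envelope
    feature from f to f * (vtl_ref / vtl_new).
    """
    scale = vtl_ref / vtl_new
    # env_new(f) = env(f / scale): a feature at f0 moves to f0 * scale.
    return np.interp(freqs / scale, freqs, env, left=env[0], right=env[-1])

freqs = np.linspace(0.0, 16000.0, 1024)               # Hz
env = np.exp(-0.5 * ((freqs - 3000.0) / 700.0) ** 2)  # toy formant peak at 3 kHz

env_large = rescale_envelope(env, freqs, vtl_ref=7.0, vtl_new=10.0)  # peak ~2100 Hz
env_small = rescale_envelope(env, freqs, vtl_ref=7.0, vtl_new=5.5)   # peak ~3818 Hz
```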
Preferential-Looking Paradigm

Two videos were edited, one of which showed a large monkey (13 yr old, 9.0 kg) producing a coo vocalization, while the other showed a small monkey (6 yr old, 5.9 kg) producing the same call. One of the synthetic coo vocalizations (see above), simulating either a short or a long vocal-tract length, was then used to replace the original sound track. The videos were edited in Adobe Premiere such that the onset and offset of mouth movements occurred at exactly the same time. Thus, from a spatiotemporal point of view, both monkeys appeared to be producing the same coo call. Two sets of such videos were made for each of the two coo calls used. The "big monkey" versus "small monkey" visual stimuli were played simultaneously on side-by-side 15 inch LCD monitors (Acer FP559, www.global.acer.com). Audio tracks were synchronized with both videos and played through a hidden speaker placed directly between and slightly behind the monitors. The RadLight 3.03 Special Edition software video player (www.radlight.net) was used to play the videos in synchrony. Sounds were presented at an intensity of 72-75 dB (A-weighted) sound-pressure level (SPL) as measured with a Brüel & Kjaer 2238 Mediator sound-level meter (www.bksv.com) at a distance of 72 cm.

For testing, a subject was brought to the testing room and placed in front of the two monitors at a distance of 72 cm. The monitors were 65 cm apart (center-to-center distance) and at eye level with the subject. All trials were videotaped by a digital-video camera placed above and between the monitors. All equipment was concealed by a thick black curtain except for the monitor screens and the lens of the camera. The experimenter monitored subject activity from outside of the room. During this time, the subject's attention was directed to the center by the flashing of a 1.2 W light placed centrally between the two monitors. A test session began when the subject looked centrally. A trial consisted of the two videos and one of the auditory stimuli played in a continuous loop for 60 s. The left-right position of the two videos was counterbalanced. Each subject was tested only once, and all trials were recorded on digital video. We used a between-groups design because, as in all studies that examine the spontaneous behavior of animals and prelinguistic human infants, subjects often quickly habituate to the testing environment. No reward or training was provided.

We collected high-quality, close-up digital videos of the subjects' behavior with a JVC GR-DVL805 digital camera (www.jvc.com). Videos were acquired at 30 frames/s (frame size: 720 × 480 pixels) onto a PC by using an IEEE 1394a input and Adobe Premiere 6.0 software (www.adobe.com). Clips for analysis were edited down to 60 s, starting with the onset of the auditory track. The total duration of a subject's looking toward each video (left or right) was recorded and expressed as the proportion of total time spent looking at either screen. Scoring which of the screens the monkey subjects were looking toward was unambiguous: the screens are far apart in the horizontal dimension, fairly close to the monkey's face, and at eye level. Thus, the monkey has to make large eye and head movements to look at one screen or the other, and it is similarly clear when he is not looking at either screen. To validate this, we had all the videos scored by a second observer blind to the experimental condition in order to determine interobserver reliability, which was 0.938 (p < 0.0001) as measured by a Pearson r test. The statistical tests and plotted data are derived from the blind observer's video scores.
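The analyses reported in the Results (the one-sample t test on looking-time proportions, the sign test on 19 of 24 subjects, and the interobserver Pearson correlation) can be reproduced schematically as below; the per-subject data here are random placeholders, not the published measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder per-subject percentages of looking time to the matching screen.
pct_match = rng.normal(loc=58.4, scale=15.0, size=24)

# One-sample t test against chance (50%).
t, p = stats.ttest_1samp(pct_match, popmean=50.0)
print(f"t(23) = {t:.2f}, p = {p:.3f}")

# Sign test: 19 of 24 subjects preferred the matching screen. A one-sided
# binomial test gives p ~ 0.003, matching the reported value.
print(stats.binomtest(19, n=24, p=0.5, alternative="greater").pvalue)

# Interobserver reliability between the primary and blind coders.
blind_scores = pct_match + rng.normal(0.0, 3.0, size=24)  # placeholder
r, p_r = stats.pearsonr(pct_match, blind_scores)
print(f"Pearson r = {r:.3f}, p = {p_r:.4g}")
```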
Early warning model and prevention of regional financial risk integrated into legal system

In order to improve the laws and regulations of the financial system, the traditional financial risk Early Warning (EW) model is optimized within the construction of laws and regulations. Implementing financial prevention and control measures with legal protection, and warning of financial risks in advance, plays an important role in building the rule of law in the Financial Market (FM) and in establishing laws and regulations for financial risk prevention and control. This paper combines a deep learning model with the Markov regime-switching vector autoregression (MS-VAR) model and constructs a regional financial risk EW model from the following aspects: macroeconomic operation EW indicators, regional economic risk EW indicators, and regional financial institution risk EW indicators. The model is then studied and analyzed empirically. The results show that the macroeconomic pressure index fluctuates relatively strongly over the time series, while the regional economic pressure index fluctuates little overall, hovering around 0 in most periods. After the financial crisis, local governments stepped up their supervision of non-performing corporate and household loans; from 2011 to 2018, the non-performing loan ratio declined, and the regional financial comprehensive stress index also fluctuated little overall, hovering around 0. Due to the lack of legal regulation, from the perspective of the regional economy, the risk level is more likely to shift from low risk to moderate risk, while it is less likely to shift from high risk to moderate risk. From the perspective of regional financial institutions, the probabilities of remaining at low risk and at moderate risk are 0.98 and 0.97, respectively, both stronger than the persistence of high risk. From the perspective of the state transitions of the regional financial risk composite index, the probabilities of remaining at low risk and at high risk are 0.97 and 0.93, higher than the persistence of medium risk. The Deep Learning (DL) regional financial risk EW MS-VAR model has strong risk prediction ability: it can analyze the transition probabilities of the regional financial risk EW indexes and provides good risk EW capability. This paper thereby enhances the role of legal systems in financial risk prevention and control. The regional financial risk EW model incorporating financial legal indicators can better describe the regional financial risk level, and the EW results are basically consistent with the actual situation. In order to effectively prevent financial risks and ensure the safety of the financial system, it is recommended that the government improve local debt management, improve financial regulations and systems, and raise the legislative level of financial legal supervision.

Introduction

In the decades of social and economic development in China, the financial industry has played a vital role in the market economy and is key to its functioning. In order to improve the laws and regulations of the financial system, the traditional financial risk early warning model is optimized in the construction of laws and regulations.
The implementation of legally protected financial prevention and control measures and early warning of financial risks play an important role in the legal construction of the Financial Market (FM) and the establishment of laws and regulations on financial risk prevention and control. FM malfunctions occur from time to time, and the probability of risk increases significantly. Financial risk is characterized by high risk and high crisis potential, and it is likely to cause serious harm to the market economy [1]. In particular, Regional Financial Risks (RFRs) have strong linkage and spread easily; through a ripple effect, they can induce national financial risks and even a global financial crisis [2]. Under strong downward pressure on Economic Growth (EG), various uncertain factors appear. Therefore, Early Warning (EW), prevention, and control of RFRs have become an important task of macro-control in China [3]. It is thus essential to emphasize the implementation of effective financial risk prevention measures. Meanwhile, relevant laws and regulations must be improved so that the FM develops harmoniously and in an orderly manner. Although national legal supervision can effectively monitor risks, effective financial risk EW measures can make the FM develop more harmoniously and orderly, make the whole market economy run more smoothly, and bring higher economic benefits to society [4]. Strengthening RFRs-oriented EW has important practical significance for preventing financial risks and ensuring stable regional EG [5].

Financial behavior is highly intertwined with the economy and society, and all walks of life have paid increasing attention to risks in the financial field. As a popular technology in Artificial Intelligence (AI), Deep Learning (DL) can model abstract high-level features of various data with multiple processing layers and nonlinear transformations [6]. Meanwhile, DL can find appropriate and effective features in complex data by processing big data, learning features through training, and using multi-layer perceptron models; the deeper the model, the more accurate the feature expression [7]. One study used a Deep Neural Network (DNN) to model the evacuation of subway station buildings and carried out simulation experiments; by comparing a Convolutional Neural Network (CNN) model with a pre-trained model on classification data sets, the accuracy and training speed of the proposed model were verified [8]. Chen et al. (2021) used DL technology to model the network security system of smart cities, reducing network security risks [9]. DL can also be used in financial analysis to predict commodity prices, financial events, financial risks, and other hot issues. For example, Zhou et al. (2021) applied DL to the financial risk early warning of real estate enterprises, taking real estate as an example for concrete demonstration and analysis [10]. For RFRs-oriented early warning, scholars have also conducted research on risk causes, propagation paths, indicators, and model selection [11]. Du et al. (2021) established a scientific and effective EW system based on Big Data Technology (BDT), integrating large amounts of data and introducing related risk indexes [3]. In selecting financial risk EW indexes, the existing Evaluation Index System (EIS) mainly focuses on currency crises, bond crises, or banking crises and fails to fully reflect systemic financial risk.
Moreover, the traditional financial risk EW system is mainly based on linear models [12]. Based on the above theory, this paper improves the selection of financial risk EW indicators and risk EW models. Section 1 describes the purpose of the article and reviews the literature. Section 2 summarizes the characteristics, incentive mechanisms, and risk factors of regional finance, and constructs the regional financial risk EW system using the DL model and the Markov regime-switching vector autoregression (MS-VAR) model; the regional financial risk EW indexes and their comprehensive index are then constructed, and the DL-based regional financial risk EW MS-VAR model is built. Section 3 analyzes the comprehensive economic stress index from two aspects, the macro-economy and the regional economy, and regional financial stress from two aspects, regional financial institutions and regional finance; the model is used to test the EW of risks in different dimensions and to predict regional financial risks. Section 4 summarizes the main results of this study and discusses future research directions. The innovation of this paper is to integrate the DL model and the financial legal system into the construction of the regional financial risk EW model, and to analyze the comprehensive indicators of the regional financial risk EW model from many aspects. The design aims to make the EW of RFRs conform to the development characteristics of the local financial industry and to improve the predictive power of the financial risk EW model. The findings provide a reference for subsequent scholars studying financial risk EW.

Theoretical research and method design of regional financial risk

Overview of RFRs theory

1. Basic concepts and characteristics of RFRs. RFRs are the various risks present in the financial industry within a certain economic range. According to the affected region and performance characteristics, financial risks are divided into macro, regional, and micro levels; RFRs belong to the meso level [13]. The formation of financial risks within a specific range generally involves factors at three levels. Micro-level risk spreads within a specific range and has the characteristic of top-to-bottom development. Risks spreading across highly different economic ranges belong to horizontal risk. Finally, macro-level risks accumulate and spread through the financial industry system [14]. RFRs have the general characteristics of financial risks, such as objective existence, controllability, and negative impact [15], and at the same time have unique characteristics: for example, the RFRs formation mechanisms are special, the harm is limited to certain areas, and there is a remarkable effect in RFRs-oriented EW [16]. Regional financial risks mainly reflect the influence of four aspects: the level of macroeconomic development, the level of regional industrial development, the development of regional financial institutions, and the financial legal supervision system [17]. The details are shown in Fig 1.

As shown in Fig 1, from the macroeconomic perspective, economic risk is an essential factor in financial risk. RFRs are closely related to the macroeconomic factors of the stock, bond, currency, and real estate markets. An economic slowdown, industrial structural transformation, a real estate market downturn, and investors' lack of confidence will all affect the development of regional finance. Local economic development indirectly affects regional financial stability.
Regional financial institutions are mainly banks. If the non-performing loan ratio of the banking industry increases, it can trigger runs on banks, which easily induce RFRs. Loopholes in the financial and legal supervision system, fuzzy boundaries in the applicability of legal norms, and the lack of governance capacity and authority of local regulations are also important factors causing RFRs.

2. Financial risk incentive theory. The theory of financial risk incentives includes five aspects: financial business cycle theory, financial vulnerability theory, financial asset price fluctuation theory, bank run theory, and lack of legal supervision [18]. From the perspective of financial development, this work studies the dynamic change mechanisms in different stages of the financial cycle. The balance sheet and bank loan mechanisms are the primary factors that increase financial risks. Financial fragility theory refers to the accumulation of internal risks formed by the industry's highly indebted business model; the risks mainly come from highly leveraged business strategies, the lack of legal and regulatory constraints on emerging financial institutions, the growth of asset bubbles caused by speculative investment, and the accelerated accumulation of risks caused by information asymmetry. The fluctuation of financial asset prices also differs significantly from the trend of macroeconomic operation. The banking industry's assets mainly come from depositors' deposits, which makes the liquidity of bank assets worse than that of liabilities; when banks realize asset appreciation in the form of short deposits and long loans, the uncertainty of depositors' demand is the main factor causing bank runs. The lack of legal supervision refers to the absence of clear legal regulation of financial supervision, imperfect financial supervision regulations, and missing legal texts for risk prevention and control.

3. The transmission channels of RFRs. The financial risk transmission mechanism mainly includes transmission through trade, transmission through financial channels, and two-way transmission between the real economy and the financial industry; the main transmission path is the two-way transmission between the real economy and the financial industry [19]. The specific transmission paths are demonstrated in Fig 3.

According to Fig 3, trade channel transmission mainly refers to risk transmission between two regions with trade exchanges. Financial channel transmission refers to outbound regional capital transfer induced by financial risks through capital flows, bank lending, and financial product investments. Meanwhile, unclear definitions in FM laws and regulations, and a division of supervision power unregulated by legal norms and guidelines, make the entire FM the most important channel for risk transmission. Essentially, the two-way transmission mechanism between the real economy and the financial industry is the two-way transmission of financial risks through the real economy and the FM.

RFRs-oriented EW system

1. DL model theory. A Deep Neural Network (DNN) is an effective machine learning algorithm for DL. It learns the inherent laws and representation levels of sample data, can automatically learn data features, and completes tasks such as classification and regression [20]. Its ultimate goal is to enable the machine to have analysis and autonomous learning abilities like a person's, and to recognize characters, images, and other data [21]. DL extracts features layer by layer by mining the underlying feature distribution of the data, using multiple hidden layers; the hidden layers connect the input and output layers. The structure of the DNN model is shown in Fig 4. As explained in Fig 4, unsupervised learning from the input layer to the output layer is used in the DNN model: the DNN starts from the input layer and trains layer by layer up to the top layer, with the parameters of each layer trained without labeled data, so this training method can be regarded as an unsupervised pre-training process [22]. The idea of DL feature extraction is applied to the construction of the financial risk EW model: DL is used to mine data features, analyze regional financial risk indicators, and mine financial risk and influencing-factor indicators.
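A minimal sketch of the kind of network described above, an input layer of risk indicators feeding hidden feature-extraction layers and a three-way risk-level output, is shown below. The layer sizes and the indicator count are illustrative assumptions, not the architecture used in this paper.

```python
import torch
import torch.nn as nn

class RiskDNN(nn.Module):
    """Multi-layer perceptron mapping indicator vectors to risk levels."""

    def __init__(self, n_indicators: int = 12, n_levels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_indicators, 32), nn.ReLU(),  # hidden layer 1
            nn.Linear(32, 16), nn.ReLU(),            # hidden layer 2
            nn.Linear(16, n_levels),                 # low / medium / high logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RiskDNN()
batch = torch.randn(8, 12)     # 8 hypothetical indicator vectors
logits = model(batch)          # unnormalized scores per risk level
print(logits.softmax(dim=1))   # probabilities over the three risk levels
```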
2. RFRs-oriented EW model. The traditional financial risk EW model includes the Frequency Ratio (FR) model, which is built on the influencing factors of currency risk. In its standard binary-response form, the FR model of Eq (1) is

P(Y = 1 | x) = F(x'θ), (1)

where Y is the financial risk variable (Y = 1 indicates an outbreak of financial risk), x is the vector of influencing factors, and θ is the parameter vector of x; the joint probability of the induced variables measures the outbreak probability of financial risk. Suppose N countries are measured over the sample period t = 1, 2, ..., T; then p_{i,t} is estimated by Eq (2), where x_{i,t} is the composite result of country i in period t.

The MS-VAR model can capture structural changes between variables and, by summarizing the patterns in historical data, predict future changes. The vector autoregression for the k-dimensional time series y_t = (y_{1t}, ..., y_{kt})' is expressed by Eq (3):

y_t = A_1 y_{t-1} + ... + A_p y_{t-p} + u_t, u_t ~ IID(0, Σ), t = 1, 2, ..., T, (3)

where y_0, ..., y_{1-p} are predetermined initial values. If the error term follows a normal distribution, i.e., u_t ~ IID(0, Σ), Eq (3) can be written in the intercept form of the VAR(p) model [24]:

y_t = ν + A_1 y_{t-1} + ... + A_p y_{t-p} + u_t, (4)

where μ is the k×1 mean of y_t, calculated as

μ = (I_k - A_1 - ... - A_p)^{-1} ν. (5)

When the time series is affected by changes of regime, the regime variable S_t ∈ {1, ..., M} is assumed to follow a discrete-state Markov chain, whose transition probabilities are given by Eq (6):

p_{ij} = Pr(S_{t+1} = i | S_t = j). (6)

When the order of the MS-VAR model is p, the Regime Switching Model (RSM) takes the form of Eq (7):

y_t = μ(S_t) + A_1(S_t) y_{t-1} + ... + A_p(S_t) y_{t-p} + u_t, (7)

where μ(S_t) and A_1(S_t), ..., A_p(S_t) are the parameter functions of regime S_t; μ(S_t) is given by Eq (8):

μ(S_t) = μ_m when S_t = m, m = 1, ..., M. (8)

Adding the regime switch to the intercept term yields a sharp switch of the intercept together with a smooth switch of the mean [25], reflected in Eq (9):

y_t = ν(S_t) + A_1(S_t) y_{t-1} + ... + A_p(S_t) y_{t-p} + u_t. (9)

A runnable sketch of this regime-switching machinery follows below.
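statsmodels implements the univariate Markov-switching autoregression, a special case of the MS-VAR above; a full multivariate MS-VAR is not included there, but the following sketch illustrates the regime probabilities and transition estimates used later. The three regimes mirror the low/medium/high risk levels; the series itself is random stand-in data, not the paper's composite index.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical financial-stress series: random data stands in for the
# real composite index, which is not reproduced here.
rng = np.random.default_rng(0)
stress = np.concatenate([rng.normal(0.0, 0.3, 120),   # calm periods
                         rng.normal(1.5, 0.6, 60)])   # stressed periods

# Markov-switching AR(1) with 3 regimes and a switching intercept,
# mirroring the low/medium/high risk levels used in the paper.
mod = sm.tsa.MarkovAutoregression(stress, k_regimes=3, order=1,
                                  switching_ar=False)
res = mod.fit()

print(res.summary())                        # includes estimated p_ij
print(res.smoothed_marginal_probabilities)  # Pr(S_t = j | all data)
print(res.expected_durations)               # 1 / (1 - p_jj) per regime
```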
RFRs-oriented EW index and its Composite Index (CI) construction

1. RFRs-oriented EW index. Taking Shanghai as an example, this section constructs the RFRs-oriented EW indexes from four aspects: macroeconomic operation risk, regional economic risk, regional financial institution risk, and financial legal supervision system risk [26]. Macroeconomic operation index data are drawn from government departments and the stock, bond, and foreign exchange markets [27]. The regional economic risk index takes government departments and enterprises as secondary indexes, and the regional financial institution index takes the banking and insurance industries as secondary indexes. By comparison, the financial legal supervision index takes the local legal norm system and the legal norms governing the supervision subject as secondary indexes. The index selection is specified in Fig 5. Macroeconomic fluctuation is a cyclical movement from depression to recovery and then to boom; among the leading indicators, indexes represented by the macroeconomic prosperity index and the entrepreneur confidence index are selected to further improve the RFRs-oriented EIS. Regional economic risk is mainly reflected in the transfer of local debt risk into the financial system and in risk spillovers from the real economy to the financial system. The strength of financial institutions' risk management and control directly affects the stability of the FM, with banking and insurance as secondary indexes. The local legal norm system covers two aspects, the division of financial supervision power and the gradation of penalties for illegal acts, while the legal norms of the supervision subject concern the classification and standardization of the supervisory bodies.

2. RFRs CI. In view of the characteristics of China's financial system and the regional character of financial risks, and again taking Shanghai as the example, this paper analyzes the macroeconomic, regional-economic, regional financial institution, and regional financial risk indexes from 2002 to 2020. The data are drawn from Wind, eastmoney.com, cnfin.com, and other relevant economic and financial data sources. The CI method is used to construct the RFRs-oriented EIS, which serves as the basic variable of the financial risk EW model [28]; the CI method adapts to variable changes and can be combined with various risk identification and EW models. Fig 6 lists the details: Fig 6(A) shows the macroeconomic and regional economic risk indexes from 2002 to 2020, and Fig 6(B) the regional financial institution and regional financial risk indexes over the same period. Both the economic risk index and the regional financial risk index vary over time, and the risk indexes have declined as government supervision has strengthened.

DL-based MS-VAR model for RFRs EW

Combined with the DL model, the regional financial risk EW indicators are analyzed, and on this basis the regional financial risk EW MS-VAR model is constructed. To strengthen the EW ability of the MS-VAR model, EW inspections are carried out on risks of different dimensions: risk-state transitions are identified, and transition probabilities are estimated for three risk levels, namely low, medium, and high [29]. The process is as follows. In Eqs (10)-(13), S_t is the state variable, with S_t = 1, 2, 3 representing low-, medium-, and high-level financial risk, respectively; {S_t} is a first-order Markov chain, so the state S_{t-1} at the previous moment determines the distribution of S_t at moment t; and p_ij denotes the state transition probability. The transition probability matrix P of S_t is given by Eq (14):

P = [p_ij], with p_ij ∈ [0, 1] and Σ_{i=1}^{3} p_ij = 1 for j = 1, 2, 3. (14)

I_{t-1} denotes the information set of r_t up to moment t-1. The joint density function is then expressed by Eq (15).
Suppose r_t is known; in that case, the change of the joint distribution probability is calculated by Eq (16), and the filter probability Pr(S_t = j | I_t) is obtained by Eq (17). Based on the alternating filter steps, the smoothing probability Pr(S_t = j | I_T) is then calculated; the smaller the smoothing probability estimate, the smaller the probability that moment t lies in the j-th volatility level [30].

Economic pressure CI

The results, from the macroeconomic and regional-economic perspectives, are shown in Fig 7. The macroeconomic pressure index fluctuates widely over the time series: the macro-economy is subject to external shocks, pressure from potential risks keeps accumulating, and when the impact expands rapidly a pressure risk erupts. The regional economic pressure index fluctuates little overall, hovering around 0 in most periods, with its largest swing around 2004. Notably, its fluctuation pattern differs from the macroeconomic dimension, mainly because the global financial crisis affected regional economic development significantly less than it affected the macro-economy.

Regional financial pressure CI

Fig 8 analyzes the regional financial pressure CI from the two aspects of regional financial institutions and regional banking. The pressure index of the regional financial institution dimension fluctuates over a significantly wider range than that of the regional banking dimension. From 2004 to 2010, the banking industry's NPL ratio remained high. After the financial crisis, banking risks were partially released and local legal supervision was strengthened; at the same time, tighter oversight of non-performing corporate and household loans greatly reduced the regional banking NPL ratio. From 2011 to 2018, the NPL ratio declined and the pressure index of financial institutions flattened. The regional financial pressure CI fluctuates within a small range around zero, with its sharpest changes during 2004-2006, 2007, 2009, and 2016.

MS-VAR EW inspection

EW tests are performed on risks of different dimensions, identifying risk-state transitions across three levels (high, medium, and low), as revealed in Fig 9. In the macroeconomic dimension, the probability of remaining in the same regime is 0.91 for each of the low-, medium-, and high-risk regimes, indicating strong regime persistence: the original risk level is likely to be maintained. Regional economic risk converts more easily from low to medium risk than from high back to medium risk. For regional financial institutions, the probabilities of remaining at low, medium, and high risk are 0.98, 0.97, and 0.76, respectively, so the low- and medium-risk regimes are more stable than the high-risk regime (the persistence implied by these figures is illustrated numerically below).
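The reported stay probabilities imply expected regime durations via 1/(1 - p_ii); a quick numerical check using the values from the text (the off-diagonal split of the full matrix below is hypothetical, since only the diagonal is reported, and the row-stochastic convention is used):

```python
import numpy as np

# Stay probabilities reported above (diagonal of the transition matrix).
stay = {"macro (each regime)": 0.91,
        "institutions: low": 0.98,
        "institutions: medium": 0.97,
        "institutions: high": 0.76}

for name, p in stay.items():
    # Expected duration of a regime with stay probability p is 1/(1-p).
    print(f"{name}: expected duration = {1 / (1 - p):.1f} periods")

# Hypothetical full 3x3 matrix: rows must be probability distributions.
P = np.array([[0.98, 0.015, 0.005],
              [0.02, 0.97, 0.01],
              [0.04, 0.20, 0.76]])
assert np.allclose(P.sum(axis=1), 1.0)

# Long-run regime shares: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print("stationary distribution:", pi / pi.sum())
```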
State conversion identification of RFRs CI

The state conversion identification of the RFRs CI is described in Fig 10: the probability of remaining at low risk is 0.97 and at high risk 0.93, so these two regimes are highly stable, while the probability of remaining at medium risk is below 0.3, indicating that the medium-risk state fluctuates greatly. The analysis of this dimension therefore focuses on the transition probabilities of the low- and high-risk states, whose mutual conversion probabilities are high.

RFRs prediction

1. Prediction test of the regional financial pressure index. Taking Shanghai as the base, the fitting of the regional financial pressure index from 2002 to 2020 is analyzed in Fig 11. (Fig 9 caption: EW test of financial risks in different dimensions; (a) conversion probability of the macroeconomic risk level; (b) conversion probability of the regional economic risk level; (c) conversion probability of the regional financial institution dimension.) As illustrated in Fig 11, the observed values of the regional financial risk EW composite pressure index coincide closely with the fitted values, and the two curves agree well overall, indicating that the DL-based regional financial risk EW MS-VAR model has strong predictive ability and that its predictions are credible.

2. Prediction test for RFRs. The proposed RFRs-oriented EW model is used to estimate the three-regime conversion probabilities of the RFRs CI; the results are plotted in Fig 12. The probability that low risk remains low is 0.90 and that high risk remains high is 0.88, so these two regimes are stable, while the probability of remaining at medium risk is relatively low and the medium-risk state therefore fluctuates greatly. The analysis of the risk CI conversion probabilities thus focuses on low and high risk; the probability of converting from low to medium risk is high. The DL-based regional financial risk EW MS-VAR model can therefore analyze the conversion probabilities of the regional financial risk EW index well.

Discussion

Based on the DL and MS-VAR models, this paper constructs the regional financial risk EW model from three aspects (the macroeconomic operation EW index, the regional economic risk EW index, and the regional financial institution risk EW index) and studies the model empirically. The results show that the macroeconomic pressure index fluctuates widely over the time series, while the regional economic pressure index fluctuates little and hovers around 0 in most periods. From 2011 to 2018 the NPL ratio declined, and the regional financial composite stress index fluctuated only slightly around 0. The DL-based regional financial risk EW MS-VAR model has strong risk prediction ability: it analyzes the conversion probabilities of the regional financial risk EW indicators well and provides good risk EW capability. In this research direction, literature [23] performs EW of regional financial risks based on macroeconomic indicators, constructing multiple regression models to analyze the trends of macroeconomic indicators, predict possible economic crises and financial risks, and provide EW information for governments and financial institutions.
Literature [24] constructs a technical analysis model from stock indexes, bond prices, and exchange rates to analyze the trends of market indicators, predict possible market fluctuations and risks, and provide a reference for investors and financial institutions. The method in literature [25] achieves high accuracy: by constructing a multi-dimensional evaluation model and a supervision index system, it analyzes and forewarns the risk status of financial institutions. Compared with these studies, this paper has the following advantages. First, risk characteristics are captured more accurately: the DL and MS-VAR models capture the nonlinear relationships and regime-switching character of the data, describing regional financial risks more precisely and improving prediction accuracy. Second, data processing is more flexible: the EW model handles nonlinear, heterogeneous, and multivariate data, better meeting the forecasting needs of different data types. Third, it is more interpretable: compared with traditional economic models, the combined DL and MS-VAR framework presents forecast results more intuitively and better supports decision-makers. Finally, the regional financial risk EW model can be flexibly adjusted and optimized as the data and forecasting demands change, giving it stronger adaptability.

Experimental results

To improve the laws and regulations of the financial system and optimize the regional financial risk EW model, this paper constructs regional financial risk EW indicators from four aspects: macroeconomic operation, regional economic risk, regional financial institution risk, and the financial legal supervision system. Following the DL algorithm, the financial risk EW indicators are analyzed and the indicator system improved; the MS-VAR model is then constructed, yielding the DL-based regional financial risk EW MS-VAR model. Taking Shanghai as the research object, the model is studied empirically from several angles: the composite economic pressure index, the composite regional financial pressure index, the MS-VAR EW test, and the composite regional financial risk index. The results show: (1) The regional economic pressure index fluctuates little overall, hovering around 0 in most periods, with its larger changes concentrated in 2004-2006, 2007, 2009, and 2016; the macroeconomic pressure index fluctuates widely over the time series, and the global financial crisis affected regional economic development less than it affected macroeconomic development. (2) After the financial crisis, local governments tightened supervision of non-performing corporate and household loans, greatly reducing the regional banking NPL ratio; from 2011 to 2018 the NPL ratio declined, and the regional financial composite stress index fluctuated only slightly around 0.
(3) In the macroeconomic dimension, the probability of maintaining the current level across the low-, medium-, and high-risk regimes is 0.91 in each case; the transitions between risk regimes are relatively stable, and the original risk level is likely to persist. (4) In the regional economic dimension, conversion from low to medium risk is easier than conversion from high back to medium risk. For regional financial institutions, the probabilities of maintaining low and medium risk are 0.98 and 0.97, respectively, both higher than the stability of the high-risk regime. (5) For the state transitions of the regional financial risk composite index, the maintenance probabilities of the high- and low-risk levels are 0.93 and 0.97, respectively, both higher than that of medium risk. In the prediction test of the regional financial pressure index, the observed and fitted curves agree well overall, indicating that the DL-based regional financial risk EW MS-VAR model has strong risk prediction ability and can analyze the conversion probabilities of the regional financial risk EW index well.

Future research directions

Owing to its limited scope, this study of DL-based regional financial risk EW and prevention still has some limitations. Future research can proceed along the following lines. First, cross-domain cooperation and data integration should be strengthened: regional financial risk involves the financial systems of many fields and countries, and interdisciplinary cooperation is needed to integrate the relevant data and build a comprehensive, systematic risk EW model. Second, advanced artificial intelligence and big data techniques should be introduced: machine learning, deep learning, and big data analysis are already entering the financial field, and these technical means can make regional financial risk EW more accurate and efficient. In addition, the theory and methods of risk EW should be deepened to develop more flexible and operable EW indicators and models and to establish a sound risk EW system. Finally, international cooperation and information sharing should be strengthened to form a global risk monitoring system: regional financial risks cross national boundaries, so meeting these challenges requires shared information and a global monitoring system that provides more effective means of preventing and resolving regional financial risks. In short, future research on regional financial risk EW should be diversified and comprehensive, drawing on interdisciplinary cooperation and scientific and technological innovation, and building a global risk monitoring system to provide a more reliable guarantee for financial stability and economic development.
2023-06-04T05:09:03.332Z
2023-06-02T00:00:00.000
{ "year": 2023, "sha1": "f6c51e5ab2a34a9d45f501af64309f6c481a1f4d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f6c51e5ab2a34a9d45f501af64309f6c481a1f4d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
251728489
pes2o/s2orc
v3-fos-license
Quality of life after extraction of mandibular wisdom teeth: A systematic review

Objective: The objective of this systematic review was to evaluate the impact of mandibular wisdom tooth extraction on patients' quality of life (QoL). Methods: An electronic search was conducted through September 2021 of the MEDLINE, Elsevier ScienceDirect, EBSCO, Scopus, and Google Scholar databases to collect articles relevant to our subject. Data were extracted and analyzed from the selected studies, including study type, sample size and characteristics, duration of observation after wisdom tooth removal, the questionnaire used to evaluate QoL, and the result. Results: Of 107 studies, fourteen representing 4990 cases met the inclusion criteria. Quality of life deteriorated after extraction, but several factors contributed to its improvement. Different instruments were used across these studies: 24 used the OHIP-14, 10 the OHQoL-UK, 8 the HRQOL, 2 the EQ-5D-3L, and 1 the UW-QOL. Conclusion: The extraction of mandibular wisdom teeth has a negative effect on quality of life during the first postoperative days, but QoL improves progressively when the medical instructions given by the dental surgeon are followed.

Introduction

The extraction of mandibular wisdom teeth is the most frequent surgical procedure performed in oral surgery, with some 5 million procedures per year in the United States [1-4,8,14,16]. Complications are frequently encountered in the first few days following this extraction, such as osteitis, alveolitis, pain, trismus, edema, and difficulty swallowing [2,3,10,16], and these complications can significantly deteriorate quality of life (QoL) during the immediate postoperative period [1,8,9] (Tables 5 and 6). Quality of life can be defined as "a state of well-being" resting on two components: the ability to perform daily activities, reflecting physical, psychological, and social well-being; and the patient's satisfaction with their level of functioning, control of disease, and treatment-related symptoms [15,16]. Several instruments have been used to assess this quality of life. Shugars et al. [3] used the HRQOL, which captures the patient's perception after surgical extraction of a mandibular wisdom tooth across four domains (oral function, general activity, signs and symptoms, and pain). Matijevic et al. [7] and Braimah et al. [11] used the OHIP-14 or the OHQoL-UK [11] to evaluate the positive and negative aspects of quality of life after this surgery. This systematic review aimed to determine the impact of surgical removal of the third molar on physical, psychological, and social well-being using these different instruments, and to describe the measures that contribute to its improvement.

Materials and methods

We conducted this review according to the Cochrane Handbook for Systematic Reviews of Interventions, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, and the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) guidelines [12,13]. It was registered on PROSPERO (ID: CRD42022319556).

Criteria for considering studies for this review

Types of studies: prospective and retrospective studies, observational studies, and randomized clinical trials.
Types of participants: patients in good health who underwent surgical extraction of mandibular wisdom teeth. Types of interventions: extraction of the mandibular wisdom tooth in its different positions (horizontal, vertical, and mesio- or disto-angular). Types of outcome measures: the main objective was to determine the severity of quality-of-life impairment after mandibular wisdom tooth extraction using different types of questionnaires. Primary outcome: QoL varies with the postoperative day, with a significant deterioration in the first days followed by gradual improvement. Secondary outcome: the procedures reported in the literature to improve patients' quality of life after mandibular wisdom tooth extraction.

Selection of studies

To identify the studies included in or considered for this review, we developed detailed search strategies for each database, searched through September 2021, based on the strategy developed for MEDLINE and revised appropriately for each database. A PICO approach was used in the database searches with MeSH terms and text words. The electronic data resources were MEDLINE-PubMed (National Library of Medicine, Washington), the Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL-EBSCOhost, Elsevier ScienceDirect, and Scopus. The search was limited to human clinical studies, and the last electronic search was performed in September 2021. The reference lists of the identified articles were cross-checked for other relevant articles (Table 1).

Data collection and analysis

Two review authors (LH and BC) separately examined the title and abstract of each article identified by the different search strategies and classified the relevant studies.

Inclusion and exclusion criteria

Publications written in English and French were included. Publications in Arabic, systematic reviews, studies that did not include questionnaires, and those focusing on upper wisdom teeth were excluded.

Data extraction and management

All studies meeting the inclusion criteria underwent data extraction performed by at least two review authors. Both reviewers used a standardized data extraction sheet with the following parameters: study type, quality-of-life questionnaire, treatment in the control or placebo group, total number of patients, and total duration of observation. The characteristics of trial participants, interventions, and outcomes are presented in the Characteristics of Included Studies.

Study selection

A total of 107 studies were identified. Of these, 13 duplicates were excluded, leaving 94 articles for analysis. After titles and abstracts were screened against the eligibility criteria, 74 full-text articles remained, of which 20 were excluded at this stage. Finally, 40 articles comprising 4990 patients were selected for inclusion in our work (Table 2, flow diagram showing the process of inclusion of the studies).

Study results

For the evaluation of quality of life after removal of mandibular wisdom teeth, the rows recoverable from the study-characteristics table are as follows:
- [author not recovered], average age 25.1 years: administration of an iodine-containing tampon in the socket after extraction of impacted mandibular third molars has a positive impact on oral-health-related quality of life.
- Beech AN et al. [24], 2018, observational study, clinic, 30 patients aged 18-25, days 1-7, EQ-5D-3L: the use of a home facial cooling system ("Hilotherm") improves quality of life after extraction of the impacted mandibular wisdom tooth.
- Ibikunle AA et al. [25], 2017, observational study, clinic, 124 patients aged 18-51: quality of life was impaired on days 1 and 3 after extraction of the impacted mandibular wisdom tooth but improved significantly by postoperative day 7.
- Essen A et al. [26], 2017, retrospective chart-based study: ozone therapy significantly improved quality of life and reduced pain after extraction of the impacted mandibular wisdom tooth, with no effect on postoperative swelling or trismus.
- [author not recovered], randomized clinical trial, clinic, 60 patients aged 18-30, days 1-7, OHIP-14: intra- and extra-oral low-level laser therapy (LLLT) allowed good healing, significantly reduced pain, trismus, and swelling, and improved quality of life on days 2 and 7 after extraction.
- Sancho-Puchades M et al. [39], 2012, prospective study, clinic, 50 patients aged 18-25, days 1-7, HRQOL-sp: extraction of the impacted mandibular wisdom tooth affects quality of life especially during the first 5 days; intraoperative conscious sedation with midazolam provides comfort for the patient but has no effect in the postoperative period.
- Negreiros RM et al. [40]: row truncated in the source.

Concerning the different prescriptions, five studies examined corticosteroids alone [18,22,30] or combined with NSAIDs [5,6], and three examined antibiotic therapy or prophylaxis [26,28,47]. Regarding general and local factors, seven studies evaluated the effect of age and sex [42,45,46], smoking and poor oral hygiene [43], and the position of the symptomatic or asymptomatic wisdom tooth [21]. The features of each study are reported in Table 3.

Discussion

The extraction of the impacted mandibular wisdom tooth alters patients' quality of life postoperatively. This notion of quality of life comprises several distinct parameters that describe the patient's perception of the extraction, taking into account their worries and expectations and the factors that improve or worsen the postoperative period. Regarding functional limitation, Deepti C et al. [1], Aravena P et al. [2], and Shugars DA et al. [3] described several components: difficulty working, performing sports and leisure activities, discomfort on opening the mouth (which may worsen if trismus sets in), and difficulty pronouncing words. Regarding pain, several authors, notably Xie L et al. [5], Braimah RO et al. [6], Lindeboom JA et al. [19], and Ai Lyn Lau et al. [22], discussed the value of preoperative anti-inflammatory prescription or of an iodine tampon placed in the postoperative socket for pain reduction. We also distinguish the physical disturbance represented by a change in diet and the psychological suffering that leads to a temporary low mood which, according to most authors, diminishes and disappears from the 3rd postoperative day [1,3,11]. To assess the impact of mandibular third molar extraction on patient quality of life, the studies in this review used specific instruments: the OHIP-14, OHQoL-UK, HRQOL, EQ-5D-3L, and UW-QOL.
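For context, the OHIP-14 cited above is scored by summing 14 items rated 0 (never) to 4 (very often), giving a total of 0 to 56 where higher scores mean worse oral-health-related quality of life; a minimal scoring sketch follows, with hypothetical example responses:

```python
def ohip14_score(responses):
    """Sum the 14 OHIP-14 item ratings (each 0=never ... 4=very often).

    Returns a total of 0-56; higher totals indicate worse
    oral-health-related quality of life.
    """
    if len(responses) != 14:
        raise ValueError("OHIP-14 requires exactly 14 item responses")
    if any(not 0 <= r <= 4 for r in responses):
        raise ValueError("each item must be rated 0-4")
    return sum(responses)

# Hypothetical patient: high 'physical pain' item ratings on day 1,
# consistent with the pain-dominated deterioration reported above.
day1 = [2, 1, 4, 4, 3, 2, 1, 2, 1, 1, 2, 1, 0, 1]
day7 = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
print(ohip14_score(day1), "->", ohip14_score(day7))  # 25 -> 5
```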
In the prospective cohort study of Chuang SK et al. [45], 2007, which used the OHQoL-UK-16, there was a significant deterioration in quality of life during the first 5 days after extraction of the impacted mandibular wisdom tooth, which improved after the 6th day; the two questionnaires used in that study showed no difference between them.

Regarding the scoring systems, higher OHIP-14 and HRQOL scores correlated with a negative impact on quality of life, especially from day 1 to day 7 postoperatively. This finding can be explained by the difficulty of the operation, which involves osteotomy, tooth sectioning, and incision, as well as by possible complications such as trismus, edema, and pain associated with surgical removal of the mandibular third molar [25,31,48]. When the impact of this extraction on quality of life was analyzed separately for each domain, the "physical pain" domain was the one most often recorded by patients (91%) [1,6,22,43]. The present results indicate that pain is the main driver of the deterioration in quality of life after this extraction, greatest on the 1st postoperative day [11,48] and decreasing linearly over follow-up. These results may inform clinical planning when prescribing analgesics for faster patient recovery. Several therapies have been proposed to control postoperative pain and ensure a better quality of life, including aPDT laser [35] and low-level laser therapy (LLLT) [39], ozone therapy [37], and hilotherapy [25], as well as medication in the form of intravenous prednisolone [18], submucosal dexamethasone [5], or bromelain [36].

Conclusion

Many studies have addressed the extraction of impacted mandibular wisdom teeth, and more specifically its effect on clinical quality of life. The differences between these studies, notably in sample size, protocols, study duration, and outcome criteria, allow a more precise exploration of this quality of life across all its parameters. From the present work, a synthetic conclusion can be drawn: the extraction of impacted mandibular wisdom teeth has a negative effect on quality of life during the first postoperative days, but QoL improves progressively when good postoperative instructions are followed.

Provenance and peer review

Not commissioned, externally peer-reviewed.

Compliance with ethical standards

This research involved human participants. This was a retrospective analysis of published cases and did not require informed consent. Ethics approval and consent to participate were not required for this review.
Quality assessment of the included studies was based on the following checklist items:
2. Was the study population clearly specified and defined?
3. Was the participation rate of eligible persons at least 50%?
4. Were all the subjects selected or recruited from the same or similar populations (including the same time period)? Were inclusion and exclusion criteria for being in the study prespecified and applied uniformly to all participants?
5. Was a sample size justification, power description, or variance and effect estimate provided?
6. For the analyses in this paper, were the exposure(s) of interest measured prior to the outcome(s) being measured?
7. Was the timeframe sufficient so that one could reasonably expect to see an association between exposure and outcome if it existed?
8. For exposures that can vary in amount or level, did the study examine different levels of the exposure as related to the outcome (e.g., categories of exposure, or exposure measured as a continuous variable)?
9. Were the exposure measures (independent variables) clearly defined, valid, reliable, and implemented consistently across all study participants?
10. Was the exposure(s) assessed more than once over time?
11. Were the outcome measures (dependent variables) clearly defined, valid, reliable, and implemented consistently across all study participants?
12. Were the outcome assessors blinded to the exposure status of participants?
13. Was loss to follow-up after baseline 20% or less?
14. Were key potential confounding variables measured and adjusted statistically for their impact on the relationship between exposure(s) and outcome(s)?
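A minimal sketch of how such a checklist can be tallied into a study-quality summary; the good/fair/poor cutoffs below are hypothetical conveniences, not part of the tool itself:

```python
def rate_study(answers):
    """answers: dict mapping item number -> 'yes' / 'no' / 'nr' (not reported).

    Returns the proportion of applicable items answered 'yes' and a
    coarse rating; the rating thresholds here are hypothetical.
    """
    applicable = {k: v for k, v in answers.items() if v != "nr"}
    share = sum(v == "yes" for v in applicable.values()) / len(applicable)
    rating = "good" if share >= 0.75 else "fair" if share >= 0.5 else "poor"
    return share, rating

example = {2: "yes", 3: "yes", 4: "yes", 5: "no", 6: "yes", 7: "yes",
           8: "nr", 9: "yes", 10: "no", 11: "yes", 12: "no", 13: "yes",
           14: "no"}
print(rate_study(example))  # (0.667, 'fair') for this hypothetical study
```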
2022-08-23T15:02:35.219Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "274246fde20f9b22897c61cf715ebc73d3ecb306", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.amsu.2022.104387", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83a3d630fa5b81ca36d61de985cd55ed3d86f150", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246670364
pes2o/s2orc
v3-fos-license
Intratumoral Microbiota Impacts the First-Line Treatment Efficacy and Survival in Non-Small Cell Lung Cancer Patients Free of Lung Infection

Background: Microecological disorders are known to accompany lung cancer development, and in principle the intratumoral microbiota (ITM) can affect lung cancer (LC) survival and treatment efficacy. This study followed non-small cell lung cancer (NSCLC) patients without lung infection to determine whether the ITM indeed impacts first-line treatment efficacy and survival. Methods: We enrolled all patients diagnosed with NSCLC in our department from 2017 to 2019 whose tumor samples were available (through surgery or biopsy) and sent for pathogen-targeted sequencing. All patients received first-line treatment according to their individual situation. In the short term, the efficacy of the first-line treatment was recorded; during follow-up, survival status, progression events, and overall survival (OS) were recorded whenever a patient could be contacted. Results: Fifty-three patients were included, and the analysis focused on the stage III and IV cases with ADC, SCC, or ASC tumors (47 cases). Several bacteria were associated with LC status and progression, including N stage, metastasis sites, epidermal growth factor receptor (EGFR) mutation, first-line outcome, and subsequent survival. The risk bacteria were Serratia marcescens, Actinomyces neesii, Enterobacter cloacae, and Haemophilus parainfluenzae; the protective ones (against LC development and progression) were Staphylococcus haemolyticus and Streptococcus crista. In logistic regression, two-year survival could be predicted from the results of four bacteria (Haemophilus parainfluenzae, Serratia marcescens, Acinetobacter jungii, and Streptococcus constellation), with an accuracy of 90.7%. Conclusion: The ITM is linked to malignancy, EGFR mutation, first-line outcome, and survival in NSCLC. Our results imply a potential anti-NSCLC activity of antibiotics when used reasonably. A deeper understanding of the characteristics of the ITM and its interactions with NSCLC tumors and immune cells remains necessary and is significant for individualized approaches to LC treatment.

Introduction

Lung cancer (LC), especially non-small cell lung cancer (NSCLC), is one of the primary causes of death worldwide. Traditionally, the carcinogenic factors of LC have comprised genetic and environmental factors, but in recent years it has been recognized that the microbial flora may also influence LC development [1]. The role of the microbiome has become increasingly clear: variation of the microecology has been observed in LC patients [2], and different clinicopathologies may be related to different states of the lung microbiota (LM) [3]. The LM is also involved in LC onset and malignant progression, so understanding the diverse contributions of the bacterial microbiota to carcinogenesis is of great importance for LC diagnosis and treatment. Current studies do not distinguish LC cases with or without lung infection (LI). Bacterial and viral infections influence patients' prognosis by affecting the immune system and impairing the outcome of anticancer treatments [4]; such cases carry two major diseases at once, and their situation is more complicated than LC alone. For patients free of LI, the features of the tumor-associated microbiota are particularly informative for investigating LM-driven tumorigenesis.
Additionally, published studies concerning the intratumoral microbiota (ITM) have mainly focused on basal clinical characteristics. Theoretically, the ITM may impact the immune response and inhibit treatment efficacy, but these factors and their consequences are poorly understood in LC. It is therefore reasonable to evaluate whether such interactions affect LC survival and treatment efficacy; however, very few studies have analyzed prognosis or investigated these potential influences of the ITM. Here, we conducted a 5-year follow-up study of NSCLC patients without respiratory infection and show that the ITM indeed impacts first-line treatment efficacy and survival, pointing to a potential anti-NSCLC activity of antibiotics. A deeper understanding of the ITM and its interactions with NSCLC tumors and immune cells remains necessary and is significant for individualized approaches to LC treatment.

Patients. We enrolled all patients diagnosed with NSCLC in our department since 2017 whose tumor samples were available (through surgery or biopsy) and sent for pathogen-targeted sequencing. The inclusion criteria were: (1) a diagnosis of lung cancer, including adenocarcinoma (ADC), squamous cell carcinoma (SCC), and adenosquamous carcinoma (ASC); and (2) availability of basic demographic information and tumor characteristics. The exclusion criterion was a diagnosis of definite respiratory infection or other systemic disease. Smoking history and the average number of cigarettes smoked per year were obtained, and EGFR and TP53 mutation features were extracted from the electronic medical record system. In addition, 5-µm paraffin-embedded tumor-tissue sections were prepared, and PD-L1 expression by immunohistochemistry was obtained from the Department of Pathology. All patients received first-line treatment according to their individual situation. In the short term, the efficacy of the first-line treatment was recorded; during the follow-up (at most 5 years), survival status, progression events, progression-free survival (PFS), and overall survival (OS) were recorded whenever a patient could be contacted.

Targeted Sequencing of Intratumoral Microbiota (ITM). All collected tumor samples underwent pathogen-targeted sequencing on the Pathogeno One pan-infectious-pathogen high-throughput sequencing system (Shanghai Bingyuan Medical Technology Co.). Each patient's report was documented in the dataset, and for each known pathogenic microorganism two fields were used for analysis: the read count and the presence or absence of the microorganism.

Outcome Measures. Fifty-three patients were included, and 47 cases were followed for analysis. The risk bacteria were Serratia marcescens, Actinomyces neesii, Enterobacter cloacae, and Haemophilus parainfluenzae; the protective ones (against LC development and progression) were Staphylococcus haemolyticus and Streptococcus crista.

Statistical Analysis. Data are expressed as numbers with proportions (%), mean with SD, or median with 95% confidence interval (CI). Differences in categorical variables were compared with the chi-squared test, and one-way ANOVA was used for comparisons of three or four groups. Overall survival (OS) in relation to each bacterial result was evaluated by Kaplan-Meier survival curves and the log-rank test (a minimal sketch of this analysis follows below). The Cox proportional hazards model was used to assess the relationship between multiple factors and overall postoperative survival; prognostic factors significant in univariate analysis were further included in the multivariate analysis. A P value < 0.05 was considered statistically significant.
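A minimal sketch of the Kaplan-Meier and log-rank analysis described above, using the lifelines package; the follow-up records here are hypothetical stand-ins, not the study's data:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months of OS, death indicator, and
# presence/absence of one bacterium.
df = pd.DataFrame({
    "os_months": [6, 14, 20, 9, 30, 41, 12, 25, 33, 18],
    "death":     [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "bacterium": [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],
})

pos, neg = df[df.bacterium == 1], df[df.bacterium == 0]

kmf = KaplanMeierFitter()
kmf.fit(pos.os_months, event_observed=pos.death, label="positive")
ax = kmf.plot_survival_function()
kmf.fit(neg.os_months, event_observed=neg.death, label="negative")
kmf.plot_survival_function(ax=ax)

# Log-rank test comparing the two survival curves.
res = logrank_test(pos.os_months, neg.os_months,
                   event_observed_A=pos.death, event_observed_B=neg.death)
print(res.p_value)
```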
The pathological types were as follows: adenocarcinoma (ADC) 26 (49.1%), squamous cell carcinoma (SCC) 21 (39.6%), and adenosquamous carcinoma (ASC) 3 (5.7%), plus three cases of other types (two poorly differentiated carcinomas and one large cell lung cancer). The main metastasis sites were the mediastinal lymph nodes, lung, bone, liver, and brain.

Association between Intratumoral Microbiota (ITM) and Disease Characteristics. Given that there were only three stage I/II cases and only three cases of other pathological types (poorly differentiated carcinoma and large cell lung cancer), the following analysis focused on the stage III and IV cases with ADC, SCC, or ASC tumors (47 cases). Because results for microbiota with no more than four positive cases might be unreliable, such rare results were culled from the raw data. First, the association between the microbiota and the pathological type was probed. Among ADC, SCC, and ASC, the ASC tumors had a higher abundance of Serratia marcescens (2.67 ± 4.619 counts) than ADC (0.27 ± 0.962 counts, P < 0.05) and SCC (0 counts, P < 0.05); however, with only three ASC cases, this conclusion remains to be verified. Next, there was no association between the ITM and the major stage (T or M stage), but across N0 to N3 the presence of Actinomyces neesii and Haemophilus showed distinct patterns (Table 2): these two bacteria were negatively related to lymph node metastasis (P < 0.01). The main metastasis organs and tissues (mediastinal lymph nodes, lung, bone, liver, brain, and pleura) also showed noticeable associations with the ITM: tumors with Serratia marcescens were more likely to develop brain metastasis (P < 0.01, Table 3), and those with Enterobacter cloacae were more likely to metastasize to the mediastinal lymph nodes (P < 0.05, Table 3). Moreover, for the first time, we observed that the ITM is linked to EGFR mutation (Table 4): EGFR mutation was negatively related to Haemophilus parainfluenzae (P < 0.05) but positively related to Serratia marcescens (P < 0.01). Furthermore, Acinetobacter jungii was positively correlated with PD-L1 expression (PD-L1 positive/negative = 4/8 in Acinetobacter jungii-positive cases versus 7/41 in negative cases; chi-square = 4.168, P = 0.041). Collectively, the ITM is notably associated with the disease characteristics of NSCLC.

Association between ITM and the First-Line Treatment Outcomes. We next evaluated whether the ITM affects the efficacy of first-line treatments (targeted therapy or chemotherapy), again analyzing only the stage III and IV cases. Overall, there was no association between the ITM and the response to first-line treatment; in the hierarchical analysis, however, the presence of Haemophilus parainfluenzae was negatively correlated with the response to first-line treatment in stage IV patients (Table 5). These group comparisons are contingency-table tests; a minimal sketch follows below.
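A sketch of the chi-squared contingency test used for the comparisons above; the counts are hypothetical placeholders, not the study's table, and the paper does not state whether a continuity correction was applied:

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = bacterium positive/negative,
# columns = outcome positive/negative (hypothetical counts).
table = [[9, 6],
         [8, 37]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
print("expected counts:", expected.round(2))
```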
Association between ITM and Survival. Initially, we used the Kaplan-Meier method to evaluate the association between the ITM and survival in stages III and IV; whenever the number of ITM/target-event (progression or death) double-positive cases was two or fewer, that index was omitted. As with the first-line treatment outcomes, the presence of Haemophilus parainfluenzae was related to poorer PFS in stage IV patients (Table 6 and Figure 1(a)). When stage III and IV cases were pooled, Staphylococcus haemolyticus was linked to longer PFS (Table 7 and Figure 1(b)). Meanwhile, in the pooled cases (stage III and IV), Serratia marcescens was related to better OS (Table 8 and Figure 1(c)) and the presence of Haemophilus parainfluenzae to poorer OS (Table 9 and Figure 1(d)). Cox regression analysis (using the Enter model) showed that, besides Staphylococcus haemolyticus, Streptococcus crista was also associated with better PFS (Table 10); conversely, Haemophilus parainfluenzae and Corynebacterium jergeri were two risk factors for OS (Table 11). Finally, in a logistic regression model, two-year survival was predicted (for stage III or IV patients with ADC, SCC, or ASC) from seven variables: age, major stage, pathological type, and the results of four bacteria (Haemophilus parainfluenzae, Serratia marcescens, Acinetobacter jungii, and Streptococcus constellation). The variables and their contributions are listed in Table 12. At a cutoff of 0.5, the regression yielded 31 true-negative cases, 1 false-positive case, 3 false-negative cases, and 8 true-positive cases, an accuracy of 90.7%.
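A minimal sketch of a two-year-survival logistic model of the kind described above; the data below are random stand-ins for the 43 analyzed cases, which are not public, so only the workflow (fit, cutoff at 0.5, confusion matrix, accuracy) mirrors the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
n = 43
age = rng.normal(62, 10, n)
stage = rng.integers(3, 5, n)            # major stage: III=3, IV=4
ptype = rng.integers(0, 3, n)            # pathological type: ADC/SCC/ASC code
bacteria = rng.integers(0, 2, (n, 4))    # H. parainfluenzae, S. marcescens,
                                         # A. jungii, S. constellation flags
X = np.column_stack([age, stage, ptype, bacteria])
y = rng.integers(0, 2, n)                # two-year survival (1 = alive)

clf = LogisticRegression(max_iter=1000).fit(X, y)
pred = (clf.predict_proba(X)[:, 1] >= 0.5).astype(int)  # cutoff 0.5

print(confusion_matrix(y, pred))  # rows: true class, columns: predicted
print(accuracy_score(y, pred))    # the paper reports 90.7% on its own data
```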
Discussion

It has been recognized that LC has non-negligible links to pathogenic microorganisms such as Haemophilus influenzae, Moraxella catarrhalis, Granulicatella, Abiotrophia, Streptococcus, and Mycobacterium tuberculosis [5-8]. Haemophilus parainfluenzae has been regarded as an indicator of LM changes triggered by preoperative prophylaxis in LC patients [9]; it was observed in 43.3% to 63.3% of LC cases [9], and it is reasonable to believe that this bacterium is a cancer-promoting strain. Interestingly, there is also some seemingly contrary evidence: prodigiosin, a secondary metabolite isolated from a culture of Serratia marcescens, induces LC apoptosis through both caspase-dependent and caspase-independent pathways [10]. It is thus too early to tell whether Serratia marcescens drives or suppresses LC; however, based on Table 3, if Serratia marcescens drives LC progression, a possible route is metastasis to the brain. Similarly, as shown in Table 2, the driving effect of Actinomyces neesii may operate through the promotion of lymph node metastasis. Haemophilus parainfluenzae, as a risk factor for survival, may promote EGFR-wild-type carcinomas rather than EGFR-mutant ones; such cases cannot be treated with targeted TKIs, which is a possible reason for their poor survival. The mechanisms underlying the impact of the ITM on survival may involve several aspects. First, there are direct effects [11]: dysbiosis of some carcinogenic microbiomes causes direct DNA damage and inflammation [12], and inflammation triggered by microbial dysregulation strongly affects invasion and angiogenesis, significantly driving malignant progression; known targets of microbiome-induced inflammation include TLRs, NF-κB, and STAT3 [13-15]. These direct effects exerted by microbes are very possibly carcinogenic. There are also indirect mechanisms: by modulating the immune response, the microbiota can influence the response to treatment and the prognosis. Beyond the inflammatory effects triggered by the ITM, its inhibitory effects on the immune system can also play an essential role [12]: at least in part, the ITM can drive the exhaustion of immune cells, which then inhibit antitumor immunity together with the tumor cells, for example by suppressing NK cells [16]. However, because the digestive tract harbors a far richer flora, most attention to the relationship between microbiome and cancer has been paid to colorectal and gastric cancer; data on the impact of the ITM on LC prognosis are limited, although related research can serve as a reference. Recently, a Chinese study found nine bacteria enriched in the lungs of NSCLC patients [17], and analyses of T cells and B cells implied that these lung bacteria may change immune cell infiltration in LC tissues. A retrospective study of 69 NSCLC patients reported that those treated with anti-PD-1 antibodies who received antibiotics had a greatly decreased objective response rate, OS, and PFS compared with those who did not use antibiotics [18]. This result highlights that inappropriate use of antibiotics may alter the flora of the tumor environment and impair the treatment effect; indeed, the use of antibiotics has been associated with an elevated risk of LC onset [19,20]. Moreover, beyond the local lung tissue, another interesting question is whether the gut-lung axis may shape the outcome of chemotherapy and later survival through the microbiota; this is being investigated in a multicenter, prospective, double-blind randomized trial [21] whose final result has not yet been announced. Still, the present study has some limitations. The main shortcoming is the small sample size: many classification cells contained around five cases, which is the major reason for the inconsistency between the univariate analysis and the binary regression analysis. Also, owing to the limited sample size, the performance of the bacteria in the 2-year prediction model is not outstanding, and we could not split the dataset into training and test sets, so the generalizability of the model remains unclear.

Conclusion

This novel study showed that the ITM is related to malignancy, EGFR mutation, first-line outcome, and survival in NSCLC. Our results also imply a potential anti-NSCLC activity of antibiotics when used reasonably. A deeper understanding of the characteristics of the ITM and its interactions with NSCLC tumors and immune cells remains necessary and is significant for individualized approaches to LC treatment.

Data Availability

The data used to support this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2022-02-09T16:20:48.938Z
2022-02-07T00:00:00.000
{ "year": 2022, "sha1": "c7623668012d9cd3b33b1cf921bd91f6fb7f1788", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jhe/2022/5466853.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af045a3ae2760dd956c053843a97fdc6622f712c", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
232259663
pes2o/s2orc
v3-fos-license
Generation and Characterization of a Polyclonal Human Reference Antibody to Measure Anti-Drug Antibody Titers in Patients with Fabry Disease

Male patients with Fabry disease (FD) are at high risk of forming antibodies to the recombinant α-galactosidase A (AGAL) used for enzyme replacement therapy. Because the disease progresses rapidly, identifying patients at risk is highly warranted; however, suitable references and standardized protocols for anti-drug antibody (ADA) determination do not currently exist. Here we generate a comprehensive patient-derived antibody mixture as a reference, allowing ELISA-based quantification of antibody titers from individual blood samples. Serum samples from 22 male FD patients with ADAs against AGAL were pooled and purified by immune adsorption. ADA affinities for agalsidase-α, agalsidase-β, and Moss-AGAL were measured by quartz crystal microbalance with dissipation monitoring (QCM-D). AGAL-specific immune adsorption yielded a polyclonal ADA mixture showing concentration-dependent binding and inhibition of AGAL. Titers in raw sera and in purified total IgG (r² = 0.9063 and r² = 0.8952, both p < 0.0001) correlated with the individual inhibitory capacities of the ADAs. QCM-D measurements demonstrated comparable affinities of the reference antibody for agalsidase-α, agalsidase-β, and Moss-AGAL (KD: 1.94 ± 0.11 µM, 2.46 ± 0.21 µM, and 1.33 ± 0.09 µM, respectively). The reference antibody permits ELISA-based determination of ADA titers and quantification of absolute concentrations; furthermore, ADAs from FD patients have comparable affinities for agalsidase-α, agalsidase-β, and Moss-AGAL.

Introduction

Fabry disease (FD, Online Mendelian Inheritance in Man (OMIM) no. 301500) is a rare X-linked lysosomal storage disorder caused by deficiency of the α-galactosidase A enzyme (AGAL; EC 3.2.1.22). Progressive accumulation of the AGAL substrate globotriaosylceramide (Gb3) results in a life-threatening multisystemic disease including heart failure, cardiac arrhythmia, cerebrovascular events, and end-stage renal disease [1]. Currently, in addition to chaperone therapy, FD is treatable by enzyme replacement therapy (ERT) using either agalsidase-α (0.2 mg/kg body weight every other week; Shire/Takeda, Lexington, MA, USA) or agalsidase-β (1.0 mg/kg body weight every other week; Sanofi-Genzyme, Cambridge, MA, USA) [2,3]. Treatment with either agalsidase-α or agalsidase-β has demonstrated beneficial effects on disease progression and manifestations in affected patients [4]. As in other lysosomal storage disorders treated by ERT, such as Pompe and Gaucher disease, males with classical FD are at high risk of forming persistent neutralizing anti-drug antibodies (ADAs) against both components of ERT [5-8]. Because the disease progresses rapidly, measuring ADA titers is important for individually tailored treatment management in affected patients. Several different approaches are currently used to measure ADAs, including ELISA-based assays (some including IgG subclass analyses) [5,9], inhibition-based assays [5,7,10,11], cell-based assays (to identify effects on cellular ERT uptake) [12], and bedside tests [13]. All of these assays can identify ADAs, but titers are hardly comparable between assays or even between patient cohorts measured by different laboratories. In general, ELISA-based assays are the most popular because of their reproducibility and feasibility.
However, a weakness of these ELISA-based ADA measures is the lack of an appropriate reference antibody that would allow quantification of the absolute antibody concentration in a patient's sample. Our hypothesis is that a comprehensive patient-derived antibody mixture used as a reference allows ELISA-based quantification of antibody titers from single blood samples. In contrast to currently applied protocols, this method would allow a simple, fast, and reliable determination of antibody concentrations in routine clinical practice. In the current study, we pooled serum samples from 22 FD patients positive for neutralizing ADAs to generate a reference antibody against recombinant AGAL. The reference antibody was then used to measure individual ADA titers in 40 ADA-positive FD patients, and the ELISA-based titers were validated against inhibition-based titers. Finally, the purified reference antibody was characterized biochemically by measuring its binding affinity to three different recombinant AGALs.

Generation of an Anti-AGAL Reference Antibody from Human Serum Samples

The current study aimed to generate an anti-AGAL reference antibody for the direct measurement of patients' anti-AGAL antibody concentrations. Agalsidase-α-coupled NHS-activated high-performance columns were used to extract anti-AGAL antibodies from the sera of 22 AGAL-inhibition-positive male patients by immune adsorption (Figure 1). To verify successful immune adsorption, SDS-PAGE with subsequent Coomassie staining and western blot analysis was performed (Figure 1A). The mouse IgG control demonstrated the typical pattern of heavy (50 kDa) and light (25 kDa) IgG chains, and IgG levels were most prominent in the elution fraction compared with the raw serum and flow-through fractions. Western blot analysis was performed to check whether agalsidase-α (51 kDa) dissociates from the columns during the immune adsorption process (Figure 1B): agalsidase-α was not detectable in the raw serum and flow-through fractions, although a slight agalsidase-α signal was observed in the elution fraction. Additional control ELISAs with BSA, a negative control peptide, and a non-homologous AGAL from Aspergillus niger as baits excluded non-specific binding of the reference antibody (Figure A1). Next, ELISA and inhibition assays were performed to determine whether the IgGs in the elution fraction were AGAL-specific and retained inhibitory capacity. ELISA revealed high concentrations of AGAL-binding IgGs in the elution fraction compared with the raw serum fraction (p < 0.0001) and the flow-through fraction (p < 0.0001) (Figure 1C). Inhibition of agalsidase-α by the anti-AGAL reference antibody was demonstrated by inhibition assays (Figure 1D): one µg of the flow-through fraction (immune-adsorbed serum) inhibited less agalsidase-α (111.6 ± 12.2 pg/µg) than one µg of raw serum (225.2 ± 42.9 pg/µg; p = 0.0002), while the highest inhibitory capacity was observed for the elution fraction (13,737.0 ± 751.4 pg/µg) compared with raw serum (p < 0.0001) and the flow-through fraction (p < 0.0001), supporting the ELISA-based results. Furthermore, these data demonstrate that, in vitro, a 24-fold molar excess of ADAs is required to inhibit a given amount of AGAL. Raw sera from 40 patients and their corresponding purified total IgGs were then used for the ELISA-based determination of anti-AGAL antibody concentrations (a sketch of the standard-curve quantification underlying such an ELISA follows below).
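A reference antibody enables absolute quantification by interpolating unknown samples on a standard curve; a minimal sketch using the common four-parameter logistic (4PL) model follows, where the concentrations and optical densities are hypothetical and the 4PL choice is an assumption, not necessarily the curve model used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4PL: response at concentration x (a = upper, d = lower asymptote)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical dilution series of the reference antibody (µg/mL) and
# the corresponding ELISA optical densities.
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
od = np.array([0.08, 0.12, 0.40, 0.70, 1.60, 2.00, 2.30])

popt, _ = curve_fit(four_pl, conc, od, p0=[2.4, 0.05, 2.0, 1.0], maxfev=10000)

def od_to_conc(y, a, d, c, b):
    """Invert the fitted 4PL to read a concentration off the standard curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

unknown_od = 1.1
print(od_to_conc(unknown_od, *popt))  # interpolated anti-AGAL IgG (µg/mL)
```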
Measured concentrations of AGAL-specific IgGs in patients' sera showed a high correlation between titers from purified total IgGs and sera across samples (patients) (r2 = 0.9972, p < 0.0001; Figure 1E). This was confirmed by a Bland-Altman plot showing a low percentage difference between AGAL-specific IgG concentrations measured from patients' sera and their respective purified total IgGs (Figure 1F). Since 18 of the 40 analyzed patients' sera were not represented in the anti-AGAL reference antibody, individual data (represented and not represented in the anti-AGAL reference antibody) are shown in Figure A2. Again, concentrations of AGAL-specific IgGs showed no significant difference between patients' sera and the respective purified total IgGs (Figure A2A,B), resulting in high correlations both for samples represented (r2 = 0.9790, p < 0.0001; Figure A2C) and not represented in the anti-AGAL reference antibody (r2 = 0.9997, p < 0.0001; Figure A2D). This was also supported by Bland-Altman plots showing that almost all data points lay within the 95% confidence intervals (−27.56% and 21.2%, Figure A2E; −28.7% and 13.34%, Figure A2F).

Validation of Anti-AGAL Antibody Concentrations in Human Samples
Validation of the ELISA-based anti-AGAL antibody concentrations was performed using titration analyses previously established in our lab [8]. The amount of agalsidase-α required for antibody saturation was measured for 36 patients and compared to the ELISA-based AGAL-specific IgG concentrations (Figure 2). Patients' sera (Figure 2A) as well as the respective purified total IgGs (Figure 2B) showed high correlations with the inhibitory capacities (r2 = 0.9063 and r2 = 0.8952, both p < 0.0001). For further confirmation, the ELISA-based anti-AGAL antibody concentrations of 12 patients' sera were used to compute the amount of agalsidase-α required for antibody saturation. Computed and measured amounts of agalsidase-α required for antibody saturation were plotted against the corresponding ELISA-based anti-AGAL antibody concentrations (r2 = 0.6622, p < 0.0001; Figure A3).

Biochemical Characterization of the Reference Antibody
The literature shows cross-reactivity of neutralizing ADAs for agalsidase-α and agalsidase-β [5]. However, to the best of our knowledge, the dissociation constants (KD) of neutralizing ADAs for the enzymes used to treat FD have not been determined so far. In this study, we quantified the interaction between ADAs and different AGALs by quartz crystal microbalance with dissipation monitoring (QCM-D) for the first time. To immobilize AGAL on quartz crystals through streptavidin-biotin interactions, agalsidase-α, agalsidase-β and Moss-AGAL were biotinylated. Western blot analyses demonstrated successful labeling of all three AGALs (Figure A4A), and activity measurements revealed no functional disturbances after biotin labeling (data not shown). Next, the SiO2-coated quartz crystals were coated with a lipid bilayer, followed by a streptavidin monolayer (Figure A4B). In QCM-D, the binding of macromolecules to the crystal is detectable as a decrease in frequency, allowing label-free, real-time monitoring of surface binding events. Subsequently, the biotin-labeled AGALs were immobilized on the streptavidin layer on the crystal (Figure A4B).
Initial QCM-D analyses, in which a commercially available AGAL antibody was passed over the AGAL-functionalized crystals, showed significant binding of the antibody to the crystals and allowed detection of all three AGALs (Figure A4C). Subsequently, different concentrations of the anti-AGAL reference antibody were passed over agalsidase-α-, agalsidase-β- and Moss-AGAL-functionalized crystals, demonstrating concentration-dependent binding compared to the control (without biotin-coupled AGAL; Figure 3). The adsorbed mass was calculated from the changes in frequency using the Sauerbrey equation (Figure 3). Subsequently, the concentration-dependent binding of the ADAs to the crystals was used to compute the dissociation constants and revealed comparable binding to agalsidase-α and agalsidase-β (KD: 1.94 ± 0.11 µM and 2.46 ± 0.21 µM, respectively) as well as to Moss-AGAL (KD: 1.33 ± 0.09 µM).

Discussion
The formation of neutralizing ADAs against infused AGAL has a major impact on therapy efficiency and thus disease progression in affected male patients with FD [7,8]. Therefore, it is highly warranted to identify patients at risk and quantify ADA titers for subsequent individualized therapeutic approaches. In the current study, we provide a method to generate a polyclonal reference antibody from human serum samples for ELISA-based ADA titer measurements. Antibody titers from different studies, and even from the same cohorts, are usually difficult to compare. This problem is also well known in other lysosomal storage disorders, and the reasons are multifactorial. In the absence of a reference antibody, ELISA-based antibody measurements are usually expressed as relative values compared to either an ERT-naïve sample (best case) or, if no ERT-naïve sample is available (most commonly), a negative control sample. Furthermore, commercially available antibodies recognize only a limited number of epitopes, which do not necessarily represent the antibody of interest, and are raised in other host species, requiring different secondary antibodies for detection than the human samples. Our method, using a polyclonal reference antibody from human samples, allowed the measurement of ADAs reflecting the real antibody concentration (µg/mL) in affected patients. After purification, the reference antibody still demonstrated inhibitory capacity against AGAL. However, as recently demonstrated, ADAs from affected patients with FD can also bind to other, non-catalytically important domains, for example with direct effects on cellular uptake [12]. Since a 24-fold molar excess of the reference antibody was required to inhibit AGAL, it can be concluded that, in addition to activity-neutralizing ADAs, non-inhibitory antibodies against AGAL were also purified. However, no conclusions should be drawn about the general inhibitory capacities of individual ADAs in patients, since some antibodies with inhibitory effects might be missed during purification due to weaker binding affinities. Thus, although an ELISA-based measure might be superior for detecting all free ADAs in affected patients compared to an inhibition assay, functional assays should also be performed for a comprehensive determination of the ADA status. Our ELISA data were further supported by the finding that individual titers correlated well with the amount of enzyme required for ADA saturation during infusions [8].
In this respect, since ADAs can be saturated by AGAL during infusions [11], serum samples for ADA titer measurements should ideally be drawn directly before the next infusion, or at least one week after an infusion, to minimize false negative results. ADAs from patients with FD demonstrate a high cross-reactivity against agalsidase-α as well as agalsidase-β [5,7]. However, to the best of our knowledge, the affinities (KD) for the different AGALs have been unknown so far. Our QCM-D analysis shows that the polyclonal reference antibody has comparable KDs against agalsidase-α and agalsidase-β, as well as against Moss-AGAL. Moss-AGAL (ELEVA) is a recombinant human AGAL expressed in the genetically modified moss Physcomitrella patens. Preclinical studies suggest improved cellular uptake of the enzyme via mannose receptors instead of mannose-6-phosphate receptors [14], while data from a phase I study showed good safety and tolerability of Moss-AGAL after a single dose of 0.2 mg/kg i.v. [15]. Although we observed slight differences in the KDs against agalsidase-α, agalsidase-β and Moss-AGAL, the KDs were of the same order of magnitude (10−6 M). In addition, it can be concluded that the plant-based production of AGAL does not result in an increased affinity of the present ADAs. High ADA affinities seem to be associated with increasing inhibitory capacities [16], while decreasing affinities might indicate beginning tolerization [17]. Hence, future research determining individual ADA affinities of affected FD patients against the infused enzyme is warranted to assess whether individual differences occur and whether they affect treatment outcomes through reduced AGAL inhibition; decreasing affinities over longitudinal measurements might indicate beginning tolerization of the patient. Therapeutic options or conditions lowering ADA affinities could also be an aim of future research.

Patients' Samples
Adult male FD patients (n = 40) with at least 6 months of ERT (agalsidase-α or agalsidase-β) and positive for neutralizing ADAs were included. The presence of neutralizing ADAs was determined and measured routinely in our lab using serum-mediated inhibition assays [5-7]. The time point of serum collection and determination of neutralizing ADA status was the last visit (2016-2020). Serum samples for the reference antibody or for individual ADA measurements were drawn at least one week after the last infusion. Only one sample from this visit was used for the generation of the reference antibody and for the subsequent ADA titer measurements. Previous reports demonstrated significant variation of antibody epitopes against AGAL [12,18]. Therefore, patients' samples with known ADA epitopes were used [18] to ensure that the reference antibody represents a wide spectrum of antibody epitopes against AGAL.

Purification of Total IgGs from Human Sera
Total IgGs from patients' sera for titer measurements were purified by negative selection as described previously, using the Melon Gel IgG Spin Purification Kit (Thermo Fisher Scientific, Darmstadt, Germany) according to the manufacturer's instructions [8,11]. In brief, 100 µL serum were diluted 1:10 with Melon Gel buffer, incubated with 100 µL settled Melon Gel, and inverted for 5 min at room temperature. After protein adsorption, total IgGs were separated via centrifugation at 12,000× g for 5 min. BCA (Thermo Fisher Scientific) and SDS-PAGE analyses were performed as reported previously to estimate the purified IgG content and to verify the success of the purification [11].
Generation of a Reference Antibody by Immune Adsorption
To adsorb and purify anti-AGAL antibodies from patients' sera by positive selection, immune adsorption was performed as described previously [12]. In short, 1 mg agalsidase-α (Shire/Takeda) was coupled to 1 mL HiTrap N-hydroxysuccinimide (NHS)-activated high-performance columns (GE Healthcare, Freiburg, Germany; no. 17071601) according to the manufacturer's instructions. After ligand coupling, the columns were washed, deactivated and equilibrated. Sera from 22 patients (100 µL each) were pooled, diluted 1:10 with 1× PBS and loaded onto the column. The column was washed with eight column volumes of 1× PBS, and the flow-through fraction containing the unbound serum proteins was collected for later analysis. Anti-AGAL antibodies were eluted with 100 mM glycine (pH 2.2) and directly neutralized with 1 M Tris-HCl (pH 9). Elution fractions were concentrated, dialyzed and collected in 1× PBS to obtain the anti-AGAL reference antibody. The raw serum, flow-through, and elution fractions were used for SDS-PAGE analysis followed by Coomassie staining and western blot analysis to verify successful immune adsorption. To further characterize the reference antibody, ELISA and inhibition assays were performed.

SDS-PAGE and Western Blot Analysis
To detect IgGs within the raw serum, flow-through, and elution fractions, SDS-PAGE was performed. In short, 15 µg samples were used for SDS-PAGE followed by Coomassie staining, performed according to the manufacturer's instructions (Thermo Fisher Scientific). Mouse IgG was loaded as a positive control. For western blot analysis, 10 µg samples and 100 ng agalsidase-α as a positive control were blotted onto PVDF membranes. After blocking overnight in Tris-buffered saline with 5% milk powder, detection was performed using an anti-AGAL antibody (ab168341, Abcam, Cambridge, UK; working concentration: 100 ng/mL) and a secondary horseradish-peroxidase-labeled goat anti-rabbit IgG antibody (12-348, Sigma-Aldrich, St. Louis, MO, USA; working concentration: 100 ng/mL).

ELISA-Based Measurement of AGAL-Binding IgGs
96-well plates were coated with 100 ng agalsidase-α per well overnight at 4 °C and washed three times with PBS. For negative controls, 100 ng BSA per well was used. After blocking with 2% BSA/PBS for 1 h at room temperature, the wells were washed again. To detect the anti-AGAL antibodies extracted from patients' sera by immune adsorption, serial dilutions of the elution, flow-through, and raw serum fractions were loaded into the wells and incubated for 2 h at room temperature. After five washing steps with 0.1% Tween-20/PBS, HRP-conjugated anti-hIgG antibodies (ab98624, Abcam; working concentration: 20 ng/mL) were applied and incubated for 1 h at room temperature. The wells were washed again five times with 0.1% Tween-20/PBS. For IgG detection, 50 µL 1-Step TMB-ELISA Substrate Solution (Thermo Fisher Scientific) was added to the wells, followed by 50 µL 2 M sulfuric acid to stop the reaction after 15 to 20 min. Absorption was measured at 450 nm. To measure ADA titers from patients' sera, serial dilutions of 40 sera (22 represented in the anti-AGAL reference antibody and 18 additional) and the corresponding purified total IgGs, both starting with 4 µL in 100 µL PBS, were loaded into the wells and incubated for 2 h at room temperature.
To ensure that patients' sera and purified total IgGs had the same IgG and protein concentrations, sera were diluted 1:10 with Melon Gel buffer and purified total IgGs were supplemented with an individual amount of BSA before loading into the wells. A serial dilution of the anti-AGAL reference antibody, starting at 800 pg/µL, was used as the reference. IgG detection was performed as described above. To calculate AGAL-binding IgG concentrations, linear regressions within the serial dilutions of patients' sera and purified total IgGs were applied. Finally, the concentrations were calculated using the equation obtained from the anti-AGAL reference antibody standard curve.

Preparation of Small Unilamellar Vesicles (SUVs)
SUVs were prepared as reported previously [20,21]. Lipids were first dissolved in chloroform and mixed in the desired molar ratio in a glass vial (25 mg/mL 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and 2 mol% 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-N-(biotinyl) (DOPE-biotin)). Subsequently, the solvent was evaporated under a nitrogen stream while simultaneously turning the vial in order to obtain a lipid film. The residual solvent was removed for at least 1 h in a desiccator connected to a vacuum pump. The dried film was rehydrated in MilliQ water to a concentration of 1 mg/mL, vortexed to ensure that the lipids were fully dissolved, and transferred into an Eppendorf tube. The lipids were sonicated for about 30 min until the opaque solution turned clear, right before use. The obtained SUVs were stored in the refrigerator and used within two weeks.

QCM-D Measurements
QCM-D measurements were performed with a Qsense Analyser from Biolin Scientific using SiO2-coated sensors (QSX303, Biolin Scientific, Gothenburg, Sweden). Measurements were performed at 22 °C using four parallel flow chambers and one Ismatec (Grevenbroich, Germany) peristaltic pump with a flow rate of 75 µL/min. In this work, the seventh overtone was used for the normalized frequency (∆f7) and dissipation (∆D7). QSense Dfind software from Biolin Scientific and standard Sauerbrey modeling were used to calculate the film thickness. QCM-D sensors were first cleaned by immersion in a 2 wt% sodium dodecyl sulfate solution for 30 min and subsequently rinsed three times with Milli-Q water and then with ethanol. The sensors were then dried under a nitrogen stream and activated with a 10 min UV/ozone treatment using a UV/ozone cleaner (Ossila, Sheffield, United Kingdom). For the formation of supported lipid bilayers (SLBs), small unilamellar vesicles (SUVs) were diluted to a concentration of 0.1 mg/mL in buffer solution (50 mM Tris, 100 mM NaCl, pH 7.4) containing 10 mM CaCl2 directly before use and flushed into the chambers after obtaining a stable baseline. The quality of the SLBs was monitored in situ, where high-quality SLBs are defined by ∆f = −24 ± 1 Hz and ∆D < 0.5 × 10−6. Afterwards, a solution of streptavidin (SAv, 3 µM) was passed over the SLB, followed by the addition of the enzymes (agalsidase-α, agalsidase-β, or Moss-AGAL; 2 ng/mL each). Each solution was incubated on the QCM-D crystal until a stable plateau was reached and was subsequently rinsed away with buffer. For the titrations, dilutions of antibodies ranging from 1:1,500 to 1:200 of a stock solution (1 µg/mL) were passed over the QCM-D crystals.
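For readers who want to check the frequency-to-mass conversion by hand, the following is a minimal Python sketch of the Sauerbrey relation, not the authors' analysis pipeline (they used the QSense Dfind software). The mass-sensitivity constant and the example shift are assumptions: 17.7 ng/(cm^2·Hz) is the textbook value for a 5 MHz AT-cut crystal, and the actual sensor specification should be consulted.

```python
# Sketch: converting a QCM-D frequency shift into adsorbed areal mass
# via the Sauerbrey equation, Delta_m = -C * (Delta_f / n).

SAUERBREY_C = 17.7  # ng/(cm^2*Hz); assumed value for a 5 MHz AT-cut crystal


def sauerbrey_mass(delta_f_hz: float, overtone: int = 7) -> float:
    """Return adsorbed areal mass in ng/cm^2 from a raw overtone frequency shift.

    delta_f_hz: raw frequency shift of the chosen overtone (negative on binding).
    overtone:   overtone order n; the shift is normalized as delta_f / n.
    """
    return -SAUERBREY_C * (delta_f_hz / overtone)


# Example: a raw -168 Hz shift at the 7th overtone (normalized -24 Hz, the
# high-quality-SLB criterion above) corresponds to ~425 ng/cm^2 of hydrated mass.
print(sauerbrey_mass(-168.0, overtone=7))  # -> 424.8
```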
Inhibition Assay and Titration of Neutralizing ADAs
To further verify that AGAL-inhibiting antibodies were extracted from patients' sera by immune adsorption, inhibition assays were performed as described previously [8,11]. In short, 1 µg of the flow-through fraction, raw serum, or negative controls (mouse IgG, serum from a healthy control, and purified total IgGs from the healthy control) was pre-incubated with 1 ng agalsidase-α for 10 min at room temperature. To calculate the inhibitory capacity of the elution fraction, 100 ng of the reference antibody was pre-incubated with increasing amounts of agalsidase-α (0 to 20 ng) [8]. Residual AGAL activity was determined using 4-methylumbelliferyl-α-D-galactopyranoside (Biosynth, Staad, Switzerland). N-acetylgalactosamine (Santa Cruz Biotechnology, Dallas, TX, USA) was used to inhibit endogenous α-galactosidase B activity [22]. Finally, the amount of inhibited agalsidase-α was calculated (pg per µg sample). The amount of agalsidase-α required to saturate ADAs in patients' sera was determined as described previously [8]. In short, 5 µg of a patient's purified total IgGs were pre-incubated with a serial dilution of agalsidase-α for 10 min at room temperature. To express agalsidase-α inhibition in percent, residual AGAL activities were normalized against inhibition-negative controls. Agalsidase-α inhibition was plotted against the amount of agalsidase-α, and saturation was defined as the amount of enzyme required to reduce the neutralizing capacity of 5 µg of the patient's total IgG below the ERT-neutralizing threshold of 10% (background threshold) [8,11].

ELISA-Based Calculation of the Amount of Agalsidase-α Required to Saturate Anti-AGAL Antibodies
The ELISA-based anti-AGAL antibody concentrations were used to estimate the amount of agalsidase-α required to saturate anti-AGAL antibodies. Correlating 36 patients' anti-AGAL antibody concentrations determined in serum with the corresponding measured amounts of agalsidase-α required for antibody saturation yielded the equation Y = 3.996 × X + 48.10, where X is the AGAL-specific IgG concentration in serum (µg/µL) and Y is the amount of agalsidase-α required to saturate the anti-AGAL antibodies (mg).

Statistics
If not stated otherwise, all experiments were performed at least three times. Continuous variables are expressed as mean with standard deviation (SD). Two-tailed Student's t test, one-way analysis of variance (ANOVA) with correction for multiple testing, or two-way ANOVA with Tukey test were used for statistical analysis. Correlation analyses were performed using the Pearson correlation coefficient (r2). p-values < 0.05 were considered statistically significant. GraphPad PRISM v8.0 software (GraphPad Software Inc., La Jolla, CA, USA) was used for statistical analyses and visualization.

Conclusions
We conclude that the generation of a reference antibody from human blood samples is a feasible tool for ELISA-based ADA titer determination, allowing titers to be expressed as absolute concentrations. Furthermore, ADAs from patients with FD have comparable affinities to agalsidase-α and agalsidase-β, as well as Moss-AGAL.

Figure A3. Computation of the amount of agalsidase-α necessary to saturate anti-AGAL antibodies using the ELISA-based determination of anti-AGAL antibody concentrations in patients' sera. ELISA assays were used to determine anti-AGAL antibody concentrations in 12 patients' sera.
The amount of agalsidase-α required for antibody saturation was computed by the equation Y = 3.996 × X + 48.10, where X represents the AGAL-specific IgG concentration in serum and Y represents the amount of agalsidase-α required to saturate the anti-AGAL antibodies. AGAL: α-galactosidase A; IgG: immunoglobulin G.
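As a worked illustration of the saturation estimate above, a minimal sketch applying the reported regression follows; the function name and the example input are ours, not the authors', and the equation should not be extrapolated beyond the concentration range of the 36-patient fit.

```python
# Sketch: the empirical saturation estimate Y = 3.996 * X + 48.10, fitted by the
# authors on 36 patients. X is the AGAL-specific IgG concentration in serum
# (ug/uL); Y is the agalsidase-alpha amount (mg) required to saturate the ADAs.


def agalsidase_alpha_to_saturate(agal_igg_ug_per_ul: float) -> float:
    """Estimate the agalsidase-alpha amount (mg) needed to saturate anti-AGAL IgGs."""
    return 3.996 * agal_igg_ug_per_ul + 48.10


# Hypothetical input value, for illustration only:
print(f"{agalsidase_alpha_to_saturate(10.0):.1f} mg")  # -> 88.1 mg
```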
Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks

Understanding the spatial semantics expressed in natural language can become highly complex in real-world applications. This includes applications of language grounding, navigation, visual question answering, and more generic human-machine interaction and dialogue systems. In many such downstream tasks, explicit representation of spatial concepts and relationships can improve the capabilities of machine learning models in reasoning and deep language understanding. In this tutorial, we overview the cutting-edge research results and existing challenges related to spatial language understanding, including semantic annotations, existing corpora, symbolic and sub-symbolic representations, qualitative spatial reasoning, spatial common sense, and deep and structured learning models. We discuss recent results on the above-mentioned applications, which need spatial language learning and reasoning, and highlight the research gaps and future directions.

Description
This tutorial provides an overview of cutting-edge research on spatial language understanding. We also cover some background material from various perspectives, given that the ACL community has paid little attention to this topic over the last two decades. A few emerging lines of research have very recently returned to the importance of spatial language in various NLP tasks. One of the essential functions of natural language is to express spatial relationships between objects. Linguistic constructs can encode highly complex, relational structures of objects, spatial relations between them, and patterns of motion through space relative to some reference point. Spatial language understanding is useful in many research areas and real-world applications. The topic has recently attracted the attention of various sub-communities at the intersection of natural language processing, computer vision and robotics. The complexity of spatial language understanding, and its importance in downstream tasks that involve grounding language in the physical world, has become to some extent evident to the NLP research community. Compared to other semantically specialized linguistic tasks, standardizing tasks related to spatial language seems to be more challenging, as it is harder to obtain an agreed-upon set of concepts and relationships together with a formal spatial meaning representation that is domain independent (Pustejovsky et al., 2011; Kordjamshidi et al., 2010; Mani, 2009; Pustejovsky, 2017; Dan et al., 2020). Compare this, for example, with the recent work on temporal relations within computational linguistics. This has made research results on spatial language learning and reasoning diverse, task-specific and, to some extent, not comparable. While formal meaning representation is a general issue for language understanding, formalizing spatial concepts and building formal reasoning and machine learning models based on them constitute challenging research problems, with a wealth of prior foundational work that can be exploited and linked to language understanding. In this tutorial, we overview four themes: 1) spatial semantic representation; 2) spatial information extraction; 3) qualitative spatial representation and reasoning; and 4) downstream applications of spatial semantic extraction and spatial reasoning, including language grounding, robotics, navigation, dialogue systems, and tasks that require combining vision and language.
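Before turning to each theme in detail, a toy illustration may make theme 3 concrete: qualitative calculi support inference by composing relations through a composition table. The sketch below is hand-written for three directional relations and is not any calculus from the cited literature; complete calculi such as RCC-8 (Cohn et al., 1997) define full tables over all base relations.

```python
# Toy sketch of qualitative spatial composition (theme 3). Given A r1 B and
# B r2 C, the table returns the set of relations that may hold between A and C.

COMPOSE = {
    ("left_of", "left_of"): {"left_of"},
    ("above", "above"): {"above"},
    # composing orthogonal relations is ambiguous without metric information:
    ("left_of", "above"): {"left_of", "above", "left_of_and_above"},
}


def compose(r1: str, r2: str) -> set[str]:
    """Possible relations between A and C, given A r1 B and B r2 C."""
    return COMPOSE.get((r1, r2), {"unknown"})


# If the cup is left of the plate and the plate is left of the jug,
# then the cup must be left of the jug:
print(compose("left_of", "left_of"))  # -> {'left_of'}
```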
The semantic representation theme covers works that have attempted to arrive at a common set of basic concepts and relationships (Bateman, 2010; Hois and Kutz, 2011), as well as efforts to make existing corpora interoperable (Pustejovsky et al., 2011; Mani and Pustejovsky, 2012; Kordjamshidi et al., 2017; Kordjamshidi, 2013). We discuss the existing qualitative and quantitative representation and reasoning models that can be used to investigate the interoperability of machine learning and reasoning over spatial semantics (Cohn et al., 1997). Spatial language meaning representation includes research on cognitively and linguistically motivated spatial semantic representations, spatial knowledge representation and spatial ontologies, qualitative and quantitative representation models used for formal meaning representation, and various spatial annotation schemas and efforts to create specialized corpora. We discuss various datasets that either focus on spatial annotations or on downstream tasks that need spatial language learning and reasoning, particularly natural language visual reasoning data (Suhr et al., 2017, 2018). Moreover, continuous meaning representations for spatial concepts are another aspect to be highlighted in the tutorial (e.g., Deruyttere et al.). We overview the state of the art in extracting spatial information from language, both abstract semantic extraction (Kordjamshidi et al., 2011; Kordjamshidi and Moens, 2015) and extraction driven by various target tasks and applications. We discuss the machine learning models used in related work, including structured output prediction models, deep learning architectures and probabilistic graphical models. Finally, we overview the use of spatial semantics in various downstream tasks and killer applications, including language grounding, navigation, self-driving cars, robotics (Tellex et al., 2011; Kollar et al., 2010), dialogue systems (Kelleher and Kruijff, 2006) and human-machine interaction, and geographical information systems and knowledge graphs (Stock et al., 2013; Mai et al., 2020). Spatial semantics is closely connected to the visualization of natural language and to grounding language in perception, which is central to dealing with configurations in the physical world and motivates combining vision and language for richer spatial understanding. The related tasks include text-to-scene conversion; image captioning; spatial and visual question answering; and spatial understanding in multimodal settings (Rahgooy et al., 2018) for robotics and navigation tasks and language grounding (Thomason et al., 2018). Current research using end-to-end monolithic deep models fails to solve complex tasks that need deep language understanding and reasoning capabilities (Hudson and Manning, 2019). Throughout this tutorial, we will highlight the importance of combining learning and reasoning for spatial language understanding and its influence on the semantic representation, the type of learning models, and the performance on various applications.
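As a concrete, deliberately simplistic illustration of the extraction task discussed above, the sketch below produces the kind of (trajector, spatial indicator, landmark) triple that annotation schemes in this area target. It is not a system from the cited literature; real extractors use structured or neural models, and the indicator list and boundary heuristic here are ours.

```python
# Toy keyword-based extractor for simple "X is <prep> Y" sentences,
# illustrating only the output format of spatial relation extraction.

SPATIAL_INDICATORS = {"on", "under", "behind", "above", "below", "near"}


def extract_spatial_triple(sentence: str):
    """Return (trajector, indicator, landmark), or None if no indicator is found."""
    tokens = sentence.lower().rstrip(".").split()
    for i, tok in enumerate(tokens):
        if tok in SPATIAL_INDICATORS:
            left = tokens[:i]
            if left and left[-1] in {"is", "are", "was", "were"}:
                left = left[:-1]  # drop the copula so the trajector is the NP
            return " ".join(left), tok, " ".join(tokens[i + 1:])
    return None


print(extract_spatial_triple("The book is on the table."))
# -> ('the book', 'on', 'the table')
```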
Regarding the question of reasoning, we (a) point out the role of qualitative and quantitative formal representations in supporting spatial reasoning over natural language, and the possibility of learning such representations from data to support compositionality and inference (Hudson and Manning, 2018; Hu et al., 2017); and (b) examine how continuous representations contribute to supporting reasoning and alternative hypothesis formation in learning (Krishnaswamy et al., 2019). We point to cutting-edge research that shows the influence of explicit representation of spatial entities and concepts (Hu et al., 2019; Liu et al., 2019). The main goal of this tutorial is to combine these current efforts from different communities and application domains into one unified treatment, and to identify the challenges, problems and future directions for spatial language understanding.

Outline
The tutorial will cover the following syllabus:
• Spatial Representations
- Linguistic corpora and semantic annotations
- Spatial knowledge representation and spatial calculi models
- Distributed representations
• Spatial Information Extraction
- Spatial entity and relation extraction
- Spatial ontology population
- Considering domain knowledge and pragmatics in spatial extraction
• Spatial Semantic Grounding
- Combining vision and language (symbolic and multimodal embeddings)
- Capturing spatial common sense
- Grounding language in 2D and 3D physical worlds
- Generating referring expressions
• Spatial Reasoning
- Overview of natural language and visual reasoning tasks and data
- Modeling compositionality and spatial reasoning in (deep) learning models
• Downstream tasks
- Spatial concepts in dialogue systems
- Spatial reasoning for QA and VQA
- HRI, navigation and way-finding instructions
- Corpus-based GIS systems

Prerequisites and reading list
Familiarity with machine learning and natural language processing will be helpful for this tutorial. Our selected reading list is as follows.

Acknowledgements
This project is supported by National Science Foundation (NSF) CAREER award #1845771.
Evaluating a Vehicle Auditory Display: Comparing a Designer's Expectations with Listeners' Experiences

This paper illustrates a method for the early evaluation of auditory displays in context. A designer was questioned about his expectations of an auditory display for heavy goods vehicles, and the results were compared to the experiences of 10 listeners. Sound design is essentially an isolated practice, and by involving listeners the process can become collaborative. A review of the level of agreement allowed the identification of attributes that might be meaningful for the design of future auditory displays. Results suggest that traditional auditory display design guidelines that focus on the acoustical properties of sound might not be suitable.

INTRODUCTION
Sound is one of the easiest ways to augment any environment and has always been used as a method of communicating information (Delage, 1998). Yet the use of sound in human-computer interaction remains problematic. Brewster (2008) raised this issue, despite successful research into the use of non-speech sounds going back to the early 1990s. Sound design is not an expertise easily conveyed (James, 1998). Robare and Forlizzi (2009) highlight the lack of sound design guidelines for computing, despite the number of sound-enabled products having increased dramatically since 2000. Auditory displays have been defined by Kramer (1994) as an interface between users and computer systems using sound. Displays differ from interfaces in that they are mono-directional (McGookin & Brewster, 2004). Sound has long been used to convey information in vehicles, and researchers have emphasized the suitability of auditory displays (Hirst & Johnson, 1992; Graham, 1999; McKeown, 2005; Fagerlönn & Alm, 2010). Barrass and Frauenberger (2009) argue that designers need to consider the context of use, particularly given that the conditions in vehicles can be 'complex and dynamic' (Cao et al., 2010, p. 109). Watson and Sanderson (2007) tell us that an auditory display's effectiveness at communicating information should be evaluated according to its context of use. By context we mean the ambient auditory environment or soundscape (Schafer, 1977). The soundscape mapping tool (SMT) is a way of abstracting and visualising sound events that allows designers to represent designs, and listeners to record experiences (McGregor et al., 2010). The SMT was developed and validated with groups of audio professionals and listeners (McGregor et al., 2006, 2007).

SOUNDSCAPE MAPPING TOOL
The SMT has three distinct phases: identification, classification and visualisation. The sound designer identifies sound events within a sound design and/or soundscape. Both the designer and listeners classify the sound events according to a list of attributes (see Table 1). The results are then visualised by the researcher for ease of comparison by the designer.

Table 1: Sound event classification

The visualisation takes the form of a "map", the key of which is shown in Figure 1. Each sound event is given a code and is represented by a combination of shapes, colours and symbols that are overlaid onto a grid that captures where the listener heard the sound. If a sound event is heard to move during the recording, then the start and end points are both marked and joined. The designer (second author) and 10 listeners took part in this study. The 10 listeners were a sample of convenience made up of staff and students at Edinburgh Napier University.
Materials
The designer made an 11 minute 41 second stereo recording of the auditory display within a moving heavy goods vehicle (HGV). A professional driver was driving the truck with a co-driver; the designer was sitting in the centre of the back seat/bunk bed. The recording was made with a pair of electret microphones attached to the designer's spectacles. This near-ear microphone technique creates a partial binaural effect, improving distance perception and reducing inside-head locatedness for listeners (Blauert, 1996).

Procedure
The designer supplied a list of all the sound events in the recording. The designer then classified what he had heard. Listener tests were conducted in a quiet office. The listeners were provided with fully enclosed stereo headphones. Listeners were asked to listen to an audio recording and answer questions about what they heard. The first author translated the tabulated information into soundscape maps.

Results
The designer identified 20 different sound events within the recording (see Table 2). Seven of the sound events were part of the auditory display (AD). The 13 remaining ambient sound events were either vehicle related (10) or people related (3). When the sound designer listened to the recording he did not identify four of the sound events, but still classified them so that the results could be compared to the listeners' experiences. The listeners were aware of all of the sound events. The designer considered the sound events to be close and predominantly to his left (see Figure 2). A single sound event was heard to change locations (car passing). The listeners experienced the sound events as being farther away and predominantly to the left (see Figure 3). The listeners did not identify the movement of the car passing. Both the designer and the listeners classified all of the sound events except three as sound effects. The vocalisations made by the driver, co-driver and designer were classified as speech. For the material attributes, the designer considered all of the AD sound events to be gas. The listeners classified the AD sound events as predominantly solid. Mechanical vehicle sounds were mostly rated as solid. Within the interaction attribute, impulsive was applied to sound events such as the handbrake release, intermittent to the windscreen wiper, and continuous to the engine. When listeners did not agree with the designer they tended to classify events as intermittent rather than impulsive. There was a wider variation within the temporal attributes; only very short sounds were classified by both the designer and listeners as short. The turn signal was medium and the engine was long. There was little consistency within the spectral attributes. In general, the designer and listeners did not agree on the classification of the dynamics attributes. The listeners classified 15 out of 20 sound events as informative; all of the remaining sound events were neutral. The designer's classifications were more evenly distributed: informative (9), neutral (7) and uninformative (4). The listeners classified all of the auditory display sound events as informative. For the aesthetics attribute, none of the sound events were found to be pleasing. Only a single sound event (tachograph) was classified as unclear by both the designer and the listeners. All of the AD sound events were rated as clear by the listeners, with 14 out of the 20 total sound events being clear. The majority of the sound events were rated as having no affective content.
By looking at the level of agreement between the designer's and the listeners' classifications for the auditory display and the ambient sounds, it is possible to identify attributes that might be of interest to designers. The attributes can be split into experiential and physical properties. All of the experiential attributes (type, awareness, content, emotions and aesthetics) had a high level of agreement for the AD (≥71%), whereas the physical properties (temporal, spectral, interaction, clarity, material, dynamics and spatial) typically fell below 57%. Interestingly, the level of agreement over the content of a sound was low for ambient sounds, at only 23% (see Table 3).

Table 3: Levels of agreement

The type of sound had a 100% level of agreement for both the AD and ambient sounds. Agreement about awareness was high for both the AD and the ambience. The agreement between the designer and the listeners for the spatial attributes was 0%, which suggests that further work needs to be done on identifying an appropriate method for capturing spatial information. Responses were similar for the left and right orientation, but there was a noticeable difference for depth. The level of agreement for the content was high at 86% for the AD but low for the ambient sound events (23%). Whilst this is an issue for describing sound events in general, the attribute is useful specifically for describing auditory displays. The inverse is true for the dynamics attribute, where consistency was higher for ambient sound events (69%) than for the AD (14%). Fagerlönn & Liljedahl (2009) warn that end users may not feel confident enough to provide informed feedback about sound designs. Coleman (2008) highlighted the distrust that sound designers have for non-experts' descriptions. There are a number of issues to address. Accurate measurements of sound are difficult to achieve (Moore, 1997). Stopping and listening takes sound events out of context. Individual perceptions vary, making classification difficult (Porteous and Mastin, 1985). Perception includes 'stuff around the edges': context, background, history, common knowledge and social resources (Brown & Duguid, 2000). Any method to capture the experience of inhabiting a soundscape will have issues with granularity. A balance must be struck between gathering sufficient data and overwhelming participants. Only limited time periods can be studied, as there are necessary constraints on listeners' availability and fatigue.

Discussion
The physical properties of sounds have been used for the stylised designs of sonifications and earcons. The finding of a low level of agreement on the physical properties of sound challenges the use of conventions in this area of sound design. Specifically, the wisdom of using guidelines to aid the design process of auditory displays should be investigated further. This work demonstrated that the SMT was suitable for capturing the intentions of a sound designer and the experiences of 10 listeners. The trial also provided information about how the SMT could be developed further. This paper contributes evidence that auditory environments can be abstracted and visualised in a manner that allows designers to represent their designs, and listeners to record their experiences.
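To make agreement figures like those in Table 3 reproducible in principle, here is a minimal sketch of one plausible way to compute per-attribute agreement: the share of sound events where the listeners' modal classification matches the designer's. The labels and data are invented, and the paper does not publish its exact aggregation procedure, so this should be read as an illustration only.

```python
# Sketch: percentage agreement between a designer's classification and the
# listeners' majority (modal) classification, per attribute.

from collections import Counter


def percent_agreement(designer: dict, listeners: list[dict]) -> float:
    """designer: {event: label}; listeners: one {event: label} dict per listener."""
    agreed = 0
    for event, label in designer.items():
        votes = Counter(l[event] for l in listeners if event in l)
        if votes and votes.most_common(1)[0][0] == label:
            agreed += 1
    return 100.0 * agreed / len(designer)


# Invented example data for the 'interaction' attribute:
designer = {"engine": "continuous", "turn_signal": "intermittent"}
listeners = [
    {"engine": "continuous", "turn_signal": "intermittent"},
    {"engine": "continuous", "turn_signal": "impulsive"},
]
print(f"{percent_agreement(designer, listeners):.0f}%")  # -> 100%
```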
Flexural behavior of composite open web steel joist and concrete slab

Experimental research has been carried out to study the behavior of composite open web steel joists under monotonic loading. Four composite joists, fabricated with two types of web members and two span-to-depth ratios, were tested. The concrete slab, 400 mm wide and 90 mm in overall depth, overlaid a corrugated steel sheet. The composite system was simply supported over a 3000 mm span and subjected to uniformly distributed loading. Test results are presented in terms of slip between the composite slab and the top chord of the steel joist, load-deflection and load-strain relations. Based on the experimental results, it can be concluded that lowering the span/depth ratio has a significant effect on the failure pattern, with an increase in ultimate capacity of about 8%. Additionally, the results show that using the double-angle web type has only a small influence on deflection but increased the ultimate capacity by about 10% over a similar composite joist with a round-bar web.

Introduction
Steel-concrete composite beams have been recognized as one of the most economical structural systems for multi-storey buildings and bridges. A good example of composite construction is the composite open web steel joist, which comprises an open-web steel joist and a cast-in-place composite concrete slab. The open-web configuration permits easy accommodation of service systems such as ducts and pipes. In addition, using steel sheets as permanent formwork results in speedy construction compared with conventional composite beams. To ensure that the composite steel-concrete system acts as a single unit, shear connectors are attached to the steel top chord and embedded in the concrete slab. A composite open web steel joist is designed as a simply supported beam, and its ends are either bolted or welded to the supporting beam or joist girder. In 1965, the first testing of composite open web steel joists was carried out by Lembeck [1], followed by Wang and Kaley in 1967 [2]. The composite action was achieved by extending the web members above the top chord angles into the concrete slab. Cold-formed hat-shaped sections were used to construct the top and bottom chords. The experimental results showed about a 20 percent reduction in deflection and an increase of approximately 14 percent in ultimate moment compared with the tested conventional joists. Robinson and Fahmy (1978) [3] tested a number of composite joists with metal steel deck. The results showed that the strength and stiffness of composite joists were greater than those of non-composite joists. Gibbings et al. (1991) [4] tested eight composite joists with spans ranging from 40 ft. to 56 ft. and depths from 14 in. to 36 in. The concrete slabs were cast from normal concrete. Headed studs ¾ in. in diameter were used as shear connectors. The researchers verified the assumption of neglecting any contribution of the top chord in the ultimate strength design method for the tested joists. In this study, four simply supported composite open web steel joists were examined under static loading to investigate the effect of web member type and span/depth ratio on the structural performance of these joists. The ultimate capacity, deflection, strain profile, and mode of failure are presented and discussed.
It is worth mentioning that this study provides a basis for experimental research on the behavior of this type of lightweight structure, which is commonly used in Iraq, considering various types of loading and modern types of concrete.

Concrete
Normal-strength concrete cylinders were cast from the same batch of concrete and tested on the days of the composite joist tests to determine the compressive strength in accordance with ASTM C39/C39M-14 [5]. Ordinary Portland cement conforming to the Iraqi Specification No. 5/1984 [6] was used. Sand falling in Zone II was used as fine aggregate, and coarse aggregate with a 12.5 mm maximum size was used for the concrete mix. The aggregates complied with the Iraqi Specification No. 45/1984 [7].

Steel
Three tensile coupons were extracted from each component of the steel joist in accordance with the ASTM A370 specification [8]. The main test results, the yield stress and ultimate stress, are shown in Table 1. Figure 1 shows the steel coupons during the tensile test.

Description of Test Specimens
An experimental investigation of the behavior of four simply supported composite open web steel joists was conducted. All joists had a constant span of 3 m and were designed with one of two span-to-depth ratios, 13.5 or 15.5. The specimens were fabricated using 2L 50x50x5 mm and 2L 76x76x5 mm back-to-back double angles for the top and bottom chords, respectively. Two different types of web members were used: solid plain round bars 25 mm in diameter, and 2L 40x40x5 mm double angles. Descriptions and details of the composite joist specimens are presented in Table 2 and Figure 2. A total of 28 double-row shear connectors, 16 mm in diameter and 75 mm long after welding, were placed in the strong position. The number of shear connectors was designed as per the Steel Joist Institute (SJI, 2015) [9] for all specimens so that their capacity exceeds the tensile force of the bottom chord, ensuring ductile failure. Plywood boards with inner dimensions of 400 mm width and 90 mm total depth were placed before pouring the concrete. Then, normal concrete was cast on top of the profiled steel sheet over the previously placed welded wire fabric of 6 mm bars. Burlap sheeting was used to cover the concrete slab for moist curing for 28 days. Figure 3 shows the profiled steel sheet with the welded wire mesh, and the casting and curing of the concrete slab.

Table 2. Description and details of tested specimens

Loading Scheme, Description and Measurements
A typical instrumentation arrangement was used for each specimen. Eight strain gauges were used to measure the strain of the steel joist, with one strain gauge placed on the concrete slab, resulting in a total of nine strain gauges per joist, as shown in Figure 4. Vertical deflection was measured at the centerline of the composite joist using a linear variable differential transducer (LVDT). In addition, the relative slip of the concrete slab was measured using another LVDT attached to the top chord of the steel joist. A data acquisition system connected to a computer was used to collect the data from the strain gauges and the LVDTs. The loading procedure was similar for all tests. The load was applied using a single hydraulic ram and distributed by three tiers of 250x250 mm I-beams to the concrete slab (see Figure 5) to simulate a uniformly distributed force pattern. To stabilize the composite joist specimen, an initial load equal to 10 percent of the calculated strength was applied.
Then, all instrumentation was re-initialized and load increments of 1 kN were applied until failure. Figure 6 presents the composite joist specimen under load and the data acquisition system used in this experiment.

Result and Discussion
As mentioned previously, the main objective of this paper was to investigate the structural behavior of composite joists with two types of web members and two span-to-depth ratios. The experimental results, consisting of the load-deflection response at midspan, the crack pattern, the load-slip behavior at the interface between the concrete slab and the top chord, and the strain behavior, are presented herein. Table 3 shows the ultimate force and maximum midspan deflection for all tested specimens. In Figure 7, specimen N13.5DOM exhibited an increase in stiffness compared to the other specimens, which showed similar behavior until the ultimate failure of the joists. It also appears that the measured midspan deflection at ultimate load of specimen N15.5ROM was slightly greater than the deflection of the other three tested specimens.

Crack Pattern and Failure Modes
During the tests, the cracks in the concrete slabs near the support started at approximately 60% and 54% of the ultimate load in specimens N13.5ROM and N13.5DOM, respectively. Delamination between the concrete slab and the profiled sheet was observed at 460 kN near the supports for both specimens N13.5ROM and N13.5DOM. For specimens N15.5ROM and N15.5DOM, cracks in the concrete slabs near midspan started under relatively small loads (approximately 31% and 38% of the ultimate load), while delamination between the concrete slab and the profiled sheet was observed at 330 kN and 400 kN, respectively. Photographs of the crack patterns at the failure stage of all tested composite joists are presented in Figure 8. It can be noticed that the cracks occurred mostly within the shear span for specimens N13.5ROM and N13.5DOM, whereas the cracks for specimens N15.5ROM and N15.5DOM were observed close to midspan. However, it can be remarked that all the composite joists exhibited significant composite action during the tests without any failure of the shear studs.

Strain Behavior and Neutral Axis of the Composite Joists
Strain gauges were placed at various locations to examine the strain behavior of the composite joists. Figure 9 depicts the load-strain relationships for the tested specimens. Based on these results, one can conclude that the neutral axis for all specimens is located within the top chord of the steel joist, at about 217 mm for specimens N13.5ROM and N13.5DOM, and 168 mm for specimens N15.5ROM and N15.5DOM, measured from the bottom chord of the steel joist. This means that the steel joists are subjected only to tensile forces and there is no compression buckling, as recommended by SJI [9]. The exception was specimen N13.5ROM, in which slight local buckling of the first compression diagonal web member occurred before the ultimate load was reached. Also, it is noticeable that the strain behavior of the bottom chords of the composite joists was similar, except for strain gauge S7 in the N13.5DOM composite joist. The load-strain behavior at this strain gauge was approximately linear because it was located at the intersection of two web members.

Load-Slip Behavior
Because the section combines two different materials, a steel joist and a concrete slab, the relative slip at the interface layer should be checked.
An LVDT sensor was attached to the top chord of the steel joist to record the relative movement with respect to the concrete slab. The slip behavior of all tested joists is shown in Figure 10, where each plot presents the load versus slip for two specimens. Relative to N13.5ROM, the decrease in slip is 10% for composite joist N15.5ROM, while the increase in slip is 65% for composite joist N13.5DOM at the ultimate load. In addition, it can be noticed that specimen N15.5DOM has the greatest slip value compared with the other three specimens.

Conclusion
This study investigates the behavior of uniformly loaded, simply supported composite joists. Based on the experimental results reported in this paper, the following conclusions can be made:
1- All the tested composite joists failed by bottom chord yielding and concrete crushing, without any failure of the shear connectors.
2- The ultimate strength of composite joists with double-angle web members increased by 1.8-14% compared to composite joists with round-bar webs. Also, it can be mentioned that the composite joists reached their ultimate strengths without buckling in either type of web member.
3- Decreasing the span-to-depth ratio leads to increased concrete cracking and increases in deflection values of 4.4-16.4%. Two failure modes were observed in the tests: the specimens with a span/depth ratio of 13.5 failed in shear, while the specimens with the higher span/depth ratio of 15.5 failed in flexure.
4- Full composite action was observed until yielding for all tested joists, as expected, since the capacity of the studs exceeds the yield force of the steel bottom chord, which is compatible with the design recommendation of the Steel Joist Institute (SJI, 2015) [9].
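As an illustration of how the neutral-axis heights reported in the strain-behavior section can be recovered from gauge readings, the following sketch linearly interpolates the zero-strain height between two gauges, assuming plane sections remain plane. The gauge heights and strain values below are hypothetical, chosen only to land near the reported value of about 217 mm; this is not the authors' data-reduction script.

```python
# Sketch: locating the neutral axis from two strain-gauge readings at known
# heights (measured from the bottom chord), assuming a linear strain profile.
# Strains are tensile-positive.


def neutral_axis_height(y1: float, eps1: float, y2: float, eps2: float) -> float:
    """Linearly interpolate the zero-strain height between two gauges."""
    if eps1 == eps2:
        raise ValueError("strain readings must differ to define a gradient")
    return y1 - eps1 * (y2 - y1) / (eps2 - eps1)


# Hypothetical readings: +900 microstrain at the bottom chord (y = 0 mm) and
# -60 microstrain near the top chord (y = 230 mm):
print(f"{neutral_axis_height(0.0, 900e-6, 230.0, -60e-6):.0f} mm")  # -> 216 mm
```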
Blended phenotype of adult-onset Alexander disease and spinocerebellar ataxia type 6

Alexander disease is an autosomal dominant hereditary disease characterized by progressive spastic paraplegia, ataxia, and bulbar symptoms, caused by mutations in the glial fibrillary acidic protein (GFAP) gene. Previous nationwide surveillance estimated the prevalence in Japan at 1 in 2.7 million people.1 Meanwhile, SCA6 is an autosomal dominant spinocerebellar ataxia characterized by adult-onset pure cerebellar ataxia. The prevalence of SCA6 is estimated to be 1 in 100,000 people in Japan.2 Here, we report an extremely rare case presenting with a blended phenotype of adult-onset Alexander disease and SCA6.

Clinical report
The patient was a 57-year-old woman presenting with progressive ataxia, facial and limb numbness, dysarthria, and dysphagia. Her family history showed that her father had been genetically diagnosed with SCA6 (figure, A). At age 35 years, the patient experienced clumsiness in her right hand. At age 45 years, she developed rotatory vertigo and visited our department. Her family history suggested the diagnosis of SCA6, which was genetically confirmed. She gradually developed mild dysarthria thereafter, although she did not experience difficulty in verbal communication. At age 54 years, numbness appeared in both upper limbs and the lower face. At age 55 years, she developed dysphagia and noticed rapid exacerbation of the dysarthria, to the point that verbal communication became difficult. Neurologic examination showed cerebellar ataxia, dysarthria, dysphagia, spastic paraplegia, and sensory disturbance in all modalities in the extremities and lower face. MRI showed severe atrophy of the medulla oblongata and cervical spinal cord with a preserved basal pons, the so-called tadpole appearance. The anterior portion of the medulla oblongata showed high-intensity signals on T2-weighted MRI. The midbrain tegmentum and the cerebellum were also atrophic (figure, B-D). All of these MR findings are characteristic of Alexander disease. Her DNA sample was obtained with IRB-approved informed consent. PCR-based fragment analysis was performed to detect triplet repeat expansions in CACNA1A. Direct nucleotide sequencing analysis was conducted for all exons of GFAP with specific genomic primers (supplemental method e-1, links.lww.com/NXG/A326). The mutational analysis revealed both an expansion of the CAG repeat (23 repeats) in CACNA1A and a known heterozygous point mutation in the GFAP gene (c.827G>T, p.R276L) (figure, F). Her mother had developed difficulty in walking at age 66 years and was diagnosed clinically with amyotrophic lateral sclerosis. However, her brain MRIs were also suggestive of Alexander disease (figure, E), although the cerebellar atrophy was less severe than in the proband. Based on the clinical and radiologic findings and the genetic data, we concluded that the proband inherited Alexander disease from her mother and SCA6 from her father, and presented with a blended phenotype of both diseases.

Discussion
Alexander disease is divided into 3 types based on lesion location: type 1 is the cerebral-dominant type, type 2 is the medulla oblongata/spinal cord-dominant type, and type 3 is the mixed type. According to her clinical symptoms and imaging findings, our patient was classified as type 2. Previous reports have shown that patients with this mutation also presented with type 2 phenotypes, predominantly bulbar and pyramidal signs with minimal cerebellar ataxia.
In these individuals, brain MRI revealed typical T2-weighted high-intensity lesions in the medulla oblongata and cervical spinal cord and mild cerebellar atrophy. 3,4 In contrast, our patient developed ataxia as the initial symptom, suggesting that concomitant SCA6 played a major role in the symptomatic onset, with the subsequent rapid deterioration caused by the GFAP mutation. Whether the underlying degeneration process of SCA6 triggered the rapid deterioration caused by the GFAP mutation is an intriguing open question. In an animal model of SCA1, activation of astrocytes with increased expression of GFAP occurred early, in the absence of neuronal death. This suggests that expanded polyglutamine stretches per se are a trigger of astrocyte pathology. 5 Likewise, astrocytic expanded polyglutamine impaired glial glutamatergic clearing and caused neuronal dysfunction in Huntington disease model mice. 6 Therefore, it is plausible to hypothesize that accumulation of expanded polyglutamine stretches and dysfunction of GFAP proteins might confer synergistic effects on the clinical course in our patient. Accumulation of similar patients or animal model experiments using double-transgenic mice would be warranted to investigate this hypothesis. Comprehensive gene analysis using next-generation sequencers has occasionally revealed pathogenic mutations of multiple mendelian inherited diseases, and the concept of the blended phenotype has been proposed. 7 This report describes a blended phenotype caused by an extremely rare combination of Alexander disease and SCA6. The take-home message is that when the clinical course and physical examination are atypical or complicated and the family history points to more than one disease, the possibility of a blended phenotype should be taken into consideration.

Figure: Genetic data of the proband and brain MRI of the proband and her mother. (A) Pedigree chart of the family. The proband is shown with an arrow. Those who underwent genetic tests are indicated as E+ and those who did not as E−. Patients whose diagnosis was established as the corresponding disease are shown in black; those in whom it was suggestive but not established are shown in gray. Her paternal grandfather developed a progressive gait abnormality suggestive of SCA6. Her mother was diagnosed with Alexander disease based on typical brain MRI findings, although she did not undergo genetic testing. (B-E) Brain MRIs. T2-weighted images of the proband (B) and her mother (E) and fluid-attenuated inversion recovery (FLAIR) images of the proband (C, D) showed atrophy of the midbrain, cerebellum, medulla, and upper cervical spinal cord, the latter 2 designated as the tadpole appearance characteristic of Alexander disease. High-intensity signals were observed in the dentate nucleus (C) and medulla oblongata (B, D) in the proband. Cerebellar atrophy was more severe in the proband than in her mother (D, E), presumably due to concomitant SCA6. (F) Mutational analysis of the proband. Upper row: PCR fragment analysis of the triplet repeat expansion at the SCA6 locus. Lower row: an electropherogram of the GFAP mutation c.827G>T (p.R276L). An arrow shows the heterozygous mutation in GFAP. GFAP = glial fibrillary acidic protein.
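As an aside on the fragment analysis mentioned above: in practice the repeat number is inferred from PCR fragment length, but counting a repeat tract directly from sequence illustrates the same idea. Below is a minimal, hypothetical Python sketch (the flanking sequence and function name are illustrative, not from the study) that sizes the longest uninterrupted CAG run, mirroring the 23-repeat CACNA1A allele reported here.

```python
import re

def longest_cag_run(seq: str) -> int:
    """Length, in repeat units, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

# Toy fragment carrying a 23-unit CAG tract, mirroring the expanded
# CACNA1A allele reported above; the flanking bases are made up.
fragment = "GGCTT" + "CAG" * 23 + "TTACG"
print(longest_cag_run(fragment))  # -> 23
```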
Urea-based polymethacrylamide purification

A purification procedure for polymethacrylamide (PMAm), based on urea's ability to act as a powerful H-bond breaker, is proposed here. Polymethacrylamide was synthesized in water, and the final product presents itself as an insoluble gel, which is an obstacle to purification. This physical gel, held together by numerous interchain H-bonds between amide groups in the polymer, is dismantled by a urea solution (4 mol.L-1), making the polymer soluble. Methanol is then used to precipitate the polymer, keeping urea, water, and methacrylamide in the methanolic phase. FTIR, TGA, GPC, and HPLC were used to characterize the final product and to demonstrate the efficiency and convenience of this method. The procedure proposed here presented a good recovery of the polymer and can be considered more convenient, less time-consuming, and more efficient than alternatives such as dialysis.

INTRODUCTION

Hydrogels are gaining attention due to their use as contaminant removers, bioimplants, and other contemporary applications [1-2]. The forces that hold a hydrogel together can be non-covalent, such as hydrogen bonding; in this case the material is called a physical hydrogel [2]. Covalent hydrogels, on the other hand, present crosslinks considered not reversible, so that, once formed, the gel becomes insoluble in any solvent [2]. Polymethacrylamide is a polymer that forms hydrogels held together by H-bonds between amide groups. Copolymers of polymethacrylamide have been used in many fields, such as water decontamination [3]. Some polymethacrylamide derivatives present low toxicity and have been proposed for in vivo use [4]. On the other hand, their monomers, methacrylamide included, are quite toxic [5]. Agents like LiCl and urea, at high concentrations, are able to create repulsive interactions or to break the intermolecular H-bonds, disassembling the gel [6]. Urea is a particularly interesting agent because it is non-toxic and inexpensive, having been used for decades as a protein denaturant [7]. Purification is crucial to generate pure hydrogels and avoid monomer contamination. In this work, a new method based on solubilization/precipitation of polymethacrylamide using concentrated urea solutions is proposed and its viability is demonstrated. Similar approaches can be useful for other polymers prone to form hydrogen bonding-linked physical hydrogels.

Polymethacrylamide synthesis

Polymethacrylamide (PMAm) was synthesized from methacrylamide (Sigma-Aldrich, 98%) by free radical polymerization. A persulfate salt (P.A., Merck) was used as initiator. Three different monomer/initiator ratios were used, generating three different materials (see Table 1). The reactions were performed in water (80% m/m) at 70 °C for 1 hour under an argon atmosphere.

PMAm purification

After one hour of polymerization, the content of the flask (hydrogel and liquid) was mixed with a 4 mol.L-1 urea (P.A., Vetec) solution in a proportion of 1:5 (m/v) and stirred for 24 h until complete dissolution. The solution was then poured slowly into methanol (P.A., Quimis), also under stirring, up to a ratio of 2:1 (solution:methanol). The product was then vacuum filtered using a quantitative paper filter (Whatman No 1440125) and washed several times with methanol at room temperature. The polymer was then dried at 50 °C under air. The urea solution serves to break down the gel and make the polymer soluble in aqueous solution.
In the urea solution, the polymer chains become soluble and the water-based gel is dismantled, which allows the next steps of the proposed procedure. After solubilization, polymethacrylamide is precipitated in methanol as mentioned. All other components are either soluble (urea) or miscible (water) in the alcoholic phase.

Characterization

A Shimadzu Prestige 21 Fourier transform infrared (FTIR) spectrometer was used to collect spectra of the material in KBr pellet form, from 4500 to 500 cm-1, averaging 64 scans. Thermogravimetric analysis (TGA) was performed in a Netzsch STA449F3 instrument. Approximately 10 mg of sample was analyzed under nitrogen flow (100 mL.min-1). The heating rate was set to 10 °C.min-1 and the thermograms were recorded from 30 to 550 °C. High performance liquid chromatography (HPLC) used a Bio-Rad Aminex HPX-87H (300 x 7.8 mm) column at 45 °C and a Waters 410 refractive index detector. The eluent was H2SO4 (0.01 eq.L-1 in water) and the flow rate was 0.6 mL.min-1. Sample solutions were prepared in aqueous magnesium perchlorate solution (0.45 mol.L-1), a good solvent for polymethacrylamide, methacrylamide, and urea [6]. The injection volume was 20 µL.

Recovery efficiency

Even after the whole purification cycle, and taking into account that the polymerization was stopped after only one hour (conversion is less than 100%), the global yields range from 60 to 74% for all the batches (Table 1), suggesting a low level of losses during the purification procedure.

Table 1: Average molar mass, polydispersity, and yield of the polymers obtained at different monomer/initiator ratios.

Characterization

The FTIR spectrum of a representative sample of the three polymer batches after purification can be seen in Figure 1. The broad band centered at 3440 cm-1, with a shoulder at 3200 cm-1, is associated with the stretching of the N-H bond of primary amide functional groups. A signal at 2990 cm-1 can be associated with the C-H stretching of the methyl group. A characteristic signal at 1655 cm-1 is due to the stretching of the C=O of the amide carbonyl group. The signals at 1477 and 1384 cm-1 refer to C-N stretching and C-H bending, respectively [8]. FTIR confirms that the three batches of materials with different monomer:initiator ratios are chemically equivalent (data not shown). FTIR is sometimes a good tool to check for contamination of a polymeric product. Unfortunately, the literature indicates that the FTIR spectrum of urea [9] is very similar to the spectrum of PMAm. Therefore, FTIR by itself cannot be considered an ideal technique to verify the absence of urea in the PMAm material. Thermogravimetry was another technique used to confirm the success of the purification method. Comparing the thermograms of urea and purified PMAm (Figure 2), different transitions are observed and, most importantly, the transitions seen in the pure-urea thermogram are not observed in the PMAm one. The urea thermogram presents a thermal event in the range of 158 to 235 °C due to cyanuric acid formation. This acid is a cyclic compound formed by the union of three urea molecules during this first thermal step [10]. This event is not seen in the purified PMAm thermogram, indicating a low level of urea contamination. Finally, high performance liquid chromatography (HPLC) was used to further verify the absence of urea contamination in the final product (purified PMAm).
This is quite important since a highly concentrated urea solution (4 mol.L-1, or 240 g.L-1) is required for the prior solubilization of the polymer. Elution times for urea, methacrylamide (standards), and PMAm are summarized in Table 2. The signals are well separated in the chromatogram, which is ideal for a good identification of each compound. Chromatograms of the PMAm material, before and after purification, are shown in Figure 3. In the chromatogram of the sample before purification, a signal around 13 min is attributed to PMAm, but the major signal in the chromatogram is due to methacrylamide (around 43 min). The presence of the monomer is expected, since conversion did not reach 100%. On the other hand, after purification (one cycle only), the signal attributed to the monomer decreases almost to zero and the most pronounced signal is due to the PMAm polymer. HPLC strongly suggests that the procedure is very efficient, while remaining uncomplicated, because it indicates that urea is not coprecipitated with the polymer and the amount of residual monomer after the purification procedure is negligible.

Figure 3: HPLC chromatograms of the PMAm materials: before purification (raw product mix) and after one cycle of the proposed procedure.

CONCLUSION

The polymethacrylamide purification method presented here turned out to be a practical and efficient procedure. The global yield of the whole procedure shows values between 60.4 and 73.6%. Very low contamination of the purified product by urea and methacrylamide was detected according to FTIR, TGA, GPC, and HPLC analyses. Based on that, the procedure can be recommended as an alternative for PMAm purification, replacing other methods such as dialysis.
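As a quick sanity check on the figures above, the mass concentration of the urea stock and the reported global yields can be reproduced with a few lines of arithmetic. The sketch below uses the textbook molar mass of urea (60.06 g/mol) and purely illustrative batch masses, since the paper reports only the resulting yield range.

```python
# Sanity checks on the purification numbers; molar mass of urea is a
# textbook value, the batch masses below are purely illustrative.

UREA_MOLAR_MASS = 60.06  # g/mol

molarity = 4.0  # mol/L, the concentration used to dissolve the gel
print(f"{molarity} mol/L urea = {molarity * UREA_MOLAR_MASS:.0f} g/L")  # ~240 g/L

def global_yield(monomer_fed_g, polymer_recovered_g):
    """Percent of the monomer mass recovered as dry, purified polymer."""
    return 100.0 * polymer_recovered_g / monomer_fed_g

# Hypothetical batch: 10 g of methacrylamide yielding 6.8 g of dry PMAm,
# which would fall inside the 60-74% range reported in Table 1.
print(f"yield = {global_yield(10.0, 6.8):.1f}%")
```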
POD POSITIONING AND GRAIN YIELD OF COMMON BEAN AS AFFECTED BY SOWING DENSITY, NITROGEN FERTILIZATION AND FERTILIZATION DEPTH

The positioning of pods in common bean directly affects grain losses in mechanized harvesting. However, only a few studies have assessed factors that can affect pod positioning. The objective of this work was to determine the effect of plant density, nitrogen fertilization, and fertilization depth on the distribution of pods of the common bean. The field experiments were carried out in two cropping seasons, 2017 and 2018, during the winter period in the Cerrado region. The experimental design was randomized blocks in a 4x2 factorial scheme, with four replications. The treatments consisted of the combination of four sowing densities (5, 10, 15, and 20 plants m-1) with two depths of fertilizer application (6 and 12 cm). The results indicate that the depth of fertilization does not affect the distribution of pods in the common bean. Plant density does not affect common bean grain yield. More than a quarter of the pods of the common bean cultivar BRS FC104 are positioned close to the ground, below 100 mm, in the area where harvester machines operate. Nitrogen fertilization and plant density affect the distribution of pods in common bean plants. At higher doses of N (90 kg ha-1), plant density should be increased; at lower doses (45 kg ha-1), plant density should be reduced. It is concluded that sowing density can be an efficient strategy to position more pods in the upper part of the common bean plants, reducing harvest loss.

Introduction

The common bean (Phaseolus vulgaris L.) is the most important food legume for direct consumption worldwide (Ganascini et al. 2019), as it is an inexpensive source of protein and minerals for the human diet (Sampaio et al. 2016). In Brazil, this legume is present in the diet of the poorest population, contributing about 25% of their protein needs (Hungria et al. 1997;Souza and Ferreira 2017). In 2020 the world common bean production was 27.5 million tons, with Brazil standing out in recent decades as one of the largest world producers of this crop, just behind Myanmar and India (FAO 2022). In Brazil, common bean cultivation is carried out in three harvest seasons: the first is the "summer season", the second the "drought season", and the third the "winter season" (Tavares et al. 2013). In the 2021 agricultural year, the cultivated area of common bean in Brazil was about 2.8 million ha, distributed across the three harvest seasons, resulting in a total grain production of about 2.9 million tons and an average productivity of 1,035 kg ha-1 (CONAB 2022). Nitrogen is one of the limiting factors for the common bean to reach high yields (Lopes et al. 2011;Nascente et al. 2011;Lacerda et al. 2019). In addition, sowing depth (Gabriel Filho et al. 2010;Orlando Junior et al. 2021) and the depth of fertilizer application can delay crop development (Compagnon et al. 2013;Lacerda et al. 2014;Orlando Junior et al. 2021), making research on the depth of fertilizer application necessary. Along the crop production process, the harvest is one of the most important stages since, if not properly done, it can result in losses and mechanical damage to the grains, decisively affecting the quality of the product and its commercial value (Chicati et al. 2018;Pereira Filho et al. 2021).
The mechanized harvesting of the common bean can be done in a direct or indirect way. In direct harvesting, the machines simultaneously perform all operations (cutting, threshing, and cleaning the grain), while in indirect harvesting, equipment such as the reaper and the harvester are used in different operations, one to cut and the other to thresh the plants (Soares et al. 2020;Pereira Filho et al. 2021). The architecture of the common bean plant is a factor that can be related to the efficiency of mechanized harvesting (Kläsener et al. 2018). In mechanized common bean harvesting, the loss rate is high due to the low height of the pods in most cultivars, with most of the pods concentrated in the lower 2/3 of the plant (Pereira Filho et al. 2021). As a result, the cutting platform reaches many pods, causing a significant loss of grains (Pereira Filho et al. 2021). An alternative would be to change the distribution of the pods on the plants through crop management practices, to increase the positioning of the pods at the top of the plant. An increase in plant density provides an increase in pod insertion height (Donato et al. 2021). However, increased plant density did not decrease the number of pods touching the soil surface (Horn et al. 2000;Santos et al. 2014). Studies performed to determine the effect of cultural practices on the distribution of pods in the common bean are scarce and were performed a long time ago, while many new cultivars have been released in the last 10 years. Thus, the objective of this study was to determine the effect of plant density, nitrogen fertilization, and fertilization depth on the distribution of pods on the common bean plant.

Description of the experimental site

The experiments were carried out in the winter seasons of 2017 and 2018 under central pivot irrigation, at the Capivara Farm of Embrapa Arroz e Feijão, located in the municipality of Santo Antônio de Goiás, GO, at the geographical coordinates 16°28'00"S and 49°17'00"W and an altitude of 823 m. According to the Köppen classification, the region's climate is Aw, tropical savanna (Alvares et al. 2014). There are two well-defined seasons throughout the year, one dry, from May to September (autumn/winter), and the other rainy, from October to April (spring/summer). The average annual rainfall ranges from 1,500 to 1,700 mm. The average annual temperature is 22.7 °C, ranging from 14.2 °C to 34.8 °C. The maximum air temperature during the period of the two field experiments is shown in Figure 1. Before siting the experiments, the experimental areas were cultivated for five harvests under a no-tillage system, with corn/soybean grown in the summer and common beans in the winter. The soil in the experimental areas is classified as an Oxisol (Santos et al. 2018). Prior to the setup of the experiments, in June 2017, soil samples were collected for chemical analysis (Teixeira et al. 2017), and the results are shown in Table 1.

Experimental design and treatments

In both cropping seasons (2017 and 2018), the experiments were laid out as a randomized block design in a 4x2 factorial scheme, with four replicates. The treatments consisted of four plant densities (5, 10, 15, and 20 plants m-1) and two fertilization depths in the sowing furrow (6 and 12 cm). The plots consisted of ten rows of 6 m, with the six central rows considered the useful area, discarding 2 m from each end.
Cultivar and crop management

The BRS FC104 cultivar of common bean was used, which has a very early cycle, totaling 65 days from sowing to harvest. Sowing was carried out in June 2017 and 2018 using a no-till seeder-fertilizer machine, equipped with five rows spaced at 0.45 m and calibrated to distribute 25 viable seeds per meter. At the development stage of the first trifoliate leaf, thinning was carried out in each plot in order to implement the plant density treatments (5, 10, 15, and 20 plants m-1). The seeder-fertilizer machine was equipped with furrow rods for fertilization and double discs for seeding and was always operated in the same direction, at a speed of 4 km h-1. The experiments were installed in the second half of June 2017 and 2018. Pre-sowing fertilization was done by applying 300 kg ha-1 of N-P-K (0-30-10) in the furrow immediately before sowing. The experiments were managed following the recommendations for the common bean crop (Sousa et al. 2004). Seeds were inoculated with Rhizobium tropici using two doses ha-1 (=2.4 million cells seed-1). Topdressing N fertilization using urea was performed at the V4 vegetative stage, at the third fully expanded trifoliate leaf. In 2017, the topdressing N fertilization was applied at a dose of 45 kg ha-1 in each plot, while in 2018 the plots were subdivided into two, receiving doses of 45 and 90 kg ha-1 of N. Central pivot sprinkler irrigation was used according to the needs of the crop (Cunha et al. 2013). Phytosanitary management was carried out in order to keep the crop free from pests, diseases, and weeds.

Data collection and measurements

The following were evaluated: length of the stem (LS), height of the highest pod (HHP), distribution of pods (DP), number of grains per pod (NGP), number of pods per plant (NPP), mass of 100 grains (M100G), percentage of grains retained in sieves 10 (PGS10 = 4 mm) and 12 (PGS12 = 4.5 mm), and grain yield (GY). The LS was determined in five plants pulled from the outer rows of the useful area during the vegetative phase "V3". The HHP was determined in five plants, in each plot, in the reproductive phase "R9". The DP was evaluated through an image of each subplot, in the R9 phase. The image was taken with a photographic camera positioned on the ground, perpendicular to and at 45 cm from the row of target plants. The plants were kept in their natural state and some leaves were removed to expose the pods. The camera was adjusted to capture the entire plants. Using the PowerPoint application, each image was processed to contain only the plants, from the base to the highest pod. Horizontal lines were projected on the image to divide it into 15 equal sections (Figure 2). The counting of pods, or fractions of pods, was made in each of the 15 sections. Thus, the pods, or fractions of pods, counted in sections 1 to 5 were classified as pods in the upper third (PUT), those in sections 6 to 10 as pods in the middle third (PMT), and those in sections 11 to 15 as pods in the lower third (PLT). When the same pod appeared in two different sections, for example in sections 5 and 6 or 10 and 11, it was counted in both thirds; that is, the same pod was counted in the upper and middle thirds or in the middle and lower thirds, respectively. The percentage of pods, or fractions of pods, in each third of the plant was obtained in relation to the total number of pods observed in the image, as sketched in the code below.
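The bookkeeping just described can be summarized in a short script. This is a minimal sketch, not the authors' software: the section counts are assumed to come from manual counting on the processed image, and the P100 formula assumes pods are spread evenly within each of the 15 equal sections, since the exact calculation is not spelled out in the text.

```python
# Pod-distribution bookkeeping for one plant image.
# Sections 1-5 = upper third, 6-10 = middle third, 11-15 = lower third.

def pod_thirds(section_counts):
    """section_counts: list of 15 pod counts, section 1 (top) to 15 (bottom).
    Returns the percentage of pods in the upper, middle, and lower thirds."""
    assert len(section_counts) == 15
    total = sum(section_counts)
    put = sum(section_counts[0:5])    # upper third (sections 1-5)
    pmt = sum(section_counts[5:10])   # middle third (sections 6-10)
    plt = sum(section_counts[10:15])  # lower third (sections 11-15)
    return tuple(100.0 * x / total for x in (put, pmt, plt))

def p100(hhp_mm, section_counts):
    """Percentage of pods above 100 mm, assuming pods spread evenly within
    each of the 15 equal sections of a plant of height hhp_mm (an assumed
    reconstruction of the calculation described in the text)."""
    section_h = hhp_mm / 15.0
    total = sum(section_counts)
    pct = 0.0
    for i, n in enumerate(section_counts):  # i = 0 is the topmost section
        top = hhp_mm - i * section_h        # upper edge of this section
        frac_above = min(1.0, max(0.0, (top - 100.0) / section_h))
        pct += 100.0 * n * frac_above / total
    return pct

counts = [1, 2, 3, 3, 4, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1]  # illustrative
print(pod_thirds(counts))
print(round(p100(450.0, counts), 1))
```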
Knowing the average HHP and the percentage distribution of all pods in each section of the plant (PUT, PMT, and PLT), the percentage of pods positioned at more than 100 mm from the soil surface (P100) was calculated. The PUT, PMT, PLT, and P100 parameters were determined only in 2018 and for the N fertilization applied at 12 cm depth. NGP, NPP, M100G, PGS10, and PGS12 were evaluated in five plants randomly harvested in the center of the central row of each plot. GY was determined in the useful area of each plot. M100G and GY were expressed in g and kg ha-1, respectively, after the moisture content was corrected to 13%.

Statistical analysis

Data from each experiment were first submitted to tests of normality and homogeneity of variances for each variable. The data obtained in the different cropping seasons were subjected to joint analysis of experiments; when significant differences between seasons occurred, the results of each season were analyzed separately. In the analysis of variance, the F test (p<0.05) was applied and, when Fc was significant, mean values of the treatments were compared using the t-test at 5% significance for the qualitative variables and subjected to regression analysis for the quantitative variables, using the statistical software Sisvar (Ferreira 2019).

Results

Many attributes related to plant morphology were affected by the cropping season. Higher values of LS, HHP, PLT, and P100 were observed in 2017, and of PUT in 2018. On the other hand, an effect of plant density was observed only for HHP. Regarding fertilization, the application of fertilizer at a 12 cm depth resulted in higher LS compared to fertilization at 6 cm depth. In addition, interactions were observed only between cropping season and plant density, for HHP (Table 2). The increase in plant density linearly increased HHP, such that the HHP at 20 plants per meter was about 19% higher compared to 5 plants per meter (Figure 3).

Table 3. Number of grains per pod (NGP - n°), number of pods per plant (NPP - n°), mass of 100 grains (M100G - g), percentage of grains retained in sieves 10 (PGS10 - %) and 12 (PGS12 - %), and grain yield (GY - kg ha-1) of the common bean cultivar BRS FC104, based on the cropping season, the plant density, and the fertilization depth in the sowing furrow.

As observed for the parameters related to plant architecture, the parameters related to productivity components of the common bean were also affected by the cropping season. Higher values of NGP and GY were found in the 2017 cropping season, while for M100G, PGS10, and PGS12 higher values were observed in 2018. Fertilization depth did not affect the productivity components or the grain yield of the common bean, while plant density influenced NPP and PGS12. However, interactions between cropping season and plant density and between cropping season and fertilization depth affected NGP and PGS12, respectively (Table 3). Linear but opposite responses to plant density occurred for NPP and PGS12: NPP decreased at higher plant densities, while PGS12 increased with increasing plant density (Figure 4). Nitrogen doses and plant density did not affect the architectural parameters (HHP, PUT, PMT, PLT, and P100) of the common bean cultivar BRS FC104. However, there was a significant interaction between nitrogen doses and plant density for the parameters PLT and P100 (Table 4).
With the application of 45 kg ha-1 of N, plant density did not cause a significant effect on P100 (Figure 5A), while higher plant density provided a reduction in PLT (Figure 5B). However, with the application of 90 kg ha-1 of N, there were significant increases in P100 with the increase in plant density, but no significant effect on PLT (Figure 5).

Discussion

Plant growth parameters may be influenced by edaphoclimatic conditions (Ribeiro et al. 2018;Araújo et al. 2020;Donato et al. 2021). The better development of plants in 2017 may be related to air temperatures more favorable for crop development (Ribeiro et al. 2018;López-Hernández and Cortés 2019;Ribeiro and Maziero 2022). In 2018, during the common bean growing cycle, air temperatures were higher (Figure 1), which may have resulted in shorter plants, with shorter hypocotyl and epicotyl lengths and fewer pods positioned in the upper portion of the plants, above 100 mm from the soil surface. The occurrence of temperatures above 30-32 °C during the day damages the establishment, growth, and development of the crop (Somavilla et al. 2020;Ribeiro and Maziero 2022). Because the common bean is a short-cycle crop, it is more sensitive to changes in environmental conditions (Pereira et al. 2014;Somavilla et al. 2020). This may have an even greater effect on the BRS FC104 cultivar evaluated in this work, since it is a super early plant with a cycle of 65 days. Higher values of LS, HHP, and P100, as observed in 2017 (Table 2), are important results, since they benefit mechanized harvesting of the common bean, contributing to increased harvesting efficiency and reduced grain loss. Threshing machines generally cut the plants at average heights close to 100 mm, which is considered too high (Soares et al. 2020). In 2017 and 2018, the percentage of pods positioned below 100 mm was 27.7% and 51.3%, respectively, indicating a large number of pods in the action area of the cutting bar of the threshing machine. Thus, it can be inferred that losses in 2018 would have been greater than in 2017 if the harvest had been carried out with a combine harvester. Our results show that increasing plant density linearly increases HHP in the common bean. The increase in plant density causes greater competition between plants for light, water, and nutrients, and plants tend to grow taller (Mondo and Nascente 2018). Consequently, the pods are positioned higher above the soil surface. The deeper application of fertilizer (12 cm depth) promotes higher LS. This is probably related to greater root development, which favors greater nutrient absorption and better seedling growth, as previously reported (Girardello et al. 2014;Lacerda et al. 2014;Orlando Junior et al. 2021). Thus, it is likely that the deeper fertilization favored greater root development and caused significant effects on the initial growth of plants (Orlando Junior et al. 2021), which may explain the higher values of LS. As reported for the plant architecture parameters, the productivity components were also influenced by the climatic conditions (Pereira et al. 2014;Amaro et al. 2014). The maximum air temperature in 2018 was higher than in 2017. Higher air temperatures increase the consumption of reserve substances by the plant, which may reduce productivity (Santos et al. 2014).
Additionally, high air temperature is the environmental factor that exerts the greatest influence on the abscission of flowers and pods, reducing grain filling in the common bean and causing a significant reduction in the grain yield of the crop (Mondo and Nascente 2018). On the other hand, the higher NGP reduced M100G and, consequently, the grain size, reducing PGS12. This inverse relationship between NGP and M100G is commonly reported, both in common beans and in soybeans (Dalchiavon and Carvalho 2012). In our study, a clear reduction in NPP was related to the increase in plant density. A similar result has been reported in the literature (Souza et al. 2014). This is because a larger number of plants per area increases competition between plants and causes a reduction in the number of pods per plant, but provides an increase in grain size (Santos et al. 2014). Corroborating this information, the highest NPP affected the grain size, since PGS12 increased with a decrease in NPP (Figure 4). According to the literature, the number of pods and grain size are inversely proportional (Costa et al. 1997;Locatelli et al. 2014). The absence of fertilization depth effects on the productivity components and the grain yield of the common bean may be related to the characteristics of the plant and also to the soil management (Lacerda et al. 2014). About 80% of the common bean roots are located within the top 20 cm of soil (Pereira et al. 2014), and the experiments were conducted under a no-tillage system, in which the highest levels of nutrients are concentrated in the most superficial layers of the soil (Nascente et al. 2014). Thus, even though it provided greater development of the hypocotyl and epicotyl, it is likely that fertilization in deeper layers did not improve nutrient absorption enough to allow greater grain productivity, since the surface layers held enough nutrients for the full development of the plants. Soils with adequate levels of nutrients and organic matter do not provide significant increases in grain yield of the common bean (Carvalho et al. 2014). Nitrogen is a key nutrient that is part of several plant structures and thus significantly affects plant growth (Ribeiro et al. 2018). In this work we also observed significant interactions between nitrogen doses and plant density for the parameters PLT and P100. With the application of 45 kg ha-1 of N, higher plant density provided a reduction in PLT and did not cause a significant effect on P100. However, when 90 kg ha-1 of N was applied, there were significant increases in P100 with the increase in plant density and no significant effect on PLT. Thus, the use of higher doses of nitrogen together with a higher plant density is a strategy to increase P100, reducing losses at harvest (Santos et al. 2014;Lacerda et al. 2019).

Conclusions

About 25% of the pods of the BRS FC104 common bean cultivar are positioned below 100 mm from the soil surface, corresponding to the height of the cutting bar in most combine harvesters. Nitrogen fertilization and plant density affect the distribution of pods in the common bean cultivar BRS FC104. To reduce grain loss at harvest, plant density must be increased for higher N doses (90 kg ha-1) and reduced for lower N doses (45 kg ha-1).
Consumption of healthy food and ultra-processed products: comparison between pregnant and non-pregnant women, Vigitel 2018

Introduction

Healthy eating is a determinant of individuals' health status. The golden rule of the Food Guide for the Brazilian Population is to "always prefer fresh or minimally processed foods and culinary preparations to ultra-processed food"; the Guide's recommendations seek to promote a healthy diet prioritizing the consumption of cereals, beans, roots and tubers, milk, fruit, vegetables, eggs, meat, water, and culinary preparations made with these foods, as well as small amounts of salt, sugar, and fat. 1,2 Consumption of ultra-processed food, such as soft drinks, powdered juices, cookies, packet snacks, margarine, sweetened yogurts, instant noodles, and chicken nuggets, among others, should be avoided at all stages of life. 1,2 The high consumption of ultra-processed food, due to its high concentration of sugars, fat, salt, food dyes, and other additives, 3 is associated with the development of chronic non-communicable diseases in both the general population and pregnant women. 3,4 A study that analyzed pregnant women's diet in Botucatu/SP/Brazil found that one quarter of the average energy consumption came from ultra-processed food, with higher intake, across the trimesters of pregnancy, among younger pregnant women with more schooling who were expecting their first child. 5 In Ribeirão Preto/SP/Brazil, ultra-processed food accounted for 32% of the total calories consumed by pregnant women, and consumption was higher among younger women with a better socioeconomic level. 6 In Campinas/SP/Brazil, a study with high-risk pregnant women showed the negative impact of ultra-processed food items on the nutritional profile, leading to higher energy density, high sugar, sodium, and fat content, and low protein and nutrient content. 7 It is noteworthy that national investigations on the consumption of ultra-processed food during pregnancy are still scarce. In pregnant women, unhealthy eating is a risk factor for the occurrence of anemia, 8 excessive weight gain, 9,10 hypertension 9,11 and gestational diabetes, 12 postpartum weight retention, preterm delivery, low birth weight 12 and other conditions that affect the health of the woman and fetus. A prospective cohort study (2009-2014) with 660 pregnant women from Mexico City showed an association between better diet quality, evaluated by the Maternal Diet Quality Score, and lower risk of children with low birth weight (odds ratio=0.22; p<0.05). 13 In a population-based cohort in Sweden, pregnant women with the worst diet quality had a 4.3 times higher risk of excessive weight gain compared to the segment with the best diet quality (p=0.010); in addition, excessive weight gain doubled the risk of emergency cesarean delivery. In the gestational period, physiological changes and fetal growth increase the demand for energy and nutrients, 14 highlighting the need to monitor food consumption and nutritional status. Maternal eating habits stimulate the child's taste buds through the amniotic fluid and breast milk, emphasizing the importance of the golden rule of the Brazilian Food Guide to promote the acceptance of food from six months of life onwards.
2,15,16 Considering that healthy eating is essential for the health of mother and child, and that data on Brazilian pregnant women's eating patterns are scarce despite their relevance to the academic community, managers, and healthcare professionals, the objectives of this study were: to characterize pregnant women's eating habits and compare them to those of women of reproductive age living in Brazilian capitals and the Federal District, and to analyze the association between pregnancy and eating habits.

Methods

This is a cross-sectional population-based study that used data from the Sistema de Vigilância de Fatores de Risco e Proteção para Doenças Crônicas por Inquérito Telefônico (Vigitel, 2018) (Surveillance System for Risk and Protective Factors for Chronic Diseases by Telephone Survey), including records of women aged 18 to 50 years living in households served by at least one landline phone. The telephone survey used a probabilistic sample selected as follows: initially, a systematic and stratified drawing, based on postal code (CEP), was carried out of at least 5,000 telephone lines in each city, from the electronic register of fixed residential lines of telephone companies. Then, the lines drawn in each capital and the Federal District (DF) were redrawn and divided into replicas of 200 lines, each replica reproducing the same proportion of lines per postal code as the original register. In the second stage of the sampling, one of the adults (≥ 18 years of age) living in each selected household was chosen, a step performed after identifying, among the lines drawn, those that were not eligible for the system. Lines were ineligible when they were business, out-of-service, or non-existent lines, or numbers that did not answer six calls made at different times/days. 17 A minimum sample size of approximately two thousand individuals in each city is necessary to estimate, with a 95% confidence coefficient and a maximum error of two percentage points, the frequency of any risk factor in the adult population. To compensate for the bias of non-universal coverage of fixed telephone lines, weighting factors were used. The final weight (post-stratification) attributed to each individual interviewed allowed statistical inference of the system's results for the adult population of each city, by making the sociodemographic composition of the weighted sample match that estimated for the total adult population of the same city in the year of the survey (a simple sketch of this weighting idea is given below). The following variables were considered in the sociodemographic composition: sex (female and male), age group (18-24, 25-34, 35-44, 45-54, 55-64, and 65 and over) and schooling level (without formal education or incomplete elementary school, complete elementary or incomplete high school, incomplete higher education, and complete higher education). All information on the sample design of the Vigitel telephone survey, as well as the procedures used in the interviews, has been published. 17 In this study, women between 18 and 50 years old were divided into two subgroups, pregnant and non-pregnant, and were described according to the following sociodemographic variables: age group (18 to 29 or 30 to 50 years old), schooling (0 to 11 or 12 or more years of schooling), skin color/race (white, black, or mixed), marital status (without spouse or with spouse), and possession of health insurance (yes or no).
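The post-stratification idea referenced above can be shown in a few lines. This is a minimal sketch, not Vigitel's actual procedure: the weight for each respondent is the ratio of the population share of their sociodemographic cell to the sample share of that cell, and the cell proportions below are invented for illustration (Vigitel uses sex, age group, and schooling estimated for each city).

```python
import pandas as pd

# Post-stratification in miniature: weights are chosen so the weighted
# sample reproduces known population shares per cell (here sex x age).

sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "age": ["18-24", "25-34", "18-24", "25-34", "18-24", "18-24"],
})

population_share = {  # assumed external population proportions per cell
    ("F", "18-24"): 0.20, ("F", "25-34"): 0.30,
    ("M", "18-24"): 0.25, ("M", "25-34"): 0.25,
}

# Share of the sample falling in each cell.
cell_share = sample.groupby(["sex", "age"]).size() / len(sample)

# Weight = population share / sample share, per respondent.
sample["weight"] = [
    population_share[(s, a)] / cell_share[(s, a)]
    for s, a in zip(sample["sex"], sample["age"])
]
print(sample)
# A weighted prevalence is then: sum of weights of "yes" answers
# divided by the sum of all weights.
```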
In Vigitel, eating habits were investigated through questions that assessed the weekly and daily frequency of consumption of foods considered markers of diet quality. Raw vegetables, cooked vegetables, fruit, natural fruit juice, and soft drinks or artificial juice were the selected markers, and their consumption frequencies were categorized into 0 to 2, 3 to 4, and 5 or more days a week. The daily frequency of intake of these foods was also analyzed, as was the type of soft drink or artificial juice. Food consumption on the previous day was also checked, through the question: "Now I'm going to list some foods and I'd like for you to tell me if you ate any of them yesterday (from the moment you woke up to when you went to sleep)" (yes or no). The foods were organized into two groups based on the NOVA classification, which characterizes foods according to the extent and purpose of industrial processing. 2

Natural or minimally processed foods covered the following groups:
Raw and cooked vegetables: lettuce, cabbage, broccoli, watercress, spinach, tomato, cucumber, zucchini, eggplant, squash, or beet.
Meat and eggs: beef, pork, chicken, fish, fried eggs, boiled eggs, or scrambled eggs.
Milk.

Ultra-processed food products covered:
Soft drinks.
Other sugary drinks: boxed juices, canned juices, or juice powders.
Ready-to-eat food products: instant noodles, package soups, frozen lasagna, or other frozen ready-to-eat dishes.
Sliced bread, hot dog or hamburger buns.

In the statistical analyses, we initially estimated the weighted percentage distribution of women according to the selected characteristics, and the differences between the groups were verified via Pearson's chi-square test with second-order correction (Rao & Scott), considering a significance level of 5%. Then, we estimated the percentage frequencies of food consumption, according to the categories of weekly and daily consumption frequency, between pregnant and non-pregnant women, and the differences were verified by prevalence ratios (PR) adjusted for age, marital status, and possession of health insurance. The percentages of consumption of unprocessed or minimally processed and ultra-processed foods were also evaluated for the subgroups. The analyses were performed in the statistical program Stata 15.1, in the survey module, which accounts for the complex sampling of the survey. In this study, the groups of fresh or minimally processed foods (total) and of cereals, roots, and tubers (rice, polenta, pasta, couscous, corn, potatoes, cassava, yams, squashes, carrots, sweet potatoes, or okra) were not presented in the results due to the insufficient number of observations, which made it impossible to produce estimates with adequate precision according to the condition of interest. The objectives of this research were explained to all participants at the time of the telephone contact, and the written informed consent form was replaced by verbal consent. The study was approved by the National Commission of Ethics in Human Beings Research, Ministry of Health, under Opinion Number 355,590, on July 26, 2013.

Results

We analyzed the data of 13,108 adult women between 18 and 50 years old, of whom 1.93% (n=179) were pregnant at the time of the interview. The mean age of pregnant women was 29.7 years (CI95%=28.3-31.1) and, regarding sociodemographic characteristics, most were between 18 and 29 years old (50.4%), with schooling up to 11 years (60.9%), and with health insurance (61.0%). Regarding skin color/race and marital status, 49.3% declared themselves of mixed skin color and 61.6% had a partner at the time of the study (Table 1). Pregnant women had a higher proportion of fruit juice consumption at a frequency of 3 to 4 days a week (36.4% versus 19.1%) and of fruit 5 or more days a week (74.2% versus 48.5%), in relation to non-pregnant women (Table 2). Table 3 shows that no significant differences were detected in the daily frequencies of food consumption or in the type of soft drink between pregnant women and women of reproductive age. The comparison between pregnant and non-pregnant women regarding the consumption of fresh or minimally processed and ultra-processed foods on the day before the interview is shown in Table 4. The percentage of consumption of ultra-processed items was similar between groups, reaching 94.8% in pregnant women and 90.4% in non-pregnant women. Only for soft drinks (12.3% versus 25.1%) and sauces (7.4% versus 16.6%) did pregnant women present lower percentages of consumption.

Discussion

This study compared the eating habits of pregnant women and women of reproductive age (18 to 50 years old) living in Brazilian capitals and the Federal District who participated in the Vigitel telephone survey (2018). The results indicate that both pregnant and non-pregnant women had a high consumption of ultra-processed products. Pregnant women had a more frequent consumption of fruit and natural juices, and a less frequent consumption of soft drinks and industrialized sauces. On the other hand, the groups did not differ regarding the consumption of foods essential in the gestational period, such as vegetables, legumes and oilseeds, meat and eggs, and milk, or of those considered unnecessary, such as sweets, snacks, and margarine. Regarding sociodemographic characteristics, the pregnant women were similar to those observed in other studies. In a national hospital-based study (2011-2012) with data on 23,894 puerperal women, it was observed that 70.4% were between 20 and 34 years old and 10.5% were 35 years or older; most declared themselves of mixed skin color (56.1%), reported having a partner (81.4%), and almost 9% had completed higher education. 19 According to the National Health Survey (2013), among women who reported having had prenatal care in their last pregnancy, the subgroups aged 20 to 29 years (50.6%) and 30 to 39 years (36.2%), of mixed skin color (49.9%), and with complete higher education (16.8%) comprised most of the sample. 20 These comparisons denote the representativeness of the Vigitel sample, whose results are similar to those of national hospital- and population-based studies developed with pregnant women. The findings of this study showed higher frequencies of fruit and natural juice intake among pregnant women. A study that compared the eating habits of pregnant women assisted in primary care in the city of Botucatu/SP with those of women of reproductive age in Brazilian capitals (2010) found no differences in the percentages of regular consumption of fruit and vegetables; pregnant women presented percentages of 30.2% (18 to 24 years old), 36.8% (25 to 34 years old), and 37.5% (35 to 44 years old). 21
In Spain, a study that monitored pregnant women until the postpartum period observed that the recommendation for fruit consumption (3 to 4 portions/day) was not met, with an average of 1.7 portions/day; furthermore, comparing the mean intake across the three trimesters and postpartum, there was a reduction from 221.4 g/day to 189.0 g/day (p<0.001). 22 Fruit is indispensable in the diet and is recommended to prevent various chronic diseases, given its content of dietary fiber and other nutrients, as well as bioactive compounds with antioxidant action. 18 Estimates from a systematic review on food consumption patterns in 195 countries indicate that insufficient fruit intake (<250 g/day) caused 2 million deaths and 65 million disability-adjusted life-years (DALYs) worldwide in 2017. 23 A systematic review with meta-analysis found that a 200 g/day increase in fruit intake would reduce the relative risk of stroke (18%), cardiovascular disease (13%), coronary artery disease (10%), cancer (4%), and all-cause mortality (15%). 24 Data from three prospective cohorts conducted in the United States showed an association between fruit consumption and lower risk of diabetes, especially for blueberries, grapes/raisins, apples/pears, and bananas. 25 The dark peel of fruit such as blueberry, jabuticaba, blackberry, and grapes is rich in anthocyanins, while white and red grapes have high levels of resveratrol, and apples and prunes or plums contain quercetin and chlorogenic acid, antioxidant compounds that contribute to disease prevention. 25 A study that analyzed data from prospective cohorts found that the consumption of natural juice (≥ 1 portion/day) increases the risk of diabetes by 21%, and that the equivalent replacement of juice by whole fruit significantly reduces the risk of the disease. 25 Compared to whole fruit, natural juice has a lower dietary fiber content and a higher glycemic load and does not provide the same feeling of satiety; therefore, preference should be given to the consumption of whole fruit. 2,26 The preparation of juice results in the loss of nutrients that are sensitive to light, oxygen, and heat, making it advisable to reduce the interval before ingestion, besides avoiding the use of sieves and the addition of sugars. 2,26 For pregnant women, it is recommended to ingest citrus fruit at lunch and dinner to increase the absorption of non-heme iron present in foods of plant origin. 27 In this study, it is noteworthy that approximately 95% of the pregnant women reported consumption of ultra-processed products on the previous day, and that soft drinks and sauces (mayonnaise, ketchup, or mustard) were the only food products for which pregnant women had a lower frequency of consumption. In the city of Botucatu/SP, regular soft drink consumption (≥ 5 days/week) was 17.5% (18 to 24 years old), 18.9% (25 to 34 years old), and 8.3% (35 to 44 years old) among pregnant women, percentages lower than those observed among non-pregnant women: 36.5%, 31.0%, and 26.0%, respectively. 20 The consumption of ultra-processed food products impairs the nutritional quality of the diet, increasing the content of added sugar, sodium, and fat and reducing the content of dietary fiber, potassium, and protein.
2,28 In the United Kingdom, a cross-sectional study identified a high prevalence of inadequacy for added sugar (77.2%), saturated fat (80.2%), dietary fiber (93.6%), sodium (86.7%), and protein (92.3%) in the sample segment with the highest intake of ultra-processed products. 3 Pregnant women ought to remove these food products from their daily diet, due to the risk of excessive weight gain and other negative health outcomes for both women and children. 10,12 A study conducted with pregnant women in Ribeirão Preto/SP detected a significant association between the consumption of ultra-processed products and a higher inflammatory potential of the diet. 29 In this study, the pregnant women's eating habits were similar to those of the non-pregnant women, except for the consumption of fruit and juices, which was more frequent, and of soft drinks and sauces, which was less frequent. Regular consumption of vegetables was reported by less than half of the pregnant women, and no differences were found in the daily consumption of vegetables, fruit, juices, milk, and beans/oilseeds. In general, these findings agree with other studies, such as that of Gomes et al., 21 which identified high prevalences of eating practices considered both healthy and unhealthy among pregnant women. An integrative review that investigated Brazilian pregnant women's eating habits revealed that, in most of the analyzed studies, the results differed from the national dietary recommendations. 30 These findings reinforce the need to disseminate the recommendations of the current Food Guide for the general population and to develop campaigns and educational materials that support health professionals in adapting the Guide's guidelines to the gestational period. As for the limitations of this study, memory bias may interfere with the participants' ability to report the frequencies of food intake. The number of pregnant women in this sample was small and may not represent the general universe; however, in the comparison between the groups regarding sociodemographic characteristics, the analysis showed differences for age (younger), marital status (with spouse), and possession of health insurance. The representativeness of the Vigitel sample was restricted to individuals who had a fixed residential telephone line and who lived in the capitals of the 26 Brazilian states and in the Federal District in 2018. To minimize this limitation, weighting factors were applied to reduce the differences observed between populations with and without telephone lines, and the post-stratification weighting allowed the estimates to be extrapolated to all individuals. 17 In our study, the estimates were based on the weights calculated by the Vigitel survey, and the age cut was defined by the condition of interest (pregnant: yes or no). This study identified that pregnant women and women of reproductive age had a high consumption of ultra-processed products. In the comparison between the groups, pregnant women presented more positive results regarding the consumption of fruit and natural juices, with higher percentages, and of soft drinks and sauces, with lower percentages. The results highlight the need for interventions aimed at promoting healthy eating among pregnant women and women in general, in addition to the importance of adapting official dietary guidelines for pregnant women.
CNN Based Face Emotion Recognition System for Healthcare Application

INTRODUCTION: Because it has various benefits in areas such as psychology, human-computer interaction, and marketing, the recognition of facial expressions has gained a lot of attention lately.
OBJECTIVES: Convolutional neural networks (CNNs) have shown enormous potential for enhancing the accuracy of facial emotion identification systems. In this study, a CNN-based approach for recognizing facial expressions is provided.
METHODS: To boost the model's generalizability, transfer learning and data augmentation procedures are applied. The recommended strategy outperformed the existing state-of-the-art models when examined on multiple benchmark datasets, including the FER-2013, CK+, and JAFFE databases.
RESULTS: The results suggest that the CNN-based approach is highly effective at recognizing facial emotions and has great potential for detecting facial emotions in practical scenarios.
CONCLUSION: Several diverse forms of information, including oral, textual, and visual, may be applied to comprehend emotions. In order to increase prediction accuracy and decrease loss, this research recommends a deep CNN model for emotion prediction from facial expressions.

Introduction

We propose a face emotion recognition system that makes use of a CNN algorithm to recognize a person's feelings when they are photographed. The system analyses each video frame using a deep learning CNN model trained to classify frames into crash or non-crash groups [1]. For image classification applications, CNNs have achieved accuracy rates greater than 95% and require less preparation than earlier procedures [2]. These algorithms are commonly used for computer vision tasks such as object recognition and image classification and are trained on labelled facial data to identify visual features such as eye, nose, and mouth positions [3]. When a new face is added to the system, the CNN searches for facial features specific to that person in order to embed the face [4]. This embedding can then be compared to others in the system's database for a match. CNN-based facial recognition systems have applications in various industries such as social networking, advertising, security, and surveillance [5]. However, facial recognition technology has raised ethical and privacy concerns, especially due to potential bias and manipulation of the technology [6,7]. Facial emotion recognition has important implications for potential applications in psychology, human-computer interaction, marketing, and security, and in research, such as improving human-robot interaction, gaining insights into driver fatigue in vehicles, and improving advertising, retail, and the customer experience [8]. Additionally, CNN-based facial emotion identification has shown promising results in overcoming challenges faced by traditional emotion recognition systems, such as changes in facial expression and lighting conditions [9]. By employing CNN algorithms to recognize emotions from facial expressions, researchers hope to achieve high accuracy rates. This will pave the way for the creation of practical and effective real-world applications across a range of fields [10].
Literature Survey

Faces display a wide variety of emotions that are shared by all people. Many apps that need extra security or private data have used facial recognition technologies [11]. To determine a person's emotional state, facial emotion detection may be used to examine facial expressions of sadness, pleasure, surprise, rage, and fright. Accurate facial emotion identification and detection are crucial for marketing goals [12]. Most companies depend on the reactions that clients have to their products and services. Intelligent systems make it possible to determine whether a customer is interested in a product or service based on their emotional response to a captured image or video [13].

Facial recognition technology has been incorporated into a variety of applications, particularly those related to data privacy or other security concerns [14]. Facial expression recognition is a technique that involves analysing a person's expressions of happiness, surprise, anger, and fear to determine the person's current state of emotional wellbeing [15]. Accurate facial emotion recognition is necessary to meet marketing objectives effectively [16]. Most industries depend on how consumers respond to their products and services as a whole [17]. It is possible to determine whether a customer is satisfied with a product or service by observing the emotional reaction they have to a photograph or video taken by an artificially intelligent system [18]. In the past, a range of machine learning techniques, such as Random Forest and Support Vector Machine (SVM), were used to predict sentiment from photographs [19,20]. More recently, computer vision has improved considerably as a result of deep learning, and a convolutional neural network (CNN) model can be helpful when attempting to identify facial expressions; the dataset used here serves both the training and testing aims.

Proposed System

The system we suggest uses a CNN to detect facial expressions. The aim is to develop a system capable of detecting an individual's emotions through the use of a camera. The proposed methodology involves a deep convolutional neural network model that classifies each frame of a given video by the emotion it expresses, enabling a comprehensive analysis of the entire video.
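As a rough illustration of the frame-by-frame analysis the proposed system performs, the sketch below reads a video, detects faces, and predicts an emotion per frame. The Haar cascade detector, the 48 x 48 grayscale preprocessing, and the `model` argument (a trained classifier such as the one sketched above) are assumptions for illustration, not details taken from the paper.

```python
import cv2
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Haar cascade shipped with OpenCV; the detector choice is assumed for illustration.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotions(video_path, model):
    """Yield (frame_index, emotion_label) for each face detected in the video."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            face = face.astype("float32") / 255.0          # normalize to [0, 1]
            probs = model.predict(face[None, :, :, None], verbose=0)[0]
            yield frame_idx, EMOTIONS[int(np.argmax(probs))]
        frame_idx += 1
    cap.release()
```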
The classification of photographs using convolutional neural networks has proven to be a fast and accurate process. For comparatively smaller datasets, CNN-based image classifiers have outperformed earlier image classification techniques, achieving accuracy levels above 95%, while needing less preprocessing. CNNs, a kind of deep learning algorithm, are extensively used in computer vision applications including object detection and image classification. To teach CNNs the specific characteristics of diverse faces, such as the positioning of the eyes, nose, and mouth, they are trained on a large dataset of tagged faces for facial recognition.

The CNN analyses the facial characteristics of a new face as it is added to the system and produces a unique representation of the face known as a face embedding. This embedding may then be compared with the embeddings of other faces in the system's database to determine whether there is a match (see the sketch following this section). The usage of CNN face recognition systems has increased recently in a number of industries, including social networking, advertising, security, and surveillance. The privacy and ethical implications of face recognition technology are also a concern, particularly in light of possible biases and the risk of misuse. The dataset does have some restrictions, however, such as the relatively poor quality of the photographs and the fact that the emotions were annotated through crowdsourcing, which might result in mistakes and inconsistencies.

Image Pre-Processing

People connect with one another via speech, gestures, and emotions. As a result, a wide range of industries have a significant need for technology that can recognize these signals. In terms of artificial intelligence, it will be considerably simpler for a machine to interact with humans naturally if it can understand human emotion. This could be useful in psychotherapy and other healthcare settings. The presentation style of an e-learning system might vary depending on the learner's emotional state. However, static emotion recognition is generally not very useful; it is vital to understand the user's emotions over time in a genuine setting. The project's goal is to address this problem and enhance human-machine communication by developing a system that can recognize facial expressions. The system will be developed such that it may be used in a variety of fields, including marketing, security, healthcare, and education. The graphical formalism is a simple tool that can be utilized to represent a system by illustrating the input data, the various operations carried out on it, and the resulting output data; it is one of the most critical modelling tools in information technology.

CNN model

The data flow diagram facilitates the creation of the system's component models. Its constituents encompass the system's functionality, the data it uses, the external entities that engage with it, and the modality of information transmission within it. Furthermore, it illustrates the process of data transformation within the system's information flow. Graphically representing information flow and data modification as data move from input to output is a commonly employed method. The diagram may also be divided into levels that depict increasing information flow and functional complexity and may be utilized to illustrate a system at any degree of abstraction (Figure 3).

Figure 3. Proposed data flow diagram.
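The embedding comparison described above can be illustrated with a short sketch. The 128-dimensional embedding size and the 0.7 similarity threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(query: np.ndarray, database: dict, threshold: float = 0.7):
    """Return the best-matching identity above the threshold, or None.

    `database` maps an identity name to its stored embedding vector.
    """
    best_id, best_score = None, threshold
    for identity, stored in database.items():
        score = cosine_similarity(query, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy usage with random 128-dimensional embeddings (sizes are illustrative).
rng = np.random.default_rng(0)
db = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = db["alice"] + 0.05 * rng.normal(size=128)  # noisy copy of "alice"
print(find_match(query, db))  # expected output: alice
```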
Results

Figure 4 shows example test photos for predicting emotions from supplied facial expressions, covering all possible emotions including sadness, anger, contempt, surprise, and fear. It can be seen from both graphs that the suggested deep CNN performed better at predicting emotions from videos than from still facial expressions.

Conclusion

Facial expression recognition using CNNs has been a hot research topic because the ability to accurately read facial expressions has important implications for a wide range of applications. The study of emotions has become an important topic of research that may assist with a number of aims by delivering insightful facts. Whether consciously or unintentionally, individuals use their facial expressions to communicate their feelings. To gain a better understanding of people's sentiments, one may refer to a variety of forms of information, including spoken, written, and visual material. According to the conclusions of this study, a deep CNN model should be employed for assessing emotions based on facial expressions in order to boost prediction accuracy and minimize loss.
Patient navigation across the cancer care continuum: An overview of systematic reviews and emerging literature

Patient navigation is a strategy for overcoming barriers to reduce disparities and to improve access and outcomes. The aim of this umbrella review was to identify, critically appraise, synthesize, and present the best available evidence to inform policy and planning regarding patient navigation across the cancer continuum. Systematic reviews examining navigation in cancer care were identified in the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Embase, Cumulative Index of Nursing and Allied Health (CINAHL), Epistemonikos, and Prospective Register of Systematic Reviews (PROSPERO) databases and in the gray literature from January 1, 2012, to April 19, 2022. Data were screened, extracted, and appraised independently by two authors. The JBI Critical Appraisal Checklist for Systematic Review and Research Syntheses was used for quality appraisal. Emerging literature up to May 25, 2022, was also explored to capture primary research published beyond the coverage of included systematic reviews. Of the 2062 unique records identified, 61 systematic reviews were included. Fifty-four reviews were quantitative or mixed-methods reviews, reporting on the effectiveness of cancer patient navigation, including 12 reviews reporting costs or cost-effectiveness outcomes. Seven qualitative reviews explored navigation needs, barriers, and experiences. In addition, 53 primary studies published since 2021 were included. Patient navigation is effective in improving participation in cancer screening and reducing the time from screening to diagnosis and from diagnosis to treatment initiation. Emerging evidence suggests that patient navigation improves quality of life and patient satisfaction with care in the survivorship phase and reduces hospital readmission in the active treatment and survivorship care phases. Palliative care data were extremely limited. Economic evaluations from the United States suggest the potential cost-effectiveness of navigation in screening programs.

INTRODUCTION

Worldwide, over 19 million people were diagnosed with cancer and nearly 10 million cancer deaths were reported in 2020, resulting in more than 50 million people estimated to be living with cancer, 1,2 a figure that continues to increase because of a growing and ageing population, early detection, improved diagnostic methods, and improved treatment. Optimal cancer care requires evidence-based guidelines across the cancer care continuum (i.e., early detection, diagnosis, treatment, survivorship, palliative care, end of life) 3 for screening and surveillance, ongoing evaluation of the effects of cancer and its treatment, interventions for symptom management, coordination between specialists and primary care providers, and provision of sustainable and cost-effective follow-up care. It is also recommended to include personalization of care that aims to empower cancer survivors and support self-management. 4,5 Disparities in cancer outcomes occur across the cancer care continuum 8-10 and may be attributed in part to several factors, including limited access and engagement with health care services and insufficient or inequitable allocation of health resources. 11,12
These disparities can manifest in various ways. For developed nations like the United States, there is growing evidence of significant disparities across the cancer care continuum for racial and ethnic minorities or culturally and linguistically diverse populations, such as African American, Asian American, indigenous, Latino or Hispanic, and Pacific Islander populations. 13 Because of several social determinants of health, including lack of health insurance coverage and other financial resources, the disparate outcomes across the cancer continuum may include reduced access to screening and follow-up of abnormal findings, reduced adherence to treatment regimens, and less favorable outcomes in length of survival and quality of life. 14,15,18,19

As cancer care continues to improve, the treatment process for many cancers becomes more complex, with multistep evaluation methods for diagnosing screening abnormalities and cancer symptoms and for multimodal treatment regimens. 8 There is growing recognition that navigating the health care system as a person with cancer or an informal caregiver can be an overwhelming experience, especially for those facing multiple barriers to accessing health care. Barriers faced by people with cancer include structural, cultural, and individual characteristics, such as a lack of personal knowledge and financial means, lack of health insurance coverage, geographic distance from care providers, and the lack of resources for cancer care. These challenges can begin at the time of diagnosis and continue throughout treatment, follow-up care, and survivorship. 20,21 Well established optimal care pathways for people with cancer can help them to better understand, and engage with, complex health systems and know which questions to ask of their health care professionals to ensure they are receiving the best care. 22 However, because cancer care is complex, individuals will require further support at different stages throughout the cancer continuum. Among the many clinical interventions that have been developed to address barriers to clinical care, patient navigation has been identified as a strategy for overcoming patient-level and system-level barriers, to reduce cancer-related disparities, and to improve access to and coordination of timely care for those most in need. 8,23

The history and early conception of patient navigation can be traced back to its development after the American Cancer Society National Hearings on Cancer in the Poor in the late 1980s. 24 Based on the findings of these hearings, the first patient navigation program was developed and launched by Dr Harold Freeman in 1990. This program originally aimed to save lives by eliminating barriers to facilitate early detection and timely cancer treatment. 24 Subsequently, there were milestones, such as the Patient Navigator and Chronic Disease Prevention Act being passed by Congress, which became law in 2005. 24 There is no universally agreed definition of patient navigation. 25-27 However, patient navigation generally refers to the role and activities that enable people affected by cancer to overcome health care barriers and facilitate access to quality health and psychosocial care across the cancer care continuum. 27,28
Patient navigation programs for people with cancer can differ significantly in terms of the staffing and services provided. Patient navigation may be delivered by health care professionals (e.g., nurses, social workers) or lay workers (e.g., peer supporters, people with cancer) with different educational backgrounds and training or may be delivered through digital systems (i.e., automated systems). Depending on the needs of the individual (i.e., the person with cancer and their caregivers), identified barriers, and individualized cancer care goals, navigators provide a wide range of support to help people with cancer overcome barriers to obtain optimal and timely cancer services and effectively use available care resources.

Findings from several reviews in the literature indicate that patient navigation has potential to improve access and continuity of care, cancer screening rates, timeliness of diagnosis, and cancer treatment completion rates. Improvements in quality-of-life indicators, including emotional well-being, have previously been reported. 29 Although the literature has suggested the benefits of patient navigation in cancer, it is somewhat unclear: (1) whether findings are directly applicable to the diverse settings worldwide where navigation and navigation programs or platforms are still in their infancy or nonexistent; (2) which components are most critical and effective for improving experiences, outcomes, and efficiency; and (3) how learnings from implementation and evaluation efforts can inform workforce and policy planning across health systems globally. Therefore, it is critical to review existing evidence in the literature to provide a contemporary understanding of patient navigation models in cancer care. Accordingly, the aim of this review was to provide an overview of the existing literature on patient navigation in cancer care and the status of cancer patient navigation models and to promote consistency in expectations across the international community by conceptualizing cancer patient navigation using existing evidence. To achieve this, we conducted an overview of systematic reviews and a review of emerging primary studies that address the following research questions.

Primary research question: What is the effectiveness and cost-effectiveness of different cancer patient navigation models and programs?

Secondary research questions: 1. What is cancer patient navigation, and what are the patient requirements and needs for navigation through the cancer care pathway? 2. What are the key elements (domains) of patient navigation? Which components of navigation models and programs are effective? Which groups of individuals benefit from them? 3. What does the literature and evidence report in relation to cancer patient navigation, and what are the key gaps and limitations in the literature or evidence? 4. What are the facilitators and barriers associated with implementation of cancer patient navigation? 5. What are the patient, caregiver, and provider experiences with patient navigation in cancer care?

METHODS

This overview of systematic reviews and emerging literature (i.e., primary studies published beyond the coverage of included systematic reviews) was prepared and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines 14 and was prospectively registered with the Prospective Register of Systematic Reviews database (PROSPERO identification number CRD42022327013). Reference lists of eligible studies were also scrutinized. The search strategy (see Table S1) focused on the following key terms and overlapping concepts: navigation (e.g., navigator, care coordination, case management) and cancer (e.g., malignancy, oncology, neoplasm).
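As a concrete illustration of how these overlapping concepts might be combined in a database query (the exact search strings appear in Table S1, which is not reproduced here, so this rendering is an assumption rather than the authors' verbatim strategy), a Boolean search of the form ("patient navigation" OR navigator* OR "care coordination" OR "case management") AND (cancer OR malignan* OR oncolog* OR neoplasm*), combined with review-focused filters such as "systematic review" OR meta-analysis, would capture both concepts while limiting results to review-type publications.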
Databases

The following databases were searched for peer-reviewed systematic reviews published in English from 2012 through April 19, 2022: the Cochrane Database of Systematic Reviews (CENTRAL), PubMed, EMBASE, Cumulative Index of Nursing and Allied Health (CINAHL; on EBSCOhost [EBSCO Industries, Inc.]), Epistemonikos, and PROSPERO. Searches were also conducted through the Turning Research into Practice (TRIP) and World Health Organization databases, Google Scholar, and the Agency for Healthcare Research and Quality platform to ensure the retrieval of all relevant articles.

Despite the extensive literature in the field, there was no universally agreed conceptualization of the term navigation. Therefore, a separate set of narrative reviews and systematic reviews was retrieved, with the same search terms, from the main screening and selection of systematic reviews used to address the primary and relevant secondary questions. Moreover, we applied an organic process in developing a definition of navigation, whereby our initial working definition evolved in line with the ongoing synthesis of evidence. To address the primary and secondary research questions (independent from the definition of navigation), we incorporated the use of methodological search filters, controlled vocabulary terms, and specific search terms to limit the search results to reviews, systematic reviews, and meta-analyses. To ensure recency of review findings, concurrent supplemental searches for primary research articles published beyond the coverage of the included systematic reviews (i.e., from January 1, 2021, to May 25, 2022) were also performed using the same databases but omitting the review-focused methodological search filters. Figure 1 depicts the approach adopted in this review to answer the research questions of interest.

Eligibility criteria

Eligibility criteria were developed in accordance with the PICO (population, intervention, comparison/control, outcome) framework. The populations of interest for this review were threefold: 1. individuals of any age at risk of, or diagnosed with, cancer; 2. their caregivers; or 3. providers of cancer care (e.g., patient navigators, oncologists, hematologists, primary care, allied health professionals, nurses); this wider approach was adopted in this umbrella review in an attempt to capture the international literature in case there are providers who deliver patient navigation but are not labelled as navigators.

Regarding the intervention, an operational definition of patient navigation models consistent with Wells and colleagues 30 was used, which refers to barrier-focused interventions that are harmonized by five key characteristics: 1. provision of services for an individual for a defined episode of cancer care; 2. a defined end point at which provided services are complete; 3. a defined set of health services required to finalize an episode of cancer-related care; 4. the identification of individual patient-level barriers to accessing cancer care; and 5. the aim to reduce delays in accessing cancer care services (e.g., timeliness of diagnosis and treatment) and the number of patients lost to follow-up. In addition to the description provided by Wells and colleagues, 30 we also integrated elements of patient navigation services summarized by Dalton and colleagues, 31 which included activities involving: (1) care coordination, (2) facilitating linkages to follow-up services, or (3) reducing or eliminating barriers to cancer care.

In terms of study designs, quantitative systematic reviews incorporating any comparator (e.g., treatment as usual or standard care) were eligible for inclusion, as were all relevant qualitative or mixed-methods systematic reviews. To address the primary and secondary research questions, reviews that incorporated the collection of data pertaining to the following discrete outcome categories were included: clinical outcomes, process outcomes, economic outcomes, and perceptions and experiences of the specific populations.

Study screening and selection

After searching the databases and de-duplication, identified articles were imported into Covidence for screening. Two reviewers (O.A.A. and J.J.) independently screened the titles and abstracts of articles retrieved from the search strategy that potentially adhered to the study eligibility criteria. Full texts were then reviewed by the same two reviewers for inclusion in the review. Disagreements were initially resolved by discussion or, when consensus could not be reached, by adjudication from a third reviewer (R.J.).

Figure 1. Schematic overview of search strategies for reviews and primary studies.
Critical appraisal

The JBI Critical Appraisal Checklist for Systematic Review and Research Syntheses (JBI; formerly the Joanna Briggs Institute) was used by two independent reviewers (any two of O.A.A., J.J., F.C.W., R.J., or Y.D. assessed each included systematic review) to assess the quality of studies that systematically evaluated outcomes related to effectiveness, cost-effectiveness, or phenomena of interest. This tool evaluates systematic reviews across 11 study quality domains, with each domain rated as yes, no, unsure, or not applicable. Disagreements were resolved by discussion until consensus was achieved. No appraisal of study quality was conducted for the primary research articles that were published beyond the coverage of the included systematic reviews.

Data extraction, analysis, and synthesis

Data extraction was performed using a standardized data extraction form. Relevant systematic review characteristics and findings related to effectiveness, cost-effectiveness, and other phenomena of interest were extracted independently by reviewers (O.A.A., J.J., F.C.W., R.J., or Y.D.) and were checked for accuracy by additional reviewers (F.C.W. or J.J.). Any disagreements were resolved by discussion or, if required, by a third reviewer (R.J.). Where outcome data were missing or inadequately and/or inconsistently reported, data were directly extracted from the primary research article if possible.

Descriptive analyses of all included studies were performed using narrative synthesis. Effect sizes and relevant numerical results derived from quantitative analyses were presented in tabulated format. The final or major findings from qualitative analyses were presented in tabulated format and supplemented with relevant contextual information. For any study that conducted an economic evaluation, a health economist (J.R.) and a member of the authorship team (F.C.W.) conducted the data synthesis and analysis. This umbrella review included two distinct types of evidence, namely, systematic reviews (and meta-analyses) and primary studies. It was an a priori decision that the narrative syntheses of systematic review evidence and of emerging evidence from primary studies were to be conducted separately. It was expected that this approach would enable readers to clearly identify what evidence had been included in the systematically reviewed literature versus newer primary studies.

Within the analysis and synthesis, special consideration was given to overall cancer diagnoses or specified cancer subtypes, various population subgroups (e.g., different age groups, culturally and linguistically diverse people, indigenous people), equity of access to cancer patient navigation, equity in outcomes related to cancer patient navigation, intervention components, type of delivery personnel, as well as defined episodes of cancer-related care or general cancer-related care across the cancer care continuum. Indigenous people were of particular interest for nations (e.g., Australia and Canada) with First Nations populations who experience inequity in cancer outcomes.

Definition of cancer care continuum

For the purpose of analysis in this review, the cancer care continuum comprised the stages of early detection, diagnosis, treatment, survivorship, and palliative and end-of-life care. 3 The survivorship phase refers to the period after primary cancer treatment, and the palliative care phase refers to those living with advanced, chronic, or terminal cancer.
32 The end-of-life phase refers to the last weeks and days of life. 33 There is also an increasing recognition of the importance of supportive and palliative care throughout the cancer journey rather than only at the end of life. 34

RESULTS

Primary studies within the included reviews were conducted in a wide range of countries, including Guatemala, Brazil, India, Kenya, Nigeria, Mexico, Malaysia, Iran, Pakistan, Bangladesh, Thailand, Singapore, and Hong Kong. Two reviews 89,90 did not report the countries of their primary studies. No systematic reviews focused on palliative or end-of-life care. In two reviews, 45,88 the cancer continuum stage was unclear.

Qualitative reviews exploring navigation needs, experiences, and barriers

Seven reviews focused solely on qualitative research, with publication dates of primary studies ranging from 2002 to 2018. Within these qualitative reviews, the number of included primary studies ranged from three to 29, whereas the total number of participants included in each review ranged from 38 to 114. Two reviews included only primary studies that were conducted in the United States, 49,51 whereas the remaining five reviews 46-48,50,52 consisted of primary studies from different countries, including the United States, the United Kingdom, Canada, Australia, England, Belgium, France, Ireland, Italy, Norway, Sweden, the Netherlands, Ghana, Kenya, Uganda, Malawi, Nigeria, Zambia, India, China, and Hong Kong.

Two reviews 48,49 consisted of primary studies focusing on a single cancer type (i.e., cervical cancer), whereas the remaining five reviews included various cancer types. In terms of the continuum, the reviews focused on early detection (N = 2) 48,49 or treatment (N = 1), 50 or they included different stages across the continuum, which could be early detection, diagnosis, treatment, survivorship, and/or end of life (N = 4). 46,47,51,52 The phenomena of interest of each individual review were: patient perceptions, experiences, or needs related to care coordination between primary care providers and oncologists 46; experiences of patients with significant mental health difficulties and health care professionals' attitudes toward accessing cancer care 47; barriers preventing women from using cervical cancer screening services in sub-Saharan Africa 48; barriers and facilitators to cervical cancer screening among refugee women in the United States 49; cancer patients' needs, values, and preferences during their cancer treatment experiences 50; experiences of adult patients with cancer who used patient navigation programs in hospital, including how patient navigators affect the challenges patients encounter in the cancer care continuum 51; and experiences of adult patients with cancer who received counselling from nurses. 52

Critical appraisal

Our critical quality appraisal of the included systematic reviews is presented in Table S3. Overall, the reviews generally met the majority of the requirements in the checklist, with 97% of studies satisfying most items.

Effectiveness of navigation

The effectiveness of cancer patient navigation on various patient outcomes across the cancer continuum is summarized in Table S4, which presents the number of systematic reviews and the number of unique primary studies that reported on each outcome. Navigation components, population groups, and cancer types are also reported for each outcome. An overview of the evidence of effectiveness of patient navigation for outcomes that were investigated in more than two primary studies across the cancer continuum is provided in Figure 2.
Outcomes within the palliative care phase of the continuum were not included in the figure because only one primary study investigated each outcome. Evidence of effectiveness was considered strong for outcomes in which multiple reviews and multiple primary studies reported corresponding positive findings. Evidence of effectiveness was reported as inconclusive for outcomes in which reviews and primary studies reported conflicting findings or there was a small number of primary studies. Evidence of effectiveness was considered limited for outcomes that were included in only one systematic review and a small number of primary studies.

Early detection

Twenty-six of the reviews included primary studies focusing on uptake of or adherence to cancer screening programs, predominantly for breast, colorectal, cervical, and lung cancer. There was overwhelming evidence from 172 unique primary studies across the 26 reviews that various patient navigation interventions were effective at improving rates of cancer screening. Evidence from reviews suggested that interventions delivered in the home, in the community, face-to-face, via telephone, individually, or in group sessions were equally effective in improving screening rates. However, a combination intervention approach (navigation combined with mass media or general education) appeared to be most effective. 36,41,53,56,58,67,78 In particular, navigation increased screening rates when combined with education or multi-strategy interventions. Components of navigation, including outreach, mass media, and mailed print materials, produced inconsistent evidence of improving screening rates. 40,41 It was unclear whether culturally tailored navigation interventions were more effective than standard navigation interventions, with subgroup analyses revealing significant effects in certain population groups (Latino, Asian American) but not others (African American). 56,65,76,92 Reviews on Latino men, 76 Hispanic women, 75 Appalachian populations, 67 African American men, 68 minority populations in general, 36 patients with limited English proficiency, 79 and populations adversely affected by health disparities, 77 as well as on Hispanic, African American, and low-income Chinese American women 66 and medically underserved populations, 42 indicated that navigation was effective in improving screening rates. For example, Luque and colleagues' review of community-based screening navigation programs targeting Hispanic women identified that navigation improved screening uptake compared with usual care (OR, 1.67; 95% CI, 1.24-2.26; N = 5; n = 2343). 75 Similarly, Rogers and colleagues reported that patient navigation was significantly better than control interventions at increasing colorectal cancer screening uptake among African American men, with an OR of 2.84 (95% CI, 1.23-6.49; p = .01). 68 Reviews focused on Asian women in western or Asian countries 40 and women of lower socioeconomic status 41 indicated that effectiveness in increasing screening rates was limited to certain intervention components only. One review 65 of racial and ethnic minority groups found inconsistent evidence on the effectiveness of navigation on screening completion but consistent evidence that patient navigation reduced rates of discontinuation of appointments.

Overall, interventions that were tailored to an individual, based on a thorough understanding of the barriers affecting that individual's health promotion behavior, were most effective. In addition, screening rates were seen to improve when the navigators received rigorous training. 61
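For readers less familiar with the pooled statistics quoted above, the short sketch below shows how an odds ratio and its 95% confidence interval are derived from a 2 x 2 table of screened versus unscreened participants. The counts are invented for illustration and are not data from any cited review.

```python
import math

# Hypothetical 2 x 2 table: (screened, not screened) in each arm.
navigated = (180, 120)     # navigation arm
usual_care = (130, 170)    # usual-care arm

a, b = navigated
c, d = usual_care

odds_ratio = (a * d) / (b * c)                    # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR), Woolf method
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# -> OR = 1.96, 95% CI (1.42, 2.71)
```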
Two reviews included outcomes relating to cancer screening knowledge 62,80 and reported that patient navigation interventions, particularly with nurse navigators, were effective at improving patient knowledge regarding breast, lung, cervical, or colorectal cancer screening.

Cancer diagnosis

Ten reviews included primary studies focused on diagnostic resolution.

Cancer treatment

Overall evidence from 43 unique primary studies across nine reviews suggested that patient navigation interventions were effective at reducing the time from diagnosis to initiation of primary treatment. For example, Wu and colleagues reported that patients who received navigation had a significantly shorter time from diagnosis to treatment (difference of −9.07 days; 95% CI, −14.08 to −4.06 days; p = .0004). 86 However, two reviews concluded that the evidence was mixed or not significant, 6,90 with one review showing improvements in the time to treatment initiation that were more pronounced among Hispanic women than non-Hispanic White women. 6 Patient navigation programs that assessed the time to treatment initiation often included decision aids, cultural messaging, and bilingual support. Thirteen unique primary studies across eight reviews that reported on adherence to treatment or treatment completion outcomes produced mixed evidence. Four reviews reported that patient navigation programs were effective at improving adherence to treatment (i.e., surgery, chemotherapy, radiotherapy), 29,31,53,86 and Ali-Faisal and colleagues' review, comprising 23 primary studies, reported increased adherence to treatment for patient navigation versus usual care (OR, 2.53; 95% CI, 1.02-6.30; p = .05). 53 Four reviews suggested there were no significant differences in treatment completion between patients who were provided navigation and those who received usual care. 6,63,64,74 However, Wu and colleagues' meta-analysis (N = 3) showed that individuals who received patient navigation had a significantly higher treatment completion rate (OR, 2.45; 95% CI, 1.56-3.87; p = .0001) compared with those who did not receive navigation. 86 It was unclear from the reporting within these reviews how treatment adherence or treatment completion were measured and whether these outcomes differed substantially. Three unique primary studies reported across two reviews suggested that patient navigation was associated with increases in enrolment in and adherence to clinical trials. 29,59 Outcomes including treatment interruption and receipt of appropriate treatment were included in one review 59 investigating the efficacy of care coordination, but the evidence came from individual primary studies. One review 38 reported that patient navigation provided to patients during active treatment resulted in fewer unplanned hospital admissions and reduced length of hospital stay, intensive care unit admission rates, and emergency visits.

Evidence from eight reviews consisting of 21 unique primary studies suggested that patient navigation generally significantly improved the quality of life of patients with cancer; however, two of those reviews reported inconclusive findings regarding the effects of nurse-led navigation interventions on quality-of-life outcomes. 43,85
For example, Tho and Ang 85 identified no significant differences between patient navigation and usual care in improving the quality of life of patients with cancer who were undergoing treatment (pooled weighted difference, 0.41; 95% CI, −2.89 to 3.71; p = .81; N = 3; n = 477). Similarly, evidence from eight reviews consisting of 18 unique primary studies suggested that navigation could improve patient satisfaction with care, but two reviews reported inconclusive findings. 28,43 Wells and colleagues' pooled standardized mean difference from nonrandomized controlled trials (N = 4) was 0.39 (95% CI, −0.02 to 0.80; p = .06), indicating that patients who received patient navigation (n = 241) were not more satisfied than those who did not (n = 176). 28 The positive effects on patient satisfaction and quality of life were often most significant in racial and ethnic minority populations, including indigenous populations, 57 and when navigation programs included culturally sensitive care as well as addressing logistical and practical barriers and providing counselling and emotional support. 28,64,69 There was no clear evidence on whether community health workers, lay navigators, or nurse case managers were better placed to deliver effective navigation.

Cancer survivorship

Patient navigation programs appeared to increase adherence to surveillance appointments in women who had breast or cervical cancer compared with women who received usual care. 6,87 For example, Yang and colleagues 87 reported that patient navigation significantly increased adherence to cervical follow-up appointments within 12 months (OR, 3.23; 95% CI, 2.14-4.88; N = 2; n = 707) and beyond 12 months (N = 1; n = 565). Individual reviews also reported that patient navigation had positive effects on communication, 74 decision making, 89 and treatment knowledge 74 but inconclusive effects on fatigue 38 and return-to-work outcomes (intervention vs. control: OR, 0.61; 95% CI, 0.24-1.57; p = .31; N = 2; n = 221). 81 Improvements in anxiety, depression, and distress after patient navigation programs were generally not supported by the literature. One review 69 conducted in socially disadvantaged groups found inconsistent effects of navigation on quality of life but significant improvements in depression.

Palliative care

There was only one review, limited to breast cancer patients, that included two primary studies reporting on outcomes relevant to palliative care. 90 One primary study suggested that patients receiving palliative-intent treatments may have less contact with a patient navigator than those receiving curative-intent treatment, and one primary study reported that a patient navigation program may result in fewer patients missing palliative care appointments. No included reviews or primary studies reported specifically on the effectiveness of patient navigation during end-of-life care.
Cost and cost-effectiveness of navigation

Nine systematic reviews and two additional, recent primary studies (not covered in the reviews) were identified with health economics evidence pertaining to the cost-effectiveness of patient navigation in cancer care. The first primary study, by Bucho-Gonzalez and colleagues, 93 focused on individuals from low-income and underinsured communities presenting for colorectal cancer screening in the United States and was undertaken to assess the budgetary effects of start-up and roll-out of a colorectal cancer screening program for this population. Given its targeted focus on this specific population in the United States and its focus on budgetary effects only, the study provided no evidence relevant to cost-effectiveness, and the results may not be generalizable to the international context (i.e., other health care systems). The second primary study, by Herman and colleagues, 94 assessed the cost-effectiveness of screening promotion for English-speaking or Spanish-speaking adults from a medically underserved or underinsured community who were not adherent to colorectal cancer screening guidelines in the United States. Cost-effectiveness in this case was assessed in terms of the cost per additional person screened, and it was demonstrated that tailored community-to-clinic navigation was likely to be highly cost-effective, with low incremental costs (<$650 US dollars on average) per additional person screened. Again, because of its limited focus on a specific population in the United States and because cost-effectiveness was assessed in terms of an interim outcome of additional persons screened (as opposed to the quality-of-life and/or survival effects of patient navigation in cancer care), it is not possible to draw any definitive conclusions about the cost-effectiveness of patient navigation in cancer care or the applicability of this study to other health systems.

Three systematic reviews focused on the health economics evidence pertaining to patient navigation across the cancer care continuum, including all types of cancer. 29,35,39 Two systematic reviews focused on specific populations or cancer types only, including colorectal cancer 37,42 and older patients with cancer (aged 70 years and older). 44 One systematic review assessed the evidence relating to the impact of case management on improving the quality of life of patients with cancer. 45 Limited numbers of eligible studies were identified by all of these systematic reviews, which were of variable methodological quality, with the majority of available evidence emanating from the United States. Three systematic reviews focused on patient navigation specifically to increase cancer screening. 35,40,42 Those systematic reviews indicated that patient navigation was cost-effective and potentially cost saving when increasing screening completion is the primary outcome of interest. One of those reviews, focused on cost-effectiveness evidence for interventions to increase breast and cervical cancer screening uptake among Asian women in western or Asian countries, noted that a significant gap exists in relation to evidence of cost-effectiveness and the long-term sustainability of these programs. 40 The authors concluded that rigorous study designs and economic evaluation methodologies should be used in future studies to generate valid evidence on the cost-effectiveness of intervention programs to increase breast and cervical cancer screening uptake among Asian women. Three reviews 29,35,37 included health outcomes in their cost-effectiveness analyses, suggesting that quality-adjusted life-years saved through patient navigation interventions outweighed the intervention costs.

Overall, these reviews determined that patient navigation has the potential to improve health care use, cost, quality-of-care, and quality-of-life outcomes. However, the limited numbers of included studies and their heterogeneity in terms of populations investigated, study settings, and methodological quality suggest that more rigorous research is needed before definitive conclusions can be reached about the cost-effectiveness of patient navigation in cancer care.
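As a hedged illustration of the "cost per additional person screened" metric used in these economic evaluations, the sketch below computes an incremental cost-effectiveness ratio (ICER) for screening uptake. All figures are hypothetical and are not drawn from the cited studies.

```python
def cost_per_additional_screened(cost_nav: float, screened_nav: int,
                                 cost_usual: float, screened_usual: int) -> float:
    """ICER: extra cost of navigation divided by extra people screened."""
    return (cost_nav - cost_usual) / (screened_nav - screened_usual)

# Hypothetical program: navigation costs $50,000 more and screens 120 more people.
icer = cost_per_additional_screened(150_000, 420, 100_000, 300)
print(f"${icer:.2f} per additional person screened")  # -> $416.67
```

A value in this range would sit below the <$650 average incremental cost reported by Herman and colleagues, 94 although, as noted above, a full evaluation would also need quality-of-life or survival effects.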
Synthesized qualitative findings on patient experiences, needs, and preferences

An overview of the qualitative findings on patient experiences, needs, and preferences reported in nine of the included reviews is provided in Table S5.

Needs

Six reviews 46-51 comprising 28 qualitative studies explicitly stated the navigation needs of individuals diagnosed with or being screened for cancer. Three of the six reviews 46,50,51 presented navigation needs at all phases of the cancer continuum. These included the provision of information (e.g., concerning physical effects, finances, and emotional effects as well as supportive services, additional resources, and care coordination), timely access and scheduling of care, holistic care, advising and answering patient questions, addressing financial and logistical barriers, providing practical assistance even at completion of treatment, the provision of physical and emotional support, being available and accessible for support at all phases of the cancer continuum, sharing information from the multidisciplinary team, and emotional support from other cancer survivors. Two reviews described needs related to cancer screening. 48,49 Luft and colleagues described the needs of refugee women in the United States regarding cervical cancer screening and observed that individuals sought navigation services that addressed language barriers, logistical issues, knowledge limitations, and cultural barriers, such as modesty, cancer stigma, fear, and religious beliefs. 49 Furthermore, participants expressed that receiving navigation from someone from their own community who knows the native language and is trained in health education increases trust. Similar needs were expressed by participants in a review of women in sub-Saharan Africa. 48 Leahy and colleagues presented the views of individuals with significant mental health difficulties who presented for cancer screening and concluded that the development of a patient navigator role was needed to facilitate communication between patients with significant mental health difficulties, health care professionals, and mental health care professionals. 47 Although a review on navigation in indigenous populations in Australia by Gifford and colleagues did not include any qualitative studies that explicitly listed needs, further analysis of all studies noted that interventions often had little relevance to the Australian indigenous communities that participated in them. 57 Furthermore, the review highlighted the importance of focusing on all aspects of wellness (emotional, spiritual, and mental, as well as physical) and emphasized a need to engage indigenous communities to develop, deliver, and evaluate navigation services. 57

Individuals' preferences

Two reviews 50,52 comprising 32 qualitative studies provided information on preferences for navigation. Tay and colleagues identified cancer survivors' preferences for patient navigation as providing patient-centered coordination and an explanation of clinical care (e.g., symptom management, resource assistance, coordination of care, coordination of services) or as individualized holistic support (e.g., providing practical assistance, emotional support, and empowerment when navigators are present for patients at key phases of the cancer care continuum) that was contingent on the patient's personal circumstances and existing support networks. 52
Mitchell and colleagues highlighted that individuals with cancer appreciated navigation delivered in the form of home visits, telephone, and email communication because it reduced stress and issues with transportation. 50 However, they also noted that patients with cancer valued the peer interaction that often came from attending clinic visits. There was a strong preference for peer support and navigation provided by other cancer survivors. Preferences for decision making and information delivery (quantity, timing, source) varied.

Barriers and facilitators

Early detection

Eight reviews 36,41,56,62,68,72,80,92 reported barriers and facilitators relating to early detection.

Other stages of the cancer continuum: General population

The remaining three reviews 45,64,84 detailed barriers that included the lack of clear selection criteria for navigators and the extensive time and resources involved in holistic navigation (e.g., multiple phone calls with each patient at different time points, responding to requests for services, hiring personnel to be available and accessible). Facilitators to implementation included effective communication between navigation service providers, patients, and health care providers; making sure the role of the navigator is clear and that navigators are well trained; and centralizing services or incorporating a triage or centralized computer system that reduces the resources used.

Other stages of the cancer continuum: Underserved populations

Seven reviews described barriers and facilitators to implementation or uptake at various stages of the cancer continuum. Of these seven, four reviews 65,83,89,90 provided insight into barriers and facilitators relating to underserved populations (e.g., racial or ethnic minorities, people of low socioeconomic status). Failure to recognize and account for the literacy skills, education levels, and cultural beliefs of ethnic minorities prevented successful implementation. Furthermore, navigation that was facilitated by bilingual, culturally competent personnel who understood the language and the social and cultural context of target participants bridged the gap between cultures and eliminated low health literacy barriers. For a list of barriers and facilitators to patient navigation at the system, provider, and individual levels, see Figure 3.

Findings from emerging literature (primary studies)

Descriptive characteristics of included primary studies

Of 2119 unique records identified, in total, 53 relevant primary studies published since 2021 were included, as presented in the PRISMA diagram for primary studies (see Figure S2, with study characteristics and outcomes presented in Table S6). Overall, 20 of the 53 studies (37.7%) focused on screening, reporting on adherence, screening knowledge, no-show rates, and attitudes and beliefs regarding screening. Across these studies, five reported using technology other than email or telephone calls. This included Google Hangouts and WhatsApp, 95 an online patient navigation tool, 96 social media and phone applications, 97 Zoom, 98 and a web-based tool and video conferencing. 99 Four implementation and feasibility studies were also included.

Early detection

Several emerging studies evaluated navigation for cancer screening. 105-109 Multicomponent patient navigation interventions (navigation combined with education and media) proved to be effective in promoting screening in both general and underserved populations, with higher screening uptake. 103,105,107,109,110
Evidence from the studies suggests that culturally tailored or community-based programs with a focus on education, assistance with payments, transportation, and social networks improved screening participation among underserved populations (e.g., high-poverty rural counties in Texas, low-income Latina women, and African American communities). 112,113 Furthermore, collaboration with a local health system through the inclusion of a community health worker navigator led to better screening knowledge and attitudes toward screening, along with a reduction in cancer stigma. 114 Therefore, patient navigation involving community health care workers is also an effective method of increasing screening adherence in underserved populations. 112,115

Figure 3. Summary of barriers and facilitators across the cancer continuum at the system, provider, and individual levels. EMR indicates electronic medical record.

Cancer diagnosis and treatment

There were significant improvements in follow-up and treatment adherence rates, as well as attendance at cancer peer support groups, after the implementation of patient navigation programs. 97,116,117 The inclusion of patient navigation for organizing appointments, orientating patients to tailored resources, coordinating team care, and establishing communication using social media applications from the time of diagnosis through to treatment reduced abandonment rates. 117,118 In addition, it has been demonstrated that navigation components focused on patients' knowledge of the health care system and available financial aid increase patient accrual, particularly in late-phase clinical trials. 119

Cancer survivorship

Several emerging studies addressed outcomes in the survivorship phase. 124,125 Studies also focused on the effects on screening knowledge, stigma, and access to care. 114,126 Patient navigation resulted in improved access to supportive cancer care, completion of advance care directives, and improvements in pain symptoms.

Perceptions of patients and providers

Six studies focused on participant perceptions. 127-129 Commonly reported unmet needs included housing, financial, legal, and transportation issues. 130 Participants highlighted the need for patient navigation to address institutional barriers by setting recruitment goals for minority participation in clinical trials and ensuring that interventions are accessible to minorities and that community outreach is used to build awareness. 131 In addition, participants highlighted the importance of public education and advocacy to combat ongoing financial barriers. 132 Furthermore, patients identified the need for patient navigators to be consistent contact persons who cater to patients' more general needs (i.e., offering practical and emotional assistance) rather than fulfilling disease-specific tasks. 133 This need for communication and collaboration was echoed among people with cancer from underserved populations, along with a highlighted need for culturally sensitive navigation services. 132 Navigators highlighted the importance of their roles in the delivery of advance care planning and symptom screening. 134 They also associated sociodemographic (e.g., lower education and lower income), clinical (e.g., experiencing chemotherapy toxicities), psychological (e.g., high patient anxiety), and health system (e.g., longer diagnostic interval) factors with a greater need for navigation services. 135
Barriers and facilitators to the implementation of patient navigation

Common barriers reported included the inconvenience of in-person support 136; limited experience using technology 95; hesitance to use a patient navigator 137; logistical, psychological, and knowledge-related barriers 138; the lack of social support and of culturally and linguistically concordant patient navigators 139; limited regular feedback to stakeholders 98; and institutional barriers. 131 Facilitators included interpretation services, pre-prepared patients, high-quality flexible services, and highly accessible patient navigators. 98

Summary of findings from primary studies as emerging literature

Overall, the emerging evidence reinforced the findings of the overview of systematic reviews; however, three emerging themes of evidence were identified, with a focus on indigenous populations, digital health, and caregivers.

Indigenous populations

Two studies 18,19 focused on patient navigation in indigenous populations in Canada. Various barriers to care were reported, including finances, transportation, distance from service providers, language barriers, lack of indigenous representation in the health care system, and lack of culturally safe care, with ongoing perceptions of paternalism in current health care models. 18,19 Furthermore, distance and extended travel times were more than just a risk factor for delayed diagnosis and treatment because they represented a loss of income, extended isolation from community and family, as well as an interruption in the grief process. 18,19 Participants suggested that indigenous navigators could potentially offer better culturally tailored support and linguistically tailored resources and promote patient-provider trust.

Digital health

Three primary studies included information on digital health. One study found that using caregiver navigators in combination with web-based tools could connect participants to existing social support services, resulting in valued discussions with patients. 99 Another study indicated that using an online patient navigator tool to complement the information provided during a consultation with a health care provider resulted in increased patient satisfaction, with lower reported anxiety levels. 96 Finally, a study on navigation delivered by telemedicine (with the mode of delivery depending on patient preference) was identified as feasible and useful in a resource-limited setting. 95 Furthermore, the study authors reported few barriers to implementation and delivery.

Caregivers

Three studies focused on patient navigation concerning caregivers. Caregivers from one study reported that their use of an online navigation tool was helpful, and they were satisfied, appreciating having someone focused on their unique needs. 99 Another study found improvements in anxiety and depression in caregivers and patients who received navigation through telehealth. 121 That study also indicated efficacy for patients, suggesting that the support provided to caregivers through navigation enabled them to care for patients more effectively, to the benefit of patient outcomes. 121 Another study found that the perceptions of care coordination among family caregivers were poorer than among patients because of their previous experiences. 140
DISCUSSION

Although several systematic reviews already existed in the literature, this umbrella review adds to the literature because it is the first to summarize and harmonize the existing reviews and emerging literature. This umbrella review approach is useful because it provides a helicopter view of the systematic reviews available concerning patient navigation. For example, the cancer types, demographics of populations, navigation types, and modes of delivery covered by the wide array of systematic reviews varied significantly (see Table S4). Although the positive benefits and evidence gaps highlighted in this umbrella review do not differ from the conclusions made by previous systematic reviews, this consistency further adds to the confidence and robustness of the existing evidence that underpins patient navigation. Underserved communities may particularly benefit from navigation that provides culturally appropriate and relevant education and assistance. In particular, patient navigation could provide information for these underserved communities that supports their understanding of the health system and facilitates the accessibility of the health system while simultaneously addressing any barriers to accessing quality health care.

Workforce planning and preparation

Although patients who have cancer and caregivers appreciate support and navigation at every step of the trajectory, it may not necessarily be beneficial or practical for navigation to be provided by the same person or type of professional throughout the cancer continuum. Accordingly, it is important for health service planners and policymakers to undertake robust workforce planning and determine how navigation support can be implemented across the continuum. Skill sets of the patient navigation workforce may be a key factor for consideration, as explored in a recent mixed-methods study. 145

Clarification of roles in patient navigation (for navigators) is of critical importance. In particular, there has been a recent call for action to further differentiate patient navigators and community health workers. 146 It is vital that the public has confidence in accessing a workforce that has a level of consistency in its service provision and outcomes. Wells and colleagues advocated for a consensus approach to determine the core competencies of patient navigators, 147 whereby such competencies would be able to inform subsequent training curricula and approaches.

Another consideration relating to workforce planning and preparation is the effect of funding models and lack of reimbursement on job security and a sense of being valued as an individual and as a workforce more broadly. 146 Indeed, Garfield and colleagues 148 report that nonclinical patient navigators in the United States described significantly lower levels of job security and stability because grant funding provides the main source of funding for this workforce, which highlights a potential for service disruption and a lack of personnel and workforce continuity during periods when grant funding is unable to be received.
Navigation support throughout the cancer continuum

In some countries, such as the United Kingdom, Australia, and Canada, there may already be professional workforces that provide some level of navigation support for people with cancer during the treatment, survivorship, and palliative or end-of-life care phases. Prominently, these professional groups may be specialist cancer nurses, care coordinators, and oncology social workers. Although navigation may not be the sole focus of their role, as discussed above, their day-to-day role is dynamic and may cover a range of patient navigation activities. 149 It is also important to recognize that our overview of systematic reviews found very limited numbers of studies in the literature on the effectiveness of patient navigation in the palliative and end-of-life care phase. There are two potential implications. First, patient navigation as a model of care may have a limited role for people with cancer in the palliative and end-of-life care phase; and, second, specialist palliative care providers, primary care providers, or community programs may already be providing the level of support required by these people with cancer in the palliative and end-of-life care phases.

Evaluation of cancer patient navigation effects

Evaluating the effect of patient navigator services in cancer care is an important requirement within a value-based care system. Battaglia and colleagues 150 surveyed 538 patient navigation programs across the continuum of care in the United States, highlighting that only one half of these programs used data for reporting purposes. Of the 538 programs, 374 used electronic medical records, and only 25% of those 374 had an identifier for navigated patients using their service. Program funding was identified as the key limiting factor associated with data collection. Respondents participating in an oncology accreditation program were more likely to collect and use outcome data across the continuum. Lack of time (55%) and lack of support (50%) for complex data systems/platforms were the most common barriers to outcome data collection and reporting. In the survey used by Battaglia and colleagues, 150 there were useful metrics for consideration in future data-collection and reporting activities. These metrics covered screening (eight items), cancer treatment (five items), survivorship (five items), and end-of-life care (five items).

Future directions

Based on the findings and lessons learned from the literature, a list of recommended considerations is outlined in Figure 5.
Primary research question:
1. What is the effectiveness and cost-effectiveness of different cancer patient navigation models and programs?

Secondary research questions:
1. What is cancer patient navigation, and what are the patient requirements and needs for navigation through the cancer care pathway?
2. What are the key elements (domains) of patient navigation? Which components of navigation models and programs are effective? Which groups of individuals benefit from them?
3. What does the literature and evidence report in relation to cancer patient navigation, and what are the key gaps and limitations in the literature or evidence?
4. What are the facilitators and barriers associated with implementation of cancer patient navigation?
5. What are the patient, caregiver, and provider experiences with patient navigation in cancer care?

The following databases were searched for peer-reviewed systematic reviews published in English from 2012 through April 19, 2022: the Cochrane Database of Systematic Reviews (CENTRAL), PubMed, EMBASE, Cumulative Index of Nursing and Allied Health (CINAHL; on EBSCOhost [EBSCO Industries, Inc.]), Epistemonikos, and PROSPERO. Searches were also conducted through the Turning Research into Practice (TRIP) and World Health Organization databases, Google Scholar, and the Agency for Healthcare Research and Quality platform to ensure the retrieval of all relevant articles.

An operational definition of patient navigation was used, which refers to barrier-focused interventions that are harmonized by five key characteristics involving:
1. Provision of services for an individual for a defined episode of cancer care;
2. A defined end point at which provided services are complete;
3. A defined set of health services required to finalize an episode of cancer-related care;
4. The identification of individual patient-level barriers to accessing cancer care; and
5. The aim to reduce delays in accessing cancer care services (e.g., timeliness of diagnosis and treatment) and in the number of patients lost to follow-up.

One study reported that navigation in screening programs was cost-effective, with low incremental costs (<$650 US dollars on average) per additional person screened. Again, because of its limited focus in a specific population in the United States and because cost-effectiveness was assessed in terms of an interim outcome of additional persons screened (as opposed to the quality-of-life and/or survival effects of patient navigation in cancer care), it is not possible to draw any definitive conclusions about the cost-effectiveness of patient navigation in cancer care or the applicability of this study to other health systems. Three systematic reviews focused on the health economics evidence pertaining to patient navigation across the cancer care continuum.
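The figure of <$650 per additional person screened is an incremental cost-effectiveness ratio with persons screened as the effect measure. As a minimal sketch of how such a ratio is computed, assuming hypothetical program costs and screening counts (none of the numbers below are taken from the cited reviews):

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER) where the
# effect measure is "additional persons screened". All inputs are hypothetical
# placeholders, not data from the reviews discussed in the text.

def cost_per_additional_person_screened(
    cost_navigation: float,    # total cost of the navigation program arm
    cost_usual_care: float,    # total cost of the usual-care comparison arm
    screened_navigation: int,  # persons screened in the navigation arm
    screened_usual_care: int,  # persons screened in the usual-care arm
) -> float:
    delta_cost = cost_navigation - cost_usual_care
    delta_screened = screened_navigation - screened_usual_care
    if delta_screened <= 0:
        raise ValueError("No incremental screening effect; the ratio is undefined.")
    return delta_cost / delta_screened

# Example: $120,000 of extra spending that yields 200 additional screened
# persons gives $600 per additional person screened.
print(cost_per_additional_person_screened(320_000, 200_000, 1_400, 1_200))
```

As the text notes, persons screened is an interim outcome; a fuller economic evaluation would place quality-of-life or survival effects in the denominator.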
Several included reviews provided insights into barriers and facilitators related to the implementation of navigation programs for cancer screening and early detection. System-level barriers to the implementation of navigation screening programs included the inability to maintain an updated electronic medical records system, to sustain funding for a navigator position, and to bill (insurers or payers) for, or reimburse, nonclinical navigators working in community settings. Provider-level barriers included the inability to contact individuals for follow-up. Individual-level barriers to uptake of navigation services included lack of referral from providers, distrust in the health care system, low health literacy, geographic isolation, and societal beliefs. Organization-level or system-level facilitators to navigation implementation included the development and integration of screening policies, clinic protocols, and tracking mechanisms and the establishment of partnerships between navigation services, screening clinics, and specialists. Provider-level facilitators included ensuring well-developed training procedures, competency assessment, and proper supervision for navigators. Strategies that facilitated individual uptake of navigation services included education on early detection and access to care, using telecommunication, additional tailored phone calls to assess barriers and provide practical support, and incorporating culturally specific and sensitive content.

FIGURE 5. Implications for cancer patient navigation for providers, researchers, and policy makers.

In addition, this overview of systematic reviews identified multiple key evidence gaps in research. First, most of the research was conducted in the United States, and, although we acknowledge that the United States itself has different health care systems, there is limited evidence assessing the effectiveness of patient navigation in different countries. This includes health economic cost-effectiveness because the US data have limited applicability to other systems when evaluating this measure. Second, most patient navigation research has focused on cancer screening programs and prediagnosis phases, predominantly for breast, cervical, and colorectal cancers. Future research should dedicate focus toward evaluating the effectiveness of patient navigation in other common cancers, such as prostate cancer, lung cancer, and melanoma; rare cancer types; and hematologic malignancies. Furthermore, cancer stage was rarely reported in the literature, and the effectiveness of patient navigation interventions for patients with advanced or metastatic cancers and those in palliative care and end-of-life care settings needs to be explored. 151 Third, there is also a lack of evaluation including solid clinical end points (such as survival). Although such clinical end points may be more distal outcomes of navigation, it is important for future evaluations to consider the inclusion of such end points to further support the sustainability of such programs. Fourth, a scarcity of evidence was published pertaining to technology-based patient navigation solutions, including the use of online bots or artificially intelligent systems. It is expected that electronic aids or tools can assist with enhancing the longer term efficiency, sustainability, and scalability of patient navigation interventions. Fifth, because of the heterogeneity and various practices in the vast literature, it was not within the scope of this overview of systematic reviews to dissect the literature at the microlevel to differentiate navigation provided by various professional groups (e.g., social worker navigators vs.
other social workers). Future policy research is needed to inform consensus best-practice standards (including standardized definitions and criteria) for cancer patient navigation that are specific to the context. Such a practice framework should seek to definitively clarify the work scope and training requirements of the patient navigation workforce in cancer care. Finally, research into indigenous populations worldwide is needed to understand the unique cultural factors facing indigenous people, including their pathways to health and well-being and their access barriers to cancer care. This information is vital for developing appropriate patient navigation services that support indigenous peoples.

CONCLUSION

To our knowledge, this review is the first overview of systematic reviews and emerging literature on patient navigation across the cancer continuum, highlighting patient navigation as effective for improving uptake of cancer screening programs for breast, cervical, and colorectal cancer as well as shortening time frames from screening to diagnosis and from diagnosis to treatment initiation. There is also some emerging evidence suggesting that patient navigation has positive effects on patients' quality of life, satisfaction with care in the survivorship phase, and hospital use from active treatment to survivorship. Economic evaluations from the United States suggest the potential cost-effectiveness of navigation in screening programs. Further evaluations outside the US context are required. Patient navigation interventions hold significant promise for optimizing cancer control. This review contains recommendations and future directions for consideration. Careful, context-specific planning that includes policy actions to facilitate funding models is required to maximize the consistency, sustainability, and effectiveness of patient navigation in cancer across various countries.
TABLE 1. Key domains of patient navigation care and intervention components in the included reviews.

Care coordination
- Coming up with individualized plans (e.g., action plans, return-to-work plans)
- Reminders to patients and/or providers
- Ensuring availability of medical records
- Scheduling and arranging appointments
- Facilitating linkages and/or providing referrals to follow-up services and support
- Liaising/communicating with health care providers
- Serving as primary contact person or reference of care
- Providing a link between acute and community services
- Assisting transitions across settings and providers
- Monitoring and/or following up with patients

Education/information provision
- Education (including one-on-one or group education)
- Information provision (which may also be tailored, include use of decision aids, involve clarifying doubts/providing explanations, or include information about available services and resources or test instructions)
- Providing take-home learning materials
- Use of media campaign or materials

Providing assistance with financial and health insurance
- Assisting in completing paperwork and/or making financial applications
- Reducing out-of-pocket costs using vouchers and reimbursements
- Assisting in sourcing low-cost sources of care

Empowerment
- Problem solving with individuals (e.g., physical, psychological, and social issues)
- Encouragement/motivation (e.g., using strengths-based approach or video testimonials)
- Guidance (e.g., guiding patients to identify concerns/preferences, guidance in administering self-test, guidance on treatment and diagnostic tests)
- Communication coaching (to facilitate communication between patients and health care providers)
- Counselling (e.g., psychosocial, medical, or barriers counselling, motivational interviewing)
- Promoting self-care (e.g., home-based exercise and relaxation)
- Assisting in self-management (e.g., psychosocial stress management, symptom management)
- Self-help group support
- Caregiver support or family counselling
- Fostering social interactions
- Peer modelling (e.g., via survivor narratives)
- Skills training/building

Comfort/emotional support
- Emotional, social, or psychosocial support (e.g., accompanying patients during appointments, providing practical advice or coping strategies, or providing an avenue for patients to get help or advice)
- Providing culturally safe environment
- Ensuring female physicians are available
- Spiritual support

Direct care provision
- Provision of direct nursing care/services
- Symptom management

Of the N = 54 quantitative or mixed-methods reviews, N = 16 reported that navigators received training. Navigators could be members of the same community as the target populations (e.g., indigenous or African American navigators). Navigators included multidisciplinary teams, psychosocial teams, cancer depression clinical specialists (e.g., psychologists or psychiatrists), physicians (e.g., general practitioners, primary care providers, or medical interns), nurses (e.g., nurse case managers, advanced practice nurses, specialized screening nurses, nurse specialists, nurse practitioners, advanced nurse practitioners, registered nurses, enrolled nurses, and nurse coordinators [who may be clinic-based, community-based, or home-based and who may specialize in oncology]), social workers, dental hygienists, and case managers. Some reviews also included interventions that were delivered without a third-party navigator (i.e., digital or paper-based).
74,76,89 Intervention duration and frequency were not well reported across reviews. Where reported, individual navigation sessions ranged from 5 minutes to 3 hours, and the period ranged from 1 week to 7 years. Navigation frequency ranged from a single contact to multiple contacts (i.e., 18 contacts), or navigation could be carried out weekly, monthly, or as needed.

Definitions of patient navigation have been put forward by bodies such as the Association of Oncology Social Work and the National Association of Social Workers, 142 as well as the recent American National Navigation Roundtable. 27 Because patient navigation is potentially being adopted in countries outside the United States, it is important for policy makers to be explicitly clear about the intent and conceptual definition at the start of any navigation program implementation.

The systematic reviews and primary studies included in this review confirm that the operational definition used in this umbrella review is sufficiently inclusive in capturing navigation interventions. Although there is no specific reason to propose any drastic changes to the operational definition, after synthesizing the available evidence, we recommend the refined definition presented in Figure 4. Patient navigation is particularly relevant for underserved populations (e.g., those who lack health insurance or live in rural or remote areas). In underserved groups, addressing access barriers, including transport and finance, was generally required in patient navigation programs, whereas all population groups found that the education, care coordination, and emotional support components of patient navigation programs were effective and beneficial. Ideally, individually tailored navigation programs are required to address personal barriers to accessing cancer care. The most common component of patient navigation was patient education and information, which was provided through one-on-one sessions, either face-to-face or by telephone, in group education sessions, and by means of written information and mass media.

FIGURE 4. Recommended definition of patient navigation across the cancer care continuum.
NLRP3/Caspase-1 inflammasome activation is decreased in alveolar macrophages in patients with lung cancer

Lung cancer (LC) remains the leading cause of cancer-related mortality. The interaction of cancer cells with their microenvironment results in tumor escape or elimination. Alveolar macrophages (AMs) play a significant role in lung immunoregulation; however, their role in LC has been overshadowed by the study of tumor-associated macrophages. Inflammasomes are key components of innate immune responses and can exert either tumor-suppressive or oncogenic functions, while their role in lung cancer is largely unknown. We thus investigated the NLRP3 pathway in bronchoalveolar lavage-derived alveolar macrophages and peripheral blood leukocytes from patients with primary lung cancer and healthy individuals. IL-1β and IL-18 secretion was significantly higher in unstimulated peripheral blood leukocytes from LC patients, while IL-1β secretion could be further increased upon NLRP3 stimulation. In contrast, in LC AMs, we observed a different profile of IL-1β secretion, characterized mainly by the impairment of IL-1β production in NLRP3-stimulated cells. AMs also exhibited an impaired TLR4/LPS pathway, as shown by the reduced induction of IL-6 and TNF-α. Our results support the hypothesis of tumour-induced immunosuppression in the lung microenvironment and may provide novel targets for cancer immunotherapy.

Introduction

Lung cancer remains the leading cause of cancer-related mortality in both males and females. Approximately 85% of lung tumours are non-small cell lung cancer (NSCLC), including adenocarcinoma, squamous cell carcinoma and large cell carcinoma [1]. Although the majority of lung cancer patients are smokers, only a minority among smokers will develop this disease, strongly suggesting that additional environmental determinants, including infections, in a background of genetic susceptibility, drive disease initiation and progression [2]. The crosstalk between inflammation and tumorigenesis is a field of active research, and although cell-autonomous aberrations are considered the initiators of cancer, chronic inflammation has been shown to promote cancer initiation and progression [3]. Recent studies in tumor immunosurveillance propose a strong relationship between lung cancer risk factors and alterations in inflammatory cytokine levels, oxidative stress markers and immune cell composition [4]. The interaction of cancer cells with their microenvironment, particularly with immune cells, results in stimulatory or inhibitory effects that lead either to tumor escape or to elimination [5,6]. Alveolar macrophages (AMs) play a critical role in lung immunoregulation and potentially in the prevention of lung diseases, including lung cancer. AMs regulate local inflammatory reactions via the release of cytokines and phagocytosis. Pro-inflammatory macrophages induce synthesis and upregulation of several pro-inflammatory cytokines and chemokines through activation of the NF-kB and MAPK cascades. Key among these are TNF-α, IL-12, IL-6, CCL2 and interleukin-1β (IL-1β). At the other extreme, macrophages are alternatively polarized to anti-inflammatory states by stimuli such as IL-4, IL-13, IL-10 or glucocorticoid hormones [7,8]. These macrophages upregulate IL-1 receptor antagonist and downregulate IL-1β and other pro-inflammatory cytokines. Studies investigating the role of AMs in lung cancer have provided inconsistent results.
Whilst some studies report increased cytotoxic activity and antitumor effects after AM activation, others have reported decreased cytotoxic activity and protumor effects [9-11]. A dual role for macrophages in lung cancer has therefore been suggested. A central mechanism driving inflammation in immune cells is orchestrated by the inflammasome. Inflammasomes are multi-molecular protein complexes responsible for Caspase-1-driven activation of the pro-inflammatory cytokines IL-1β and IL-18 [12]. They are involved in innate immunity by recognizing pathogen-associated molecular patterns (bacteria, viruses and fungi) and intracellular and extracellular damage-associated molecular patterns [12]. IL-1β and IL-18 exert pleiotropic effects in inflammation and tumorigenesis. Mature IL-1β is primarily produced by monocytes and macrophages; although absent under normal conditions, its transcription is strongly induced during inflammation and stress [13]. The most well-studied inflammasomes are NLRP3, activated by various stimuli; NLRC4, activated by bacterial flagellin; and AIM2, activated by cytosolic double-stranded DNA [14]. NLRP3 is activated by the widest array of stimuli, microbial or sterile. In most cells, NLRP3 must be primed by a toll-like receptor (TLR)/nuclear factor-κB signal [15] to upregulate the expression of pro-IL-1β, pro-IL-18 and NLRP3. Once primed, a second signal, provided by various pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), pore-forming toxins, adenosine triphosphate (ATP), or particulate crystals, leads to the cleavage of active IL-1β and IL-18 by caspase-1 [14]. Inflammasomes can modulate tumor immunity during carcinogenesis, playing a role in immunosuppression, and could also shape the response to anti-tumor vaccines [9,16]. The effects of NLRP3 activation in neoplastic progression may be contingent on the differential roles played by immunity and inflammation in each specific malignancy [17]. In human lung cancer, the field of inflammasome mediators and inflammasome activation remains relatively unknown. Interestingly, exploratory data from a randomized trial proposed that IL-1β inhibition results in a decreased incidence of lung cancer in patients with high CRP [18]. In this view, we aimed to investigate an IL-1β-specific pathway, the NLRP3 inflammasome, in lung cancer peripheral blood leukocytes and in alveolar macrophages obtained from bronchoalveolar lavage, and to characterize this inflammatory process in primary lung cancer in the periphery and locally in the lungs. Our results showed distinct NLRP3/Caspase-1-dependent IL-1β maturation in alveolar macrophages relative to peripheral blood cells, with strong stimulation in the periphery and reduced stimulation locally in the lungs of the NSCLC and small cell lung cancer (SCLC) groups.

Patients

Thirty-one (31) consecutive subjects were enrolled in this study in three groups: a non-small cell lung cancer (NSCLC) group (n = 15), a small cell lung cancer (SCLC) group (n = 4) and a control group (n = 12). Lung cancer patients underwent bronchoscopy at the diagnosis of cancer, and none had received lung cancer therapy at the time of the experiments. The control group consisted of patients undergoing bronchoscopy for the investigation of haemoptysis. During the investigation of haemoptysis, no underlying disease was discovered, and the bronchoscopy findings and cytology results were normal.
No inhaled medications [19] were used by any of the participants before or at the time of sample collection. Subjects who had experienced respiratory infections within 6 weeks prior to bronchoscopy were excluded. Informed consent was obtained from all patients. The study was approved by the Ethics Committee of the University Hospital of Heraklion, Crete, Greece (17517/19-12-2013).

Biological samples and processing

Bronchoalveolar Lavage Fluid (BALF) was obtained from all patients and controls, and isolation of the macrophage population was performed as previously described [20,21]. In brief, a flexible bronchoscope was wedged into a subsegmental bronchus of a predetermined region of interest based on radiographical findings. The BALF technique was performed by instilling a total of 240 mL of normal saline in 60-mL aliquots, each retrieved by low suction. The volume received was the same for all samples. The BALF fractions were pooled and split equally into two samples. One sample was sent to the clinical microbiology and cytology laboratory, and the other was kept at room temperature (RT) and used for this research. BALF was passed through a 70 μm filter (Millipore) to separate cells in suspension from debris and mucus. To pellet cells, samples were centrifuged at 1,500 rpm for five minutes at RT. The supernatant was discarded, and the cells were resuspended in 4 ml RPMI medium with 10% FBS and 10x penicillin/streptomycin, followed by cell counting in a Neubauer haemocytometer. Equal amounts of BALF sample cells were loaded onto six-well plates, using RPMI supplemented with 10% heat-inactivated FBS as culture medium. BALF sample cells were observed using an inverted microscope to verify adherence of macrophages. Heparin-anticoagulated whole blood from patients and controls was obtained at room temperature. Whole blood was used after red blood cell lysis (RBC Lysis Buffer, eBioscience Product No 00-4333-57). A white blood cell count, in parallel with the BALF cell count, was performed using a Neubauer chamber. Equal amounts of blood and BALF sample cells were loaded onto six-well plates, using complete RPMI as culture medium.

Intracellular IL-1β quantification

Intracellular IL-1β protein levels (normalized to beta-actin) were assessed by immunoblotting, as previously described [21]. In detail, BALF macrophages and blood leukocytes from 19 lung cancer patients and 12 control subjects were homogenised in RIPA buffer, and 50 μg of protein per sample were separated by 12.5% SDS-polyacrylamide gel electrophoresis. Proteins were subsequently transferred to nitrocellulose membranes, and mature IL-1β protein was detected with a rabbit polyclonal antibody against IL-1β (17 kDa protein) (Cell Signaling Technology cat#2022) and enhanced chemiluminescence. A mouse anti-actin antibody (MAB 1501, Chemicon, Temecula, CA) was used to normalize IL-1β expression. Films were scanned, and the protein lanes were quantified using Photoshop CS2 image analysis software (Adobe Systems Inc., CA). All reagents were pyrogen-free for all studies. No independent activation of macrophages by contaminants was observed.

Statistical analysis

Secreted IL-1β and IL-6 levels, cytosolic IL-1β levels, and qRT-PCR relative expression results were evaluated using the one-sample Kolmogorov-Smirnov goodness-of-fit test, Student's t-test and the Mann-Whitney U test. Values reported are means ± SD (standard deviation). Statistical analysis was carried out using SPSS 17.0 (Chicago, IL, USA). Statistical significance was set at the 95% level (P-value < 0.05).
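The cell counts above follow the standard improved-Neubauer chamber arithmetic, in which each large square holds 0.1 μL, so the mean count per large square is scaled by 10^4 (and by any dilution) to give cells per millilitre. The function below is a generic sketch of that calculation, not code used in the study:

```python
# Generic improved-Neubauer haemocytometer arithmetic (illustrative, not study code).
# Each large square covers 1 mm^2 at a chamber depth of 0.1 mm, i.e., 0.1 uL
# (1e-4 mL), so cells/mL = mean count per large square * dilution factor * 1e4.

def cells_per_ml(counts_per_large_square: list[int], dilution_factor: float = 1.0) -> float:
    mean_count = sum(counts_per_large_square) / len(counts_per_large_square)
    return mean_count * dilution_factor * 1e4

# Example: four large squares counted on a sample diluted 1:2.
print(cells_per_ml([52, 48, 50, 46], dilution_factor=2.0))  # 980,000 cells/mL
```

Likewise, the test-selection strategy described (normality screening with the one-sample Kolmogorov-Smirnov test, then a parametric or nonparametric two-group comparison) was run in SPSS; a rough Python equivalent with made-up data, shown only to make the decision rule explicit, would be:

```python
# Rough illustration of the described test-selection workflow (the study used
# SPSS; this version with invented data only makes the logic explicit).
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # One-sample KS goodness-of-fit test of each group against a fitted normal.
    both_normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
        for x in (a, b)
    )
    if both_normal:
        return "Student's t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U test", stats.mannwhitneyu(a, b).pvalue

name, p = compare_two_groups([1.2, 0.9, 1.1, 1.4], [2.0, 2.3, 1.8, 2.1])
print(name, f"p = {p:.3f}")  # significant at the 95% level if p < 0.05
```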
Patient characteristics

Demographics of patients and controls are shown in Table 1. Fifteen (15) patients were diagnosed with NSCLC and four (4) with SCLC. Both lung cancer groups showed no differences from controls in age, smoking status, sex or BAL fluid cell populations (Table 2). The control group included 4 current smoker females and 7 current smoker males; the NSCLC group included 1 former smoker female and 3 former smoker males as well as 2 current smoker females and 7 current smoker males; and the SCLC group included 1 current smoker female and 2 current smoker males. Two lung cancer patients and two control patients had Chronic Obstructive Pulmonary Disease (COPD) according to GOLD criteria [24], while no other overt pulmonary comorbidities were discovered. The main histological type of NSCLC was adenocarcinoma (11/15 patients), followed by squamous carcinoma (3/15) and large cell carcinoma (1/15). No differences arose in any of the studied cytokines based on histological subtype.

IL-1β and IL-18 secretion is activated in PBMCs and alveolar macrophages in NSCLC and SCLC

Initially, we measured the secretion of IL-1β and IL-18 by unstimulated peripheral blood leukocytes (PBMCs) and alveolar macrophages (AMs) as an indirect indication of inflammasome activity in the periphery and locally in the lungs of the patients, respectively. PBMCs from the NSCLC and SCLC groups released higher levels of IL-1β and IL-18 into the culture medium compared to controls (Fig 1A and 1B). Alveolar macrophages from the NSCLC and SCLC groups also produced higher amounts of IL-18, but not IL-1β (Fig 1C and 1D). NLRP3 activation requires a two-step process: in the priming step, IL-1β and other key components of the inflammasome are overexpressed. Therefore, in order to determine whether the first step occurred properly, we measured cytosolic levels of IL-1β. Interestingly, only control PBMCs and AMs retained mature IL-1β in their cytoplasm, whereas no intracellular levels of mature IL-1β were detectable in the cells from NSCLC and SCLC (S2A and S2B Fig).

IL-1β and IL-18 secretion via NLRP3 stimulation is decreased in NSCLC and SCLC AMs

Next, we assessed the production of mature IL-1β and IL-18 by PBMCs and AMs following NLRP3 inflammasome stimulation. We used a TLR4 agonist as the priming signal and ATP as the second "danger" signal, as described in Materials and Methods and S1 Fig. To verify that IL-1β and IL-18 secretion was due to LPS/ATP activation of the NLRP3 inflammasome, we also stimulated the cells in the presence of Caspase-1 inhibitor I. We observed that in PBMCs from NSCLC and SCLC, the already elevated IL-1β secretion could be further stimulated in an NLRP3/Caspase-1-dependent fashion (Fig 2A), in contrast to the levels of IL-18 (Fig 2B). In AMs, we observed a different profile of IL-1β secretion than in PBMCs: IL-1β secretion did not increase upon NLRP3/Caspase-1 inflammasome stimulation in NSCLCs and SCLCs (Fig 2C), similarly to IL-18 (Fig 2D). Overall, controls followed a canonical IL-1β and IL-18 secretion profile upon NLRP3/Caspase-1 inflammasome stimulation in AMs.

mRNA levels of NLRP3 are significantly reduced in AMs in NSCLC and SCLC

To further investigate the reduced production of mature IL-1β and IL-18 by AMs, we examined the expression of NLRP3 by RT-PCR. Our results showed that NLRP3 mRNA was significantly downregulated in AMs from NSCLC and SCLC (Fig 3A). Interestingly, Caspase-1 mRNA was significantly upregulated in NSCLC and SCLC (Fig 3B).
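The text does not specify how the qRT-PCR relative expression was calculated; the comparative 2^(-ΔΔCt) method is the most common choice for results of this kind, so the sketch below assumes that method and uses invented Ct values purely for illustration:

```python
# Comparative 2^(-ddCt) relative expression (assumed method; Ct values invented
# for illustration, not taken from the study).

def relative_expression(ct_target, ct_reference, ct_target_control, ct_reference_control):
    """Fold change of a target gene versus a reference gene,
    normalized to a control (e.g., healthy) sample."""
    d_ct_sample = ct_target - ct_reference                   # normalize to reference gene
    d_ct_control = ct_target_control - ct_reference_control
    dd_ct = d_ct_sample - d_ct_control                       # normalize to control sample
    return 2 ** (-dd_ct)

# Example: a target whose Ct rises from 24 (control) to 27 (patient) against a
# stable reference gene gives ~0.125-fold expression, i.e., downregulation of
# the kind reported for NLRP3 mRNA in Fig 3A.
print(relative_expression(27.0, 18.0, 24.0, 18.0))  # 0.125
```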
Pro-inflammatory cytokine (IL-6 and TNF-α) stimulation is also decreased in AMs from NSCLCs and SCLCs

Next, we tested the levels of IL-6 and TNF-α in the unstimulated and the LPS-stimulated cells. AMs from NSCLCs and SCLCs secreted significantly lower amounts of IL-6 and TNF-α than the controls (Fig 4A and 4B). In addition, LPS stimulation resulted in a significantly lower induction of IL-6 and TNF-α expression by AMs from NSCLCs and SCLCs relative to controls (Fig 4C and 4D). Unstimulated lung cancer peripheral leukocytes produced higher IL-6 levels than the controls (Fig 4E), while under the experimental conditions used, the 2 h stimulation of PBMCs with a low dose of LPS did not induce IL-6 secretion in either the control or the patient groups (Fig 4G). In contrast, TNF-α secretion by the unstimulated PBMCs was significantly lower in the lung cancer patients relative to controls (Fig 4F). TNF-α levels in the PBMCs could be further induced by a low dose of LPS in all groups, and although stimulation was stronger in both NSCLC and SCLC (Fig 4H), overall TNF-α levels remained very low in the lung cancer groups. No significant difference between the COPD patients and any other group (control or lung cancer) was found at baseline or after NLRP3 induction in any of the studied cytokines.

Discussion

The NLRP3 inflammasome's expression and role in lung cancer are relatively unknown, while inflammatory reactions can exert a dual influence on tumor growth and progression [14]. Recently, studies have proposed a strong relationship between lung cancer risk factors and alterations in inflammatory cytokine levels, oxidative stress markers and immune cell composition [4]. In this view, we aimed to investigate a central innate immune response pathway, the NLRP3 inflammasome, in lung cancer PBMCs and AMs, in order to gain insight into inflammatory processes related to LC. Our primary finding, that alveolar macrophages show decreased transcriptional and secretional markers of TLR4-mediated NLRP3 activation, suggests that in human lung cancer, innate immune responses in alveolar macrophages are compromised. Additionally, the LC group exhibited a pro-inflammatory state systemically, which could be attributed to a pro-inflammatory state of the patients that led to the LC, supporting the inflammation-induced cancer hypothesis [25]. It has been suggested that chronic inflammation can cause immune suppression in cancers [7], while differential expression of inflammasomes has been observed in various human lung cancer lines and tissues [26]. Variations in inflammasome response have been shown to occur depending on histological subtype, staging and invasive potential [27,28]. Our results show that, locally, LC AMs were unable to activate the NLRP3 inflammasome, while at baseline they produced similar amounts of IL-1β and higher amounts of IL-18 compared to controls. Additionally, AMs produced low levels of IL-6 and TNF-α, which did not increase upon LPS stimulation. Our results suggest that AMs in lung cancer exhibit an altered polarization status [29], similar to tumor-associated macrophages (TAMs). The phenotype of monocytes/macrophages plays a significant role in tumor progression. Depending on the type of malignancy and the prevalent polarization status, macrophages are associated with favorable or unfavorable clinical outcomes [30].
Classically activated M1 macrophages usually activate Th1 cells to provoke cellular immunity and cytotoxicity, and their presence within tumor islets has been associated with improved outcomes [31]. Alternatively activated M2 macrophages tend to function with Th2 cells, leading to tumor progression and immunosuppression [32]. In the tumor microenvironment, the most frequently found immune cells are TAMs [33], which promote tumor growth, invasion and metastasis. Of note, the phenotype associated with cancer initiation is considered to be the activated M1 [8]. At the other extreme, in established tumors, the macrophage phenotype changes from the "inflammatory" phenotype to the alternatively activated/trophic M2 phenotype of TAMs [34]. Interestingly, M2-polarized macrophages tend to release lower levels of IL-1β after NLRP3 activation [35]. In contrast to TAMs, AMs in lung cancer progression are not well studied. A study has provided evidence that AMs in lung tumor-bearing mice favor Th2 responses and facilitate metastasis [36]. Our research group has shown that lung-specific involvement of the NLRP3 inflammasome is also impaired in idiopathic lung fibrosis, in agreement with the reported anti-inflammatory/pro-fibrotic profile of alveolar macrophages [22].

Fig 4. IL-6 and TNF-α secretion by AMs and PBMCs in NSCLC and SCLC relative to controls. Concentration of IL-6 (a) and TNF-α (b) measured in the supernatants of unstimulated cultures of AMs following a 2.3-h period. Induction of IL-6 (c) and TNF-α (d) in AM supernatants by LPS (250 pg/ml, 2 h incubation at 37 °C in a 5% CO2 humidified incubator). Concentration of IL-6 (e) and TNF-α (f) measured in the supernatants of unstimulated cultures of PBMCs following a 2.3-h period. Induction of IL-6 (g) and TNF-α (h) in PBMC supernatants by LPS (250 pg/ml, 2 h incubation at 37 °C in a 5% CO2 humidified incubator). Results in a, b, e and f are shown as mean ± SD pg/ml, normalized per million cells cultured. Results in c, d, g and h are shown as mean ± SD pg/ml, normalized to unstimulated. https://doi.org/10.1371/journal.pone.0205242.g004

Inducible inflammatory responses in AMs from LC were decreased, as shown by our study. This could either be attributed to cancer-induced chronic inflammation resulting in immunosuppression, or it could describe a pre-existing condition resulting in the emergence of cancer. In the context of chronic inflammation, it has already been suggested that in COPD, a global impairment in TLR signaling and phagocytosis is evident [37]. Underlying lung disease (as pulmonary function testing revealed in terms of FEV1) was scarce and statistically insignificant. Thus, we propose that in the present cohort, FEV1, an established independent variable of alveolar macrophage dysfunction, did not need to be taken into account. Furthermore, impairment of AM function has previously been associated with disease severity [38]. In our experiments, we used solely a TLR4 stimulus; thus, we cannot provide evidence of a global TLR signaling impairment in AMs. Furthermore, it has been established that NLRP3 priming is an NF-kB-dependent process [39], and chronic NF-kB signaling leads to LPS-tolerant macrophages [39]. Additionally, inhibition of NF-kB can reprogram TAMs towards an M1 phenotype [40]. Smoking can lead to NF-κB-induced chronic airway inflammation and accumulation of M2-polarized alveolar macrophages [41].
Smoking inhalation has been shown to increase serum TNF-α and IL-6 [42], which remain relatively undetectable in healthy non-smokers. Long-term exposure to tobacco smoke accounts for the majority of lung cancers, and individual chemical components of cigarette smoke act as triggers for the inflammasome [43-45]. Furthermore, ceramide is highly induced by cigarette smoking [46,47] and is significantly increased in lung tissue from patients with smoking-induced emphysema [46]. Of note, ceramide increases oxidative stress and induces caspase-1-dependent and -independent cellular events [48]. Given the pleiotropic interplay of smoking and the inflammasome, the observed variances could be attributed to smoking itself. However, patients enrolled in our study were smokers in both the control and LC groups, and we therefore conclude that the observed differences can mainly be attributed to the presence of lung cancer rather than to smoking. It has been proposed that IL-1 mediates cytotoxicity and suppression of tumor growth [49,50] and that TNF-α inhibits angiogenesis to prevent tumor growth. In contrast, IL-1β has been shown to play a role in the development of the premetastatic niche [51], and targeting the inflammasome-IL-1β pathway has been proposed as a novel approach for the treatment of cancer [52]. Previous studies have shown that IL-1, IL-6, and TNF-α increase in lung cancer patients and that these cytokines progressively decrease as the clinical stage of cancer progresses [11]. The ability of IL-6 to inhibit tumor cell growth has been suggested, as IL-6 functions synchronously with IL-1 and TNF-α to support anti-tumor immunity [53], and IL-6 has been shown to regulate the ability of AMs in lung cancer to be stimulated by IFN-γ and LPS [54]. It has been suggested that targeting IL-6 could potentially improve lung cancer therapeutic techniques [55]. Conversely, in a mouse model, chemoprevention of lung cancer was associated with reduced IL-6 [56]. As previously suggested, decreased TNF-α and IL-1 secretion has been demonstrated in AMs from patients with both NSCLC and SCLC, and reduced IL-6 secretion has been demonstrated in AMs derived from patients with large cell undifferentiated and small cell subtypes [54]. Importantly, a report in tumor-bearing mice has suggested that AMs can produce higher levels of IL-1β after LPS/ATP stimulation [57]. TNF-α, IL-1, and IL-6 promote the induction of Th1 cells, which enable macrophage-mediated killing. Reduced Th1-mediated cytokines may consequently limit the cytotoxic potential of AMs or TAMs and enable tumor progression [11]. Our results further support the hypothesis that the deregulation of these cytokines may arise from a severely impaired NLRP3 pathway. Of note, caspase-1 inhibition resulted in only partial reduction of IL-1β and IL-18 secretion in all groups. Recent advances in the inflammasome research field have provided evidence that LPS alone can alternatively activate NLRP3 to drive IL-1β secretion. In our experiments, we treated cells with the caspase-1 inhibitor following two hours of LPS treatment, and the remaining IL-1β in all groups could be attributed to that effect of LPS. Furthermore, IL-1β was virtually undetectable in the cytoplasm of any of the LC samples, and we hypothesize that the cytoplasmic retention of IL-1β in controls may reflect the collection time rather than hindered secretion.
The role in carcinogenesis of the high IL-18 secretion by lung cancer AMs and PBMCs observed here remains unclear, since IL-18 exerts both antitumor and protumor effects. IL-18 has the ability to inhibit the recognition of cancer cells by immune cells, increase cancer cell adherence to the microvascular wall, induce the production of angiogenic and growth factors, and promote a premetastatic microenvironment [58]. We also noted a differential release of IL-1β and IL-18 at baseline from AMs. Evidence suggests that the expression of inactive pro-IL-1β and pro-IL-18 is differentially regulated by different stimuli [59], and our results at steady state, prior to activation, could represent pre-exposure to a certain stimulus. By contrast to the lung microenvironment, systemically we observed a typical pro-inflammatory pattern in LC. LC PBMCs produced high levels of IL-1β, IL-18, and IL-6, but low levels of TNF-α, and could be further stimulated in the presence of LPS. In accordance with our findings, recent research reported that IL-1β is elevated in the serum of patients with NSCLC, and further in vitro studies suggested that IL-1β promoted the proliferation and migration of NSCLC cells [60]. Increased levels of IL-1 and IL-6 are noticed in lung cancer patients and progressively decrease as cancer progresses [11]. Of note, exploratory data from a large randomized trial provided evidence that in subjects with high CRP, yet otherwise healthy, inhibition of IL-1β by canakinumab reduced the incidence of lung cancer, as well as lung cancer-related mortality. These discrepancies in the periphery, observed in our study, could represent a pro-inflammatory state of individuals leading to the development of lung cancer. By contrast, in the lung microenvironment, the decreased TLR4/NLRP3 axis could represent tumor-mediated immunosuppression. In our experiments, although unstimulated PBMCs from LC patients surpassed controls in IL-6 levels, stimulation of the TLR4 pathway did not induce IL-6 secretion in either group. In contrast, TNF-α secretion by unstimulated PBMCs was lower in lung cancer than in controls, while maintaining inducibility via the TLR4/LPS pathway. TNF-α is a pro-inflammatory cytokine that is cytotoxic for tumor cells. It has been demonstrated that increased TNF-α levels in NSCLC correlate with improved outcomes, and TNF mutants have been proposed as anticancer chemotherapeutic drugs [61,62]. This dichotomous result in terms of antitumor immunity profiling could suggest a deregulation of innate immunity mechanisms early in the inflammasome process. Our study is not without limitations. A direct comparison between AMs in the lung microenvironment and PBMCs was not the scope of our study; rather, we aimed to characterize the inflammasome activity in the two distinct compartments (the tumour lung environment and the periphery). Well-established differences in TLR signaling and immune cell trafficking between peripheral blood monocytes/macrophages and AMs exist in both healthy subjects [63,64] and COPD patients [65,66], which is in agreement with our findings. Another limitation of our study is that only TLR4 stimulation by LPS was used, and a global TLR impairment in LC AMs cannot be excluded, as already reported in COPD AMs [37,38]. Additionally, COPD often underlies LC and, as a disease, is highly linked to macrophage dysfunction [37]; hence, the inclusion of patients with COPD comorbidity in our study is a further limitation that should be acknowledged.
Apart from the enrollment of patients with COPD, the LC group consisted of patients with different histological subtypes and was therefore heterogeneous. However, in our analysis, no differences arose in any of the studied cytokines based on histological subtype.

Conclusion

Through cytokine phenotyping, we suggest that alveolar macrophages in the lung cancer microenvironment are impaired. In AMs, activation of the TLR4/NLRP3 inflammasome was found to be severely decreased, adding to the hypothesis of cancer-mediated immunosuppression. By contrast, a systemic pro-inflammatory state was observed, which may constitute a pro-inflammatory environment that leads to cancer incidence and progression. To date, several antagonists have been developed against components of the inflammasome and have been proposed as anti-cancer therapy. Novel targeting of IL-1β could reduce the incidence of lung cancer in healthy individuals, as already suggested [18]; however, such treatment should be used with caution in established lung cancer, since our results suggest that the NLRP3 inflammasome pathway is already impaired in the lung cancer microenvironment. Given the ability of NLRP3 to promote immunogenic cell death, the activation/reprogramming of dysfunctional AMs could be a novel add-on therapy in established lung cancer.
A Global Review of the Woody Invasive Alien Species Mimosa pigra (Giant Sensitive Plant): Its Biology and Management Implications

Populations of invasive alien plants create disruptive plant communities that are extremely adaptable, imposing severe ecological impacts on agriculture, biodiversity and human activities. To minimise these impacts, prevention and effective weed management strategies are urgently required, including the identification of satellite populations before they invade new areas. This is a critical element that allows weed management practices to become both successful and cost-effective. Mimosa pigra L. (Giant sensitive plant) is an invasive weed that has spread across various environments around the world and is considered one of the world's top 100 most invasive plant species. Being adaptable to a wide range of soil types, in addition to its woody protective prickles and low palatability, M. pigra has quickly spread and established itself in a range of habitats. Current control methods for this species include biological, chemical and physical methods, together with attempts at integrated application. Reports suggest that integrated management appears to be the most effective means of controlling M. pigra, since the use of any single method has not yet proved suitable. In this regard, this review synthesises and explores the available global literature and current research gaps relating to the biology, distribution, impacts and management of M. pigra. The contribution of this work will help guide land managers to design appropriate and sustainable management programs to control M. pigra.

Introduction

Mimosa pigra L. (Giant sensitive plant) is an erect, prickly shrub or small tree, which is native to Tropical America [1]. Outside of its native regions, M. pigra has been reported to cause significant economic and environmental impacts across various environments when appropriate long-term management is not implemented [2-5]. If left untreated, the species can quickly form dense, monospecific, leguminous stands that can spread over thousands of hectares [2-5]. As a result, M. pigra has been listed as one of the top one hundred most invasive plant species in the world and is of particular concern within Africa, Australia and Southeast Asia [5-11]. Mimosa pigra is commonly found growing along floodplains, irrigated landscapes, seasonally wet savannas and waterways, where it strongly competes against native and pastoral species for resources such as light, water and soil nutrients [5,7,12,13]. Dense M. pigra infestations have also been known to significantly impact biodiversity, cropping systems and livestock production as they (i) can smother native and pastoral species, (ii) contain allelopathic properties that suppress the growth of adjacently growing species, (iii) have an aggressive and quick growth habit and (iv) limit water accessibility to livestock and people [5,14-17]. In addition, M. pigra is capable of doubling its population size within 1.2 years when growing adjacent to a river system, although isolated populations away from large water sources may take up to 6.7 years to do so.
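As a back-of-the-envelope illustration of what these doubling times imply under simple exponential growth (the starting population of 100 plants and the 10-year horizon are arbitrary example values, not figures from the cited studies):

```python
# Exponential growth implied by the doubling times quoted above; the starting
# population and time horizon are arbitrary example values, not cited figures.
import math

def projected_population(n0: float, doubling_time_years: float, years: float) -> float:
    # N(t) = N0 * 2**(t / t_double)
    return n0 * 2 ** (years / doubling_time_years)

for t_double in (1.2, 6.7):  # riverside versus isolated populations
    r = math.log(2) / t_double  # intrinsic rate of increase per year
    n_10yr = projected_population(100, t_double, 10)
    print(f"doubling time {t_double} yr: r = {r:.2f}/yr; 100 plants -> {n_10yr:,.0f} after 10 yr")
```

In this example, the contrast after a decade (roughly 32,000 plants versus fewer than 300) underlines why riverside infestations dominate the species' spread.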
In Africa, M. pigra has been recorded in countries including Guinea-Bissau, Guinea, Ivory Coast, Kenya, Liberia, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Rwanda, Senegal, Sierra Leone, Somalia, South Africa, Sudan, Tanzania, Togo, Uganda, Zambia and Zimbabwe, but is only described as invasive in some of these countries [11,19]. This wide distribution indicates the species' ability to adapt to the warm African environment, as opposed to the tropical environments where it is found within its native regions [11,19]. In particular, M. pigra is classified as a category-three invader in South Africa, and therefore the propagation of this plant is prohibited unless special permission is granted by state law [10,11]. It can also be found on the islands of Madagascar and Mauritius [10,11]. Mimosa pigra has also been introduced into Asia, being found in Cambodia, India, Indonesia, Laos, Malaysia, Myanmar, Sri Lanka, Thailand and Vietnam, where it is now recognised as a widespread invasive weed [11]. It is noted that the species was introduced into Thailand as a green manure plant and a cover crop in the 1940s and was subsequently taken to Malaysia and nearby countries as a cure for snake bite [34]. In Australia, M. pigra was introduced sometime before 1891 as a seed contaminant or as a curiosity plant because of its sensitivity to human touch [35]. The plant was regarded as occasionally problematic until the late 1950s, whereupon its shift to open flood plains started to produce challenging monospecific stands [35]. Currently, in Australia, M. pigra is abundant in the Northern Territory, on the floodplains of the Adelaide, Daly, Finniss, Mary and East Alligator River systems, and is listed as a noxious species across the country [10,11]. Small infestations have also been observed in Western Australia and Queensland, where the plants have already been subject to eradication activities [19].

Habitat

In its native region, M. pigra can be found either as a single plant or in thickets, but as a successful invader in other tropical regions of the world, it predominantly occurs as robust thickets, particularly in disturbed lands where there is abundant water [7]. Tropical climates with distinct wet and dry seasons are favoured for its growth, and regions with less than 750 mm of rainfall are not expected to provide suitable invasion sites, with the exception of areas directly around water bodies [25]. In rainforests where the typical rainfall is above 2250 mm, M. pigra establishment is relatively unlikely due to the prevailing high level of existing plant competition, and in the cooler subtropics, growth has been observed to be shorter and less aggressive compared to growth in tropical areas [7]. With regard to supporting growth media, this woody shrub can thrive in a range of soil types including heavy black cracking clays, sandy clays and coarse siliceous river sand [7,37].
In Australia, high seed production and greater life expectancy are observed when the species becomes established in black cracking clays, whilst high seed longevity is observed in sandy clays [7,37], but some variations across invasion sites have been noted [26].

Plant Morphology and Characteristics

Mimosa pigra can grow up to four to six meters in height and create dense stands with an average density of one plant per square meter [27,38]. The stems contain long, broad-based prickles, which are approximately 7 mm in length [7]. Its leaves are bipinnate and edged with a setulose margin that has parallel, mid-ribbed venation [7]. These leaves are also sensitive to touch through the pinnules, pinna rachises and the petiole [7]. The flowering period of M. pigra occurs mostly in late spring to autumn (Table 1), when it produces thousands of pink flowers, which are 2 cm in diameter and contain approximately 100 flowers per axillary head [27,39]. Upon successful pollination, each flower head can produce up to seven pods containing 21 seeds [27]. This process of ripened seed production from the flower buds typically takes five weeks [7], with the ripe seeds being oblong and brown to olive green in colour [7]. Differences in leaf and pod morphology are evident depending on the country in which the species is found and on seasonal weather variation, with broader pods being observed in Thailand compared with comparatively slender pods found in Australia [7]. To aid dispersal, the hirsute pods break into single-seeded segments that are partially dehiscent, allowing them to remain afloat for an indefinite period [38]. Research has also shown that plant morphology can differ when the species is under stress from abiotic and biotic factors [40]. Research by NurZhafarina and Asyraf [40] highlighted that M. pigra shows high morphological variation when faced with intraspecific competition. In fact, habitats with a high species density often result in M. pigra growing taller and producing fewer viable seeds, whereas a habitat with low species diversity results in M. pigra becoming sturdier and producing more viable seeds [40]. This suggests that competition can significantly influence the species' morphology and overall growth and competitive performance [40,41]. Mimosa pigra is a common invader in wetlands and flooded areas because it can produce adventitious roots near the soil surface as a defence against anaerobic waterlogged conditions [42], but it can also resist drought conditions, which increases its invasive ability [6]. Having a low nutrient requirement, M. pigra can grow in a wide range of soil types including sand, red and yellow alluvial soils, silty loams and heavy black cracking clays [43]. The average growth rate of M. pigra grown under optimum conditions is 1 cm per day, and it is predicted to double its presence in an infested area within a year [6]. Its rapid maturity and seed production during the first year of growth contribute to its invasive potential [38]. Depending on the environmental and growth conditions, the average seed production rate per year is estimated to range between 9000 and 12,000 seeds per square meter [37], but the most productive plants observed in the field produce over 220,000 seeds per year [44]. The longevity of M. pigra seeds within the soil can extend up to 23 years, although this is highly dependent on and variable with the soil type and depth of seed burial [4].
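To put the quoted seed-rain rates into management-relevant magnitudes, the short sketch below simply scales them to a larger infested area; the 5-hectare area is an example value, not a figure from the literature cited:

```python
# Scale the quoted seed production rates (seeds per square meter per year) to a
# larger infested area; the 5 ha example area is illustrative only.
M2_PER_HECTARE = 10_000

def annual_seed_rain(seeds_per_m2_per_year: float, hectares: float) -> float:
    return seeds_per_m2_per_year * hectares * M2_PER_HECTARE

for rate in (9_000, 12_000):  # range reported for typical field conditions
    total = annual_seed_rain(rate, 5)
    print(f"{rate} seeds/m2/yr over 5 ha -> {total:.1e} seeds per year")
```

Magnitudes on the order of 10^8 seeds per year entering a seed bank that can persist for up to 23 years illustrate why single control events are unlikely to be sufficient.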
The bristled seedpods can assist the seeds' dispersal over long distances by becoming attached to animals or humans, and seedpods are also commonly transported by moving water bodies [42], where the buoyant pods are supported by the surface tension of the water [7]. Other dispersal routes include seeds entrapped in soil or mud particles that adhere to agricultural vehicles [38], and grazing animals passing dung containing M. pigra seeds [35]. The plant is capable of resprouting from remaining stumps after severe pruning [45]. It is also estimated that 90% of mature M. pigra plants and 50% of seedlings can regrow if exposed to moderate fire events [4]. This emphasises the need for ongoing and repeated management of the species, as a single control event such as fire may not be sufficient. It would also be of value for future research to examine the seed germination requirements of M. pigra across populations from different climatic regions. Such information would allow land managers to understand which factors facilitate the germination of M. pigra and would help to guide suitable and confident decisions regarding control of the species in its early stages of development.

Environmental and Social Impact

Due to its invasive character, M. pigra poses a major problem for the conservation of tropical ecosystems. Once it is established in the landscape, it becomes the dominant species and prevents the establishment of other species within the understory [2]. This dominance can severely alter the vegetation and structure of floodplains and swamps within the region [2]. These aggressive populations of M. pigra out-compete native herbaceous plants for light, moisture and nutrients, whilst dense stands grown under native tree canopies can also prevent the seedling establishment of these trees by limiting essential light penetration [2,4]. It has also been reported that M. pigra possesses allelopathic properties, containing the phytotoxin mimosine in addition to other phenolic, tannin and flavonoid compounds [13,14]. These compounds inhibit adjacently growing vegetation, ultimately giving M. pigra a competitive advantage. Alternatively, research has suggested that these compounds could be extracted as an aqueous solution and applied to control various other weed species such as Echinochloa crus-galli L. (barnyard grass) and Lolium multiflorum L. (Italian ryegrass) [14,46]. Further research is nonetheless required to investigate the effects of these phytotoxic compounds produced by M. pigra on a range of native species. These allelopathic properties could also help to explain the successful invasion of M. pigra across various environments around the world. The altered floral diversity and hydrology of M. pigra-dominated areas also affect the living conditions of native fauna, and losses of habitat, breeding sites and fruit trees have already been negatively related to the numbers of native fauna in many M. pigra-affected areas [2,4]. Mimosa pigra not only affects the biodiversity of an area but also impacts the socio-economy of a community [4,5,42]. Day-to-day human recreational activities and tourism opportunities that depend on accessible water bodies, in addition to agricultural requirements such as available drinking water for cattle and irrigation for crops, are greatly threatened by M. pigra invasion [4].
Dense stands of M. pigra can block roads and pathways, which can limit access to croplands, water bodies and grazing areas [5,42]. Grazing animals rarely feed on M. pigra, which contributes to its uncontrolled growth and spread into new areas [5,42]. It has even been reported that the establishment of this invasive species has significantly reduced the available grazing land in Zambia and, as a result, milk and livestock production has been heavily impacted [5]. Consequently, the disruption to livestock production in these communities has contributed to significant economic loss, illness and increased death rates [4,5,42]. In Africa, the seasonal floodplains have traditionally provided many communities with essential services including fishing, seasonal cropping, renewable fuelwood supplies and rich grazing for livestock [5,47]. These services are expected to be heavily threatened by the increasing invasion of M. pigra [5,47], impacts that have also been evident in Australia [2], Cambodia [3,40] and Vietnam [48], which emphasises the extent and distribution of this problem.

Control Measures

Whilst the control of M. pigra is usually focused on dealing with highly infested areas, it is also recommended that management activities be centred around isolated or smaller populations [49]. Such activities help to prevent the establishment of dense monocultures and reduce the future costs associated with management. Nevertheless, it remains critically important to control densely infested areas, as they can be a large source of new seeds, which are known to be long-lived and can remain viable for up to 23 years [4,26]. Whilst attempts to control M. pigra infestations have centred on chemical and physical control approaches, biological control has also shown promising signs of controlling the species [19,50]. In conjunction with these approaches, managers have also been searching for native plant species that could create strong competitive interactions with M. pigra to reduce and suppress its growth [51,52]. It has also been suggested that vector control should be introduced to track and eliminate M. pigra seed dispersal as a method of prevention. It is therefore generally accepted that integrating existing control measures will result in greater efficiencies [13,20,53,54], and in Tables 2-4, the most commonly used control measures that can be applied at different growth stages of M. pigra are shown.

Table 2. Physical control measures for M. pigra.
Ploughing (seeds: N/A): uproots the whole plant or the remaining root parts resulting from stem cuts, thus preventing regrowth; provides better seedbed establishment for pasture. [55,57,58]
Stick raking (seeds: N/A): equipment attached to a bulldozer or tractor removes the entire stump and root system with minimal soil disturbance. [55]
Chaining (seeds and seedlings: N/A): a heavy chain is pulled between two bulldozers, physically removing mature plants; suitable for use during the wet season for dense infestations. [55,58]
Chopper rolling (seeds: N/A): a dense drum equipped with blades is pulled behind a tractor, knocking down and macerating the plants. [55]
Burning (seedlings: N/A): can either destroy or stimulate seed germination, and should therefore be followed by herbicide treatments; plants are difficult to burn when green, and burnt plants can regrow from buds at the stem base. [20,57,59]

Table 3. Herbicides used against M. pigra.
Triclopyr butoxyethyl ester (seeds: N/A): effective when plants are young. [48]
Metsulfuron methyl (seeds: N/A): most effective when plants are young and when used as aerial control for infestations. [55]
Glufosinate ammonium (seeds: N/A): should be applied when the plant is actively growing; foliage should be covered thoroughly. [60]
Tebuthiuron (seeds: N/A): should be applied before seed set when the plant is actively growing; can be used as both a soil and an aerial application; higher rates are required on dense growth or heavy clay soils. [1,7,61]
Fluroxypyr (seeds: N/A): foliar application when actively growing; basal bark or cut stump application when mixed with diesel. [20,53,55]
Dicamba (seeds: N/A): aerial control for infestations and foliar application for actively growing plants. [55]
Hexazinone (seeds: N/A): not recommended for continuous use in large areas. [55]

Physical Control

As reported in the study of Cook et al. [50], cutting, hand-pulling and burning can be usefully implemented as physical control measures against incipient outbreaks of M. pigra (Table 2). In the case of larger infestations, bulldozing, chaining and ploughing can be used [58], although it should be considered that native species and soil conditions may be altered by these actions. Notwithstanding the success of these physical methods, the implementation of follow-up control measures is strongly advised due to the regenerative success of the fragments produced [72]. Moderate burning has been observed to be ineffective against green M. pigra plants, and if such a stand is subjected to fire, it can regenerate from bud regrowth at the stem base. In addition, mild fire stimulates the germination of M. pigra seeds, and hence burning can enhance seed germination [59]. However, direct application of gelled gasoline, or, in dense monospecific stands, aerial application of this intense-burning fuel, has been reported to result in the destruction of M. pigra [59]. When conducting planned burning of M. pigra, the season of burning is a critical factor, as it has been shown that floods occurring immediately after burning are favourable for preventing M. pigra regeneration [20]. Plants or plant remnants that are fully submerged after a fire drown within three months, unless some leaves remain above the water level [73]. If, however, the time between the burn and the flood allows the M. pigra plants to establish and grow beyond the height of the flooding waters, follow-up treatments will be required to assure successful control [59].

Chemical Control

The primary method of controlling M. pigra is with the use of herbicides, and in Australia, Malaysia and Thailand, large numbers of herbicides have been tested against M. pigra, with many proving effective [7,13,42,43,48,74-76]. In Australia, more than 21 herbicides, representing different application strategies including spraying and stem injection together with soil application, have been tested [7]. Among these herbicides, 2,4,5-T, tebuthiuron, fluroxypyr, metsulfuron-methyl (74223-64-6), dicamba, glyphosate and hexazinone have been previously used [55]. Aerial herbicide spraying can also be reasonably effective when conducted in the wet season, but reports suggest that it might not result in 100% plant mortality [72]. In addition, large-scale herbicide application warrants careful consideration, as it may contribute to further environmental pollution, especially near waterways and native species.
In this regard, future research on the management of M. pigra should consider integrating a range of techniques, such as biological control, burning, herbicide application or manual removal, to minimise chemical exposure to the environment and provide more reliable control [20]. Applying dicamba, glyphosate, hexazinone, imazapyr, triclopyr, triclopyr + picloram and triclopyr + picloram + 2,4-D to the cut stumps of severed M. pigra has also shown success in controlling the species [56]. 2,4-D was the primary herbicide used in the 1960s and 1970s but required repeated applications to combat new regrowth [56]. When public health concerns arose in the mid-1980s [42], new herbicide options were explored with different rates and application methods; however, it is important to note that the effectiveness of control is known to be highly dependent on the season of application [56]. Dry-season application inevitably results in more regrowth, and thus higher concentrations are required for satisfactory control, but dicamba and hexazinone have been observed to be highly effective in cut stump applications during both dry and wet seasons [7]. Targeting the plant's active growing season will also enhance herbicide uptake and efficacy, resulting in better control of the weed. For basal bark herbicide applications, triclopyr, triclopyr + picloram, dicamba and 2,4,5-T plus picloram, either as a diesel mix or in an aqueous solution, are recognised as potential herbicides [13]. Compared to basal bark application, reduced efficacy was observed when herbicides were applied as stem injections into M. pigra [7]. It is also important to note that cut stump and similar manual herbicide application methods can be costly and time-consuming for large infestations [15,48] and should therefore only be considered for smaller or isolated populations. Additionally, seeds are likely to regenerate from the seedbank following the removal of mature plants, so follow-up herbicide applications or manual removal will be necessary. When herbicide control measures were first implemented in the Northern Territory of Australia, 2,4,5-T was applied as a foliar spray in a mix with either water or diesel [42]. Picloram plus 2,4,5-T mixed with diesel was also used as a basal bark spray or as a foliar application [42,77], and in 1980, glyphosate, delivered by a high-volume foliar sprayer, was introduced for control in town areas [42]. The residual herbicide hexazinone was also used as a soil application to reduce the emergence of M. pigra [25]. With respect to herbicide treatments in general, application time is regarded as critical. In Australia, 2,4,5-T and tebuthiuron are traditionally applied before the floodplains are inundated; when this application defoliates the M. pigra plants, fluroxypyr is then applied to any surviving plants [78]. In Thailand, bromacil or bromacil + diuron is recommended for application on non-agricultural lands and dam walls, while fosamine ammonium (25954-13-6) is used as a foliar spray along roadsides, canals and water reservoirs [7]. Foliar application of dicamba has also been recommended for non-agricultural lands, water canals with a water depth greater than 1 m, and roadsides. Glyphosate is also recommended for application in all the above-mentioned M. pigra habitats, with the necessary precautions taken when applying near water bodies. Glyphosate can also be used on agricultural lands before cropping takes place or after the crops are harvested [7].
Tebuthiuron is a residual herbicide that is absorbed through the roots, and it is therefore advised to apply this compound while the plant is in its actively growing phase [7,61]. According to the study by Lane [61], tebuthiuron has not been effective on M. pigra seedlings, which showed a survival rate of 43%. Fluroxypyr and metsulfuron-methyl are recommended for aerial application, as large dense stands of M. pigra are most likely to require the use of an aircraft to gain sufficient access [79]. The efficacy of fluroxypyr is evident in the study of Paynter and Flanagan [20], where its application resulted in significant control; in addition, its selective action on dicotyledons allowed monocotyledon species to compete favourably with any seedling survivors. Given the mediocre effectiveness of some herbicides and the aggressive nature of M. pigra, it is recommended that the infested area be subjected to intense fire after chemical treatment to minimise any regrowth from the remaining plants [79].

Biological Control

Numerous natural enemies of M. pigra have been identified in its native range [62]. As with other attempts at biological control, significant attention has been given to each attacking agent's host specificity, and even in the face of significant evidence for a species' ability to attack and damage M. pigra plants, there must be compelling information regarding the unlikelihood of the agent affecting other vegetation before it is introduced and released into a new environment [62]. Once an agent is introduced, careful monitoring of its survival, distribution and abundance is a critical factor in the evaluation phase of biological weed control [64,80,81]. Notwithstanding these concerns, due to the high costs related to chemical and physical control of M. pigra, biological control is widely regarded as providing the most effective long-term control strategy in Australia [42,67]. The first exploration of natural enemies of M. pigra was conducted in 1980 in Brazil [19]. There have been many introductions of agents since this time due to the strategy's promising potential for controlling established stands [67]. A significant reduction of the M. pigra seed bank under thick plant cover [20,63,78,82] and a noticeable decline in density as a result of the action of biological agents have been observed [27]. According to these studies, Carmenta mimosa was identified as the most damaging biological agent for M. pigra [20,53,63]: reduced seed production [20] and canopy opening caused by high densities of C. mimosa have reduced the competitiveness of M. pigra stands against other vegetation, especially at the stand edges [63]. The first insects introduced for M. pigra biological control in Australia were Acanthoscelides quadridentatus and A. puniceus, two Mexican seed-feeding beetles [40]. They were released in 1983 and 1984 in Australia and Thailand, respectively [40]. In addition, Chlamisus sp., which feeds on the leaves and bark of M. pigra, was introduced from Brazil and released in Australia and Thailand in 1985 [1]. The stem-boring moths Neurostrota gunniella and Carmenta mimosa were also released [7], and since then, several other biological agents for M. pigra have been introduced.
The list includes the beetle species Acanthoscelides puniceus, Chlamisus mimosae, Malacorhinus irregularis and Coelocephalapion pigrae, and the moth species Neurostrota gunniella, Carmenta mimosa and Macaria pallidata [36,83]. Fungi such as Diabole cubensis and Phloeospora mimosae-pigrae and beetles such as Acanthoscelides quadridentatus, Coelocephalapion aculeatum, Chalcodermus serripes and Sibinia fastigiata have also been introduced as biological agents for M. pigra but, at this time, they have not succeeded in becoming established [20] (Table 4). Future research on the biological control of M. pigra should also consider integrating other control strategies such as burning, herbicide application and physical control for more reliable control. Research has suggested that integrating a range of methods along with biological control helps to improve success [54], although, in the case of M. pigra, further research is required to discover which combinations are most efficient.

Future Management Considerations

As emphasised above, M. pigra has a significant impact on biodiversity and human socio-economic activities. In terms of control, addressing small, isolated outlier populations at their earliest detection and implementing integrated management strategies is currently suggested to be the most effective approach to controlling this woody shrub. In concert with these actions, given the high invasiveness of the species and its long seed viability, continuous monitoring of any treated site is advisable. Of importance is the observation that M. pigra is susceptible to grass competition [72]. In this regard, planting native grasses or creating competitive pastures in areas at risk of invasion by M. pigra could be a viable option for suppressing its growth. For dense M. pigra stands, aerial herbicide application will open up the canopy, allowing competing herbaceous plants to grow. Intense fire can then be used to clear the area, followed by the introduction of competitive pasture species. Even though the seed germination of M. pigra is known to be stimulated by fire, seedling growth will be suppressed by the growth of competitive pasture seedlings [72]. Implementing biological control measures has shown significantly promising results compared to other control measures, but notwithstanding this, further research needs to be carried out with simulated abiotic and biotic environmental stressors to identify their influence on M. pigra growth and establishment [84,85]. Such studies will provide valuable information on the optimum conditions for growth, and this will lead to new effective management practices, since continuous, dynamic and focused management is required to mitigate the impacts associated with M. pigra invasions. Despite advances in management practices and awareness, M. pigra remains a globally invasive species, and extensive and up-to-date research and control experiments need to be conducted in order to suppress the impact caused by this tropical woody shrub. Early detection protocols and the identification of isolated M. pigra populations are also critical steps in planning the successful long-term control of the species [21]. This could be achieved using drone technology to map and identify difficult-to-reach areas that are infested with M. pigra.
Conclusions

This review highlights that an integrated and long-term management approach is necessary to control M. pigra and reduce its economic, environmental and social impact. Because this species is commonly found close to water bodies or in difficult-to-access terrain, a significant investment of finances and labour is often required. However, this investment can be reduced if small, isolated populations are identified and immediately controlled before they form dense monocultures. Future lines of research should aim at a greater understanding of the life cycle and susceptible growth stages of M. pigra, since this understanding is not yet at a satisfactory level. This could be achieved by further investigation of the biology of the species across a range of climatic and environmental conditions. Such a level of detail will allow for greater confidence when designing long-term control approaches for the species at both localised and landscape scales. Given the scarcity of the available relevant global literature, this review is anticipated to provide the first step for future studies toward building a more comprehensive global M. pigra control schedule.
Kinetics of blood cell differentiation during hematopoiesis revealed by quantitative long-term live imaging

Stem cells typically reside in a specialized physical and biochemical environment that facilitates regulation of their behavior. For this reason, stem cells are ideally studied in contexts that maintain this precisely constructed microenvironment while still allowing for live imaging. Here, we describe a long-term organ culture and imaging strategy for hematopoiesis in flies that takes advantage of powerful genetic and transgenic tools available in this system. We find that fly blood progenitors undergo symmetric cell divisions and that their division is both linked to cell size and is spatially oriented. Using quantitative imaging to simultaneously track markers for stemness and differentiation in progenitors, we identify two types of differentiation that exhibit distinct kinetics. Moreover, we find that infection-induced activation of hematopoiesis occurs through modulation of the kinetics of cell differentiation. Overall, our results show that even subtle shifts in proliferation and differentiation kinetics can have large and aggregate effects to transform blood progenitors from a quiescent to an activated state.

Introduction

Because of their key role in tissue maintenance and the inherent risk associated with their unchecked proliferative capacity, stem cell behavior is tightly regulated (Klein and Simons, 2011; He et al., 2009). This regulation is often mediated by controlling the biochemical composition of the microenvironment surrounding the stem cells, including the presence of signaling molecules and metabolic cues (Jones and Wagers, 2008; Morrison and Spradling, 2008). Additionally, physical inputs, such as mechanical cues and the architecture and topography of the extracellular matrix, also play an important role in controlling stem cell behaviour (Chacón-Martínez et al., 2018; Ahmed and Ffrench-Constant, 2016; Díaz-Torres et al., 2021). The complex and precisely constructed in vivo microenvironment of stem cells can be very challenging to mimic in the laboratory and, when possible, it is best to study stem cells in their endogenous environment. There has been significant emphasis over the last decade on the development and optimization of imaging tools and methodologies to allow live imaging of stem cells in their in vivo environment (Park et al., 2016; Rompolas et al., 2012; Martin et al., 2018; Sheng and Matunis, 2011). Hematopoiesis, the production of the cellular components of blood, is a well-known paradigm for stem cell regulation through a niche (Huang et al., 2007; Mandal et al., 2007; Tokusumi et al., 2010). Drosophila provides a powerful and genetically tractable model to study hematopoiesis (Banerjee et al., 2019). During fly hematopoiesis, a specialized population of blood progenitors gives rise to blood cells. Drosophila blood progenitors exhibit many stem-cell-like properties, such as being controlled by a specialized population of cells that act as a hematopoietic niche, the Posterior Signaling Centre (PSC) (Krzemień et al., 2007; Mandal et al., 2007; Tokusumi et al., 2010; Cho et al., 2020). They can give rise to three highly differentiated blood cell types: plasmatocytes, lamellocytes, and crystal cells (Jung et al., 2005). However, whether they are true stem cells has not been conclusively resolved (Banerjee et al., 2019).
Vertebrate hematopoietic stem cells (HSCs) have been associated with several well-defined attributes: self-renewal, the ability to differentiate into all blood lineages, and dependence on a niche (Huang et al., 2007). Drosophila blood progenitors have been shown to exhibit most of these criteria but, so far, there has been no clear evidence of either self-renewal or asymmetric cell division in the progenitors (Banerjee et al., 2019). Nevertheless, previous studies provided evidence for the existence of HSCs in the lymph gland (LG) (Cho et al., 2020; Dey et al., 2016; Minakhina and Steward, 2010). For example, a population of cells in first instar larvae was identified that gave rise to progenitors in later larval stages and behaved in ways consistent with the idea that they were equivalent to vertebrate HSCs (Dey et al., 2016). Another study employed a MARCM-based lineage tracing technique to identify distinct subpopulations of progenitors within third instar larvae, one of which exhibited characteristics such as 'persistence' that were consistent with an HSC fate (Minakhina and Steward, 2010). The main site of hematopoiesis in Drosophila larvae is the primary lobe of a specialized organ known as the LG (Banerjee et al., 2019). The primary lobe contains three distinct zones: the PSC niche, the medullary zone (MZ), which houses the blood progenitors, and the cortical zone (CZ), which contains differentiated blood cells (Banerjee et al., 2019; Mandal et al., 2007). The progenitors in the MZ express markers such as the JAK-STAT receptor Domeless (Dome), while differentiated blood cells express unique markers depending on their terminal mature blood cell fate. For example, plasmatocytes express the marker P1, while crystal cells express the marker Lozenge (Lz) (Banerjee et al., 2019; Jung et al., 2005). In addition, a small population of cells has been described that lacks the expression of terminal differentiation markers such as P1 but expresses both the progenitor marker Domeless and early differentiating blood cell markers such as Hemolectin (Hml) and Peroxidasin (Pxn) (Blanco-Obregon et al., 2020). These P1−, Dome+, and Pxn+/Hml+ cells were typically found at the boundary between the MZ and CZ and were proposed to represent a separate population of cells, commonly referred to as intermediate progenitors (Blanco-Obregon et al., 2020; Sinenko et al., 2009; Spratford et al., 2021; Girard et al., 2021). Although this population is currently not well characterised, it is thought to contain cells in a transitional state as they go from a relatively quiescent multipotent state to a terminally differentiated state (Krzemien et al., 2010; Banerjee et al., 2019). More recent data have supported the view that, rather than being a homogeneous population, progenitors in the MZ are heterogeneous (Blanco-Obregon et al., 2020; Cho et al., 2020; Baldeosingh et al., 2018; Girard et al., 2021). Single-cell transcriptomic analysis of the LG revealed a surprising level of heterogeneity of the developing blood cells and uncovered novel blood cell types including adipohemocytes, stem-cell-like blood progenitors, and intermediate progenitors (Cho et al., 2020). Moreover, a distinct subpopulation of progenitor cells in the LG has recently been identified and termed 'distal progenitors' (Blanco-Obregon et al., 2020). These cells, named after their location at the distal part of the MZ near the boundary with the CZ, express some progenitor markers (Dome) but not others (Tep4) (Blanco-Obregon et al., 2020).
A further subpopulation of distal progenitors, known as 'committed progenitors', is distinguished by its expression of the plasmatocyte marker gene eater but not the mature blood cell marker Hml (Blanco-Obregon et al., 2020). The population of progenitors close to the heart tube additionally exhibits distinct features in terms of regulation by Hh signalling, a key regulator of blood cell differentiation in the LG (Baldeosingh et al., 2018). These data suggest that, rather than being composed of a simple and clearly defined population of progenitors, the LG contains multiple subpopulations of progenitors at various stages and states of differentiation. These observations show the limitations of using fixed-tissue approaches to study blood progenitor fate, which is inherently a dynamic and evolving cell state. The main function of mature blood cells in Drosophila is to fight infection and assist in wound healing (Evans et al., 2003; Khadilkar et al., 2017; Vlisidou and Wood, 2015). It is therefore not surprising that in healthy intact flies, few mature blood cells are made in late larval stages (Banerjee et al., 2019). During early larval stages, blood progenitors undergo expansion and are typically found in S phase of the cell cycle (Dey et al., 2016). However, once progenitor expansion is completed in the late larval stages, they for the most part stay in the G2 phase through the action of dopamine (Sharma et al., 2019; Kapoor et al., 2022), although some intermediate progenitors remain in S phase (Sharma et al., 2019). Upon infection, there is a strong and rapid induction of mature blood cell production, which depends on the type of immune challenge and involves large-scale differentiation of lamellocytes, crystal cells, or plasmatocytes (Banerjee et al., 2019; Khadilkar et al., 2017; Letourneau et al., 2016). How this induction is mediated, for example, whether it is primarily driven by changes in the cell cycle in the progenitors, predominantly by altered dynamics or patterns of differentiation, or by both factors in equal measure, is currently unclear. The key to answering these important remaining questions is to develop the ability to visualize and track fly hematopoiesis for extended periods in real time. Here, we describe analysis of proliferation and differentiation patterns observed during long-term live imaging of intact LGs in healthy and infected larvae. This analysis utilises whole-organ culture methodology and quantitative imaging tools that we developed, optimised, verified, and applied. By tracking markers for cell proliferation and division in real time, as well as cell fate and differentiation, we are able to confirm that blood progenitors undergo symmetric cell divisions. Using quantitative automated image analysis of progenitors in healthy and infected flies, we elucidate the dynamics and spatiotemporal patterns of blood cell differentiation and proliferation. We describe how the modulation of key differentiation and proliferation behaviours underlies the activation of mature blood cell production following infection. These results provide a novel system-level framework for understanding how Drosophila hematopoiesis is regulated in the context of the intact whole organ in real time.

Results

Development and optimization of a long-term ex vivo whole organ LG culture and imaging technique

In order to image fly hematopoiesis in real time, we developed a whole organ culture system for the LG.
A large number of protocols for culturing various organs were explored and, through trial and error, we found that optimal results were obtained with a modified version of protocols used for imaging whole testes, CNS, and wing imaginal discs (Fairchild et al., 2015; Zartman et al., 2013; Reilein et al., 2018; Sheng and Matunis, 2011; Anllo et al., 2019; Tsao et al., 2016; Martin et al., 2018; Kakanj et al., 2020; Morris and Spradling, 2011; Kiepas et al., 2020; Icha et al., 2017; Greenspan and Matunis, 2017; Zhang et al., 2018; Dai et al., 2020; Koyama et al., 2020). This method used Schneider's cell culture medium and relied upon three key features that we found to dramatically improve outcomes: (1) dissection methodology, wherein the LG was removed while maintaining its association with the CNS, ring gland and heart tube, with which it was then co-cultured; (2) the addition of intact larval fat bodies to the culture; and (3) the use of spacers to prevent mechanical force from being applied to the tissue by the cover slip and agar pad (see Materials and methods). With this technique, LG ultrastructure and proliferative capacity were maintained, and we found we could successfully culture LGs overnight (Figure 1A-D, Figure 1-figure supplement 1, Video 1). We used a genetically encoded marker for oxidative stress, gstD-GFP (Sykiotis and Bohmann, 2008), to show that oxidative stress in the LG does not increase substantially over the course of 13 hr of ex vivo culture and imaging (Figure 1E). As a control, and to illustrate the ability of gstD-GFP to detect oxidative damage, we omitted the fat bodies from the culture medium and observed a large increase in oxidative stress in the LGs over time (Figure 1E; Figure 1-figure supplement 2A-B). Furthermore, direct comparison of LGs kept in ex vivo culture conditions and physiological in vivo conditions over the course of 12 hr showed comparable levels of oxidative stress (Figure 1-figure supplement 2C). Similarly, the cell death stain Sytox green was used to monitor cell viability (Martin et al., 2018) and showed that, while there is a very small baseline level of cell death in the progenitors and in the whole LG, this baseline did not increase substantially over the course of LG culture (Figure 1F; Figure 1-figure supplement 2D-E, Video 2). As a control, and to illustrate the ability of Sytox green to monitor cell viability, we cultured LGs in PBS instead of Schneider's medium in the presence of fat bodies, which led to a marked increase in cell death over time (Khadilkar et al., 2020; Araki et al., 2019; Yu et al., 2021; Chiu and Govind, 2002; Mondal et al., 2011). Overall, we did not detect any harm or damage to the LG caused by the ex vivo culture technique.

Figure 1. Long-term ex vivo culture system for extended imaging of developing LGs. (A) Quantification of the percentage of videos where proliferation was observed versus videos where no proliferation was observed under four different conditions: co-culture with or without the presence of fat bodies (percentage calculated from n=29 videos) and with or without placing a spacer (percentage calculated from n=19 videos). (B) Quantification of the duration of imaging (in hr) for individual LG videos (n=25 videos, on average 15.96 hr per video; see Materials and methods). (C) Schematic showing the experimental setup for the multi-organ co-culture system in a glass bottom dish. Organs in the culture include the central nervous system (CNS), ring gland, LG, heart tube (or dorsal vessel), and fat bodies. (D) Representative DIC image of an ex vivo cultured LG (blood progenitors labelled with dome-MESO-GFP in green, mature hemocytes labelled with eater-dsRed in red). Genotype of the LG was dome-MESO-GFP; eater-dsRed. (E) Quantification of oxidative stress levels in whole LGs cultured overnight under two conditions: in Schneider's medium (SM) with fat bodies (n=13 primary lobes tracked from 8 videos [each 13 hr]) and in SM without fat bodies (n=7 primary lobes tracked from 4 videos [each 13 hr]). Genotype of the LG was gstD-GFP. (F) Quantification of blood progenitor viability during long-term live imaging. In total n=1109 progenitors (marked by Tep4-Gal4-driven dsRed) were tracked from 4 videos (each 12.5 hr). Genotype of the LG was Tep4-Gal4>UAS-dsRed. (G) Schematic of the Fly-FUCCI system used to track cell cycle progression using distinct fluorescent markers in combination (see Materials and methods). (H) An example showing the G2 to M to G1 transition of a blood progenitor over the course of approximately 60 min. (I) Quantification from an example, using Tep4-Gal4-driven FUCCI, to visualize an S to G2 to M progression of a blood progenitor. Each dot represents a time point; the decrease in intensity during mitosis was caused by nucleus breakdown. Genotype of the LG was Tep4-Gal4>UAS-FUCCI. Scale bars in (D) and (H) represent 50 and 10 μm, respectively. Error bars indicate S.D. from the mean. S medium in (A) and SM in (E) denote Schneider's medium supplied with 15% FBS and 0.2 mg/mL insulin (see Materials and methods). See also Videos 1-3.

Video 1. Long-term live imaging of a LG at single cell resolution. Representative long-term live imaging video of a primary lobe from an ex vivo LG showing blood progenitor divisions (highlighted by yellow ROIs in the video) over a cultured period of 13 hr. Blood progenitors were marked by dome-Gal4-driven membranous GFP (green). Mature hemocytes were marked by eater-dsRed (red). Part of the ring gland (labelled as RG in the video) was also captured. The LG was obtained from an early 3rd instar larva (of genotype dome-Gal4>UAS-mCD8-GFP, eater-dsRed) raised at 25 °C, dissected, immediately mounted and imaged. Scale bar: 10 µm.

Video 2. Long-term monitoring of blood progenitor viability during overnight ex vivo culture. Representative video showing only three blood progenitors undergoing cell death in a live LG cultured ex vivo over a period of 12 hr. The blood progenitors were marked by Tep4-Gal4-driven dsRed (red). Dying cells were marked by Sytox green dye (green). The LG was obtained from an early 3rd instar larva (of genotype Tep4-Gal4>UAS-dsRed) raised at 25 °C, dissected, immediately mounted and imaged. Scale Bar: 15 µm. https://elifesciences.org/articles/84085/figures#video2

To test and demonstrate the ability of our culture system to allow tracking of cell behaviour in the LGs over extended periods of time, we utilised the Fly-FUCCI system (Zielke et al., 2014) to monitor cell cycle progression in the progenitors. Fly-FUCCI is based on the expression of fluorescent protein-tagged degrons from the Cyclin B and E2F1 proteins, which are degraded during mitosis or at the onset of S phase, to distinguish the G1, S, and G2 phases of interphase (Zielke et al., 2014).
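As a minimal sketch of the Fly-FUCCI read-out logic described above (G1: GFP.E2f only; S: RFP.CycB only; G2: both reporters present), per-cell phase calls can be made from the two reporter intensities. The function and thresholds below are our own illustration, not the authors' analysis code, and would need calibrating against real background levels:

```python
# Hypothetical sketch of Fly-FUCCI phase classification from reporter
# intensities. G1 cells express only the GFP.E2f degron, S-phase cells only
# RFP.CycB, and G2 cells both; thresholds here are placeholders.

def classify_fucci_phase(gfp_e2f: float, rfp_cycb: float,
                         gfp_thresh: float = 100.0, rfp_thresh: float = 100.0) -> str:
    gfp_on, rfp_on = gfp_e2f > gfp_thresh, rfp_cycb > rfp_thresh
    if gfp_on and rfp_on:
        return "G2"            # both degrons present ('yellow')
    if gfp_on:
        return "G1"            # E2f degron only ('green')
    if rfp_on:
        return "S"             # CycB degron only ('red')
    return "M/undetermined"    # both degraded, e.g. during mitosis

# Example: one tracked progenitor across three time points (GFP, RFP).
track = [(250.0, 240.0), (30.0, 20.0), (260.0, 15.0)]
print([classify_fucci_phase(g, r) for g, r in track])  # ['G2', 'M/undetermined', 'G1']
```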
Expressing Fly-FUCCI using a progenitor-specific driver (Tep4-Gal4) allowed us to track in real time the cell cycle phase of individual progenitors, using colour as an indicator: green during G1, red during S phase, and yellow during G2 (Figure 1G-H; Video 3). We applied automated quantitative imaging tools (see Materials and methods) to track the trajectory of a single progenitor through the cell cycle in order to analyse the dynamics of the process (Figure 1I). The Fly-FUCCI system also allowed us to examine whether our ex vivo culture methodology recapitulated the in vivo behaviour of LGs. Specifically, we compared the cell cycle profile of LGs in vivo (in intact stage-matched larvae) and in ex vivo culture (Figure 1-figure supplement 3C). Notably, the cell cycle data we obtained using ex vivo cultured LGs showed ~20% of cells in G2, ~40% of cells in S, and ~40% of cells in G1, numbers that were similar to previous in vivo observations from mid-third instar larvae (Rodrigues et al., 2021). Finally, comparing proliferation under in vivo or ex vivo conditions by EdU labelling showed similar proliferative capacity in both conditions (Figure 1-figure supplement 3D-E). Importantly, there was no reduction in proliferation over long-term ex vivo culture, and our observation of ~100 EdU+ cells per primary lobe was in line with previous data collected in fixed LGs (Milton et al., 2014). Further evidence for the health of the ex vivo cultured LGs over the course of long-term imaging was provided by noting that the duration of mitosis remained largely consistent over time (Figure 1-figure supplement 3F). Taken together, the data illustrate that we can track the cell cycle and proliferation in the LG using our culture technique and validate our approach, as we did not detect any obvious cell cycle defects or variance from published data collected using fixed in vivo conditions (Rodrigues et al., 2021; Milton et al., 2014).

Video 3. Long-term tracking of cell cycle progression of blood progenitors in a wild-type LG. Representative video showing cell cycle progression of a blood progenitor (highlighted in a yellow ROI) in a wild-type LG over a period of 1 hr. The cell cycle indicator FUCCI construct was expressed in blood progenitors using Tep4-Gal4. The video tracked a G2-M transition (G2 phase: both GFP.E2f and RFP.CycB expressed) of a blood progenitor and the subsequent G1 progenies (G1: only GFP.E2f expressed). The green and red channels are presented separately on the right side of the video to visualize GFP.E2f and RFP.CycB levels in individual blood progenitors over time. The LG was obtained from an early 3rd instar larva (of genotype Tep4-Gal4>UAS-FUCCI) raised at 25 °C, dissected, immediately mounted and imaged. Scale Bar: 10 µm.

Blood progenitors in the LG undergo symmetric cell divisions

A key unresolved question about fly blood progenitors is whether they undergo self-renewal or symmetric cell divisions (Banerjee et al., 2019). We used long-term LG imaging to address this question directly and found multiple lines of evidence suggesting that blood progenitors undergo symmetric cell divisions. Cultured LGs expressing both the JAK-STAT reporter and progenitor marker dome-MESO-GFP (Oyallon et al., 2016) and the early differentiation marker eater-dsRed (Kroeger et al., 2012) were imaged. We observed many examples, in multiple videos from different LGs, of symmetric cell divisions in progenitors (34 dividing progenitors from 7 videos). Since these progenitors are identified as dome-MESO-GFP-expressing cells that do not express eater-dsRed (Figure 2A; Video 4), they are either core progenitors or distal progenitors (Figure 2-figure supplement 1A). These dividing core or distal progenitors maintained their dome-MESO+ eater-dsRed− fate in both daughter cells after cell division. To confirm this observation, we employed another way to label progenitors, using the dome-Gal4 driver line (Jung et al., 2005) to drive the expression of membranous GFP (Figure 2B; Video 1). We found that progenitor divisions were symmetrical, with the two daughter cells exhibiting similar sizes (Figure 2D-D'). Another hallmark of progenitor fate is a high level of JAK-STAT signalling activity (Oyallon et al., 2016). Quantifying the intensity of dome-MESO-GFP in daughter cells as a readout for activity of the JAK-STAT pathway showed that, following progenitor division, the daughter cells exhibit similar levels of signalling (Figure 2E-E'; see Methods). To account for the possibility that this reflects equal inheritance of the protein from the mother cell rather than equivalent maintenance of a progenitor fate, these experiments were done in the presence of eater-dsRed to confirm that neither of the daughter cells differentiated. Also supporting the equivalent maintenance model, tracking dome-MESO-GFP levels in daughter cells over extended periods of time showed that the marker levels remained stable and did not diverge in either cell (Figure 2-figure supplement 2). Taken together, the data provide compelling evidence that blood progenitors in Drosophila undergo symmetric divisions that produce two identical progenitor cells, conserved in both cell size and JAK-STAT signalling activity.

Blood progenitor division is linked to cell size and is spatially oriented

Since we were able to track in real time a large number of dividing progenitors, identified as dome-MESO+ eater-dsRed− cells, in the LG (Figure 3A-B), we were able to quantitatively analyse the kinetics of cell growth and division. We found that most dividing progenitors complete cell division in 40-70 min (Figure 3B; 57.74±27.58 min, n=63 progenitors). It has been shown that cell division is often coordinated with cell size and can be initiated by cells reaching a so-called 'critical cell size' (Lengefeld et al., 2021; Ferrezuelo et al., 2012). For dome-positive progenitors, division occurred once cells reached an average size of 72 μm² (Figure 3C left panel, 71.96±10.00 μm², n=13 progenitors). Upon cell division, two similarly sized progenitors are produced, which undergo rapid growth such that their combined size exceeds the size of the original mother cell 3 hr after division, making cell division a potential driver of LG growth (Figure 3C right panel). Analysis of the growth kinetics of the two daughter cells over time showed that they can grow by 20-30% in the first 4 hr after division (Figure 3D). As these experiments focused on the entire progenitor population, we sought to gain more detailed insight by labelling specific sub-populations of progenitors. First, we asked which progenitor sub-populations in the LG were mitotically active by constructing a fly line that carried the following markers: Tep4-QF>QUAS-mCherry, dome-MESO-GFP, and HmlΔ-dsRed. This allowed us to mark core progenitors (Tep4-mCherry+ dome-MESO-GFP+), distal progenitors (only dome-MESO-GFP+) and intermediate progenitors (dome-MESO-GFP+ HmlΔ-dsRed+).
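The three-marker labelling just described implies a simple gating scheme for assigning each tracked cell to a subpopulation. The sketch below is our own restatement of those marker definitions (with mature blood cells defined as Hml+ dome−, as used later in the paper); it assumes that marker on/off calls have already been thresholded upstream:

```python
# Sketch of subtype assignment from Tep4-QF>QUAS-mCherry, dome-MESO-GFP and
# HmlD-dsRed marker calls, mirroring the definitions given in the text.

def progenitor_subtype(tep4_mcherry: bool, dome_gfp: bool, hml_dsred: bool) -> str:
    if dome_gfp and tep4_mcherry and not hml_dsred:
        return "core progenitor"          # Tep4+ dome+
    if dome_gfp and not tep4_mcherry and not hml_dsred:
        return "distal progenitor"        # dome+ only
    if dome_gfp and hml_dsred:
        return "intermediate progenitor"  # dome+ Hml+
    if hml_dsred and not dome_gfp:
        return "mature blood cell"        # Hml+ only
    return "unclassified"

print(progenitor_subtype(True, True, False))   # core progenitor
print(progenitor_subtype(False, True, True))   # intermediate progenitor
```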
Video 4. Blood progenitors divide symmetrically in a wild-type LG. Representative video showing an example of a blood progenitor undergoing symmetric cell division over a period of 50 min (see also Video 1). The blood progenitors were marked by the JAK-STAT signaling activity reporter dome-MESO-GFP (green). The daughter cells are marked by yellow ROIs. The LG was obtained from an early 3rd instar larva (of genotype dome-MESO-GFP) raised at 25 °C, dissected, immediately mounted and imaged. Scale Bar: 10 µm.

Next, we found that the critical size for distal progenitors (dome+ Tep4−) was on average 76.08 µm², while for core progenitors (dome+ Tep4+) it was on average 63.81 µm² (Figure 3-figure supplement 1B). Our analysis therefore shows that distal progenitors have to reach a larger critical cell size than core progenitors before they can initiate mitosis (Figure 3-figure supplement 1B). As an internal control for changes in cell size, we selected a neighbouring cell that did not undergo mitosis as a comparison to the mitotic cell analysed (Figure 3-figure supplement 1B). In many niche-stem cell systems, the stem cells exhibit spatial polarisation with regard to the orientation of stem cell divisions (Martin et al., 2018). We therefore considered whether blood progenitor divisions were spatially polarised. We described the anatomical planes of the LG and the polarity of cell division using the anatomical axes of the LG expressed in a cylindrical coordinate system (Figure 3E-G), which describes the organ's anatomical structure with greater mathematical simplicity than other coordinate systems by allowing the radial direction from the dorsal-ventral axis to be defined explicitly with the use of trigonometry (Rood et al., 2019). The ρ-axis of the cylindrical coordinate system corresponds to the radial axis, which is parallel to the plane formed by the anterior-posterior and right-left axes of a larva and defines how far from the origin a given point lies; the θ-axis defines the absolute angle of the given point from the origin; and the z-axis corresponds to the dorsal-ventral axis (Figure 3E-G; see Methods) and defines the 'height' of the point on the now-defined ρ-θ (radial length-angle) plane. This coordinate system was chosen because divisions were found either to radiate out from the dorsal-ventral axis (ρ-mitosis) or to run along the dorsal-ventral axis (z-mitosis; Figure 3F). Using the cylindrical coordinate system, all ρ-mitoses can be compared directly, as the angle of the radial axis is a separate coordinate (compared to the usual Cartesian coordinates, where radial measurements rely on both the x- and y-axes). We found that approximately 90% of divisions occurred along the ρ-axis, and that these divisions took a shorter time to complete (on average 53.90 min for ρ-axis divisions versus 75.04 min for z-axis divisions; Figure 3H-J). Notably, the overall shape of the LG, which is 'longer' (roughly 300 µm) and 'wider' (roughly 150 µm) than it is 'thick' (roughly 40-60 µm), is consistent with such a polarised orientation of division. Divisions along the ρ-axis tended to occur in dividing progenitors located further away from the heart tube or PSC when compared to divisions along the z-axis (Figure 3-figure supplement 1C-E).
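In practical terms, the classification just described reduces to decomposing the vector between the two daughter-cell centroids into an in-plane (radial) component and a dorsal-ventral component. The following sketch is our own illustration of that decomposition, not the authors' algorithm; coordinates are in microns and the heart-tube direction is assumed to run along +x purely for the example:

```python
# Sketch of rho- versus z-mitosis classification: the displacement between
# daughter centroids is split into an in-plane (radial) component and a
# dorsal-ventral (z) component; the dominant component labels the division.
import math

def classify_mitosis(d1: tuple[float, float, float],
                     d2: tuple[float, float, float]) -> str:
    dx, dy, dz = (b - a for a, b in zip(d1, d2))
    rho = math.hypot(dx, dy)  # in-plane (anterior-posterior / right-left) component
    return "rho-mitosis" if rho >= abs(dz) else "z-mitosis"

def angle_to_heart_tube(d1, d2, tube_dir=(1.0, 0.0)) -> float:
    """In-plane angle (degrees) between the division axis and the heart tube;
    the tube is assumed to run along +x here for illustration."""
    dx, dy = d2[0] - d1[0], d2[1] - d1[1]
    theta = math.degrees(math.atan2(dy, dx) - math.atan2(tube_dir[1], tube_dir[0]))
    return abs(theta) % 180  # orientation only, ignoring direction

daughters = ((10.0, 10.0, 5.0), (18.0, 12.0, 6.0))
print(classify_mitosis(*daughters))               # rho-mitosis
print(round(angle_to_heart_tube(*daughters), 1))  # ~14.0 degrees
```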
To statistically confirm that ρ-axis divisions are less likely to occur parallel to the heart tube (see wild-type control in Figure 4L), we used a quantile-quantile (Q-Q) plot to compare the distribution of division orientations we observed to a randomised normal distribution (Figure 3-figure supplement 1F; see Materials and methods). This analysis showed that the orientation of ρ-axis divisions relative to the heart tube is biased, or polarised (Figure 3-figure supplement 1F). Overall, these observations uncover a previously unknown polarisation of progenitor divisions in the LG, as these divisions tend to occur more frequently along the plane formed by the anterior-posterior and right-left axes.

Figure 3 (partial legend). ...progenitors collected from 10 videos. The majority of blood progenitors spent an average of 33-73 min in mitosis. (C) Quantification of the cell area (μm²) of mother blood progenitors 10 min before mitosis (data highlighted in the left panel, each data point represents a mother cell) and the cell area summed from the two generated progenies 3 hr after mitosis (data highlighted in the right panel, each data point represents the summed cell area of two progenies; n=10 progenitors of genotype dome-Gal4>UAS-mGFP; eater-dsRed undergoing mitosis tracked over 3 hr). (D) Representative example selected from (C) showing the quantification of progeny growth (reflected by cell area, μm²). Left panel: real-time measurement of the cell area of the parent (or mother) cell, progeny 1, and progeny 2. Middle and right panels: changes in the cell area of progeny 1 and 2 over time, respectively. (E) Schematic showing the anatomical axes of a 3D LG (A: anterior, P: posterior, L: left, R: right, D: dorsal, V: ventral; blood progenitors marked by dome-MESO-GFP in green, mature hemocytes marked by eater-dsRed in red). The LG is shown following a convention established previously for a 3D representation of the fly CNS (Zheng et al., 2018). (F) Detailed schematic showing mitotic events happening on the ρ and z axes with respect to the anatomical axes. The concept of ρ and z axes is derived from the cylindrical coordinate system (as shown in G; see Methods). The 3D cell matrix was built using code from Geogram Delaunay3D. (G) Diagram showing the cylindrical coordinate system (ρ-, ϕ-, z-axes) compared to a Cartesian coordinate system (x-, y-, z-axes). (H) Time-lapse images from representative videos of progenitor mitotic events occurring along the ρ-axis over 36 min (top panel) or along the z-axis over the course of 45 min (bottom panel). Blood progenitors labelled with dome-MESO-GFP (green, LG genotype: dome-MESO-GFP; eater-dsRed). (I) Quantification of the durations (in minutes) of blood progenitor mitotic events occurring along the ρ-axis (n=54 progenitors) and z-axis (n=6 progenitors). p-value = 0.022 was determined using the Mann-Whitney-Wilcoxon test. * indicates p<0.05. (J) Pie chart showing the percentage of recorded blood progenitor mitotic events occurring along the ρ-axis and z-axis. The data in Figure 2A, C, E and (F-H) came from the same live imaging experiments, but different cells were analysed and presented.

Infection results in reduced cell proliferation in the LG

Following infection, the LG undergoes a dramatic change as the cellular immune response is activated and there is a large-scale induction of differentiation of mature blood cells (Khadilkar et al., 2017).
Increased production of mature blood cells could be achieved by a number of possible scenarios, including: (1) an increased proportion of progenitor cells undergoing cell division; (2) no change in the proportion of dividing progenitors but a faster cell cycle; (3) an increased proportion of progenitors undergoing differentiation; (4) no change in the proportion of progenitors undergoing differentiation, but faster differentiation; or (5) any combination of these options. We analyzed LGs from larvae infected with E. coli bacteria using a previously developed infection protocol (see Materials and methods; Khadilkar et al., 2017; Siva-Jothy et al., 2018). We labelled cell proliferation using EdU (a marker for proliferation; Figure 4A-E) or pH3 (a marker for mitosis; Figure 4-figure supplement 1A-D) and quantified proliferation and cell divisions in LGs from wild-type control and infected larvae. We simultaneously labelled different cell populations in the LG using dome-MESO-LacZ and HmlΔ-dsRed to identify blood progenitors (dome+ Hml−), intermediate progenitors (dome+ Hml+), and mature blood cells (dome− Hml+). We observed a reduction in the number of dividing progenitor cells following infection (Figure 4A-E), while the total number of cells per primary lobe was not significantly changed (Figure 4F). These data from in vivo fixed tissues were confirmed by quantifying cell divisions in long-term live imaging experiments on dome+ progenitors in LGs from wild-type control and infected larvae, which showed a reduction in the number of cell division events (Figure 4H). In terms of the duration of mitotic events, there was no significant difference between progenitors in LGs from wild-type control and infected larvae (Figure 4I). There was, however, a slight change in the type of cell divisions observed in progenitors upon infection, as we no longer saw divisions along the z-axis (or dorsal-ventral axis; Figure 4G).

Figure 4 (partial legend). ...(magenta). Genotype of the LG was dome-MESO-LacZ; HmlΔ-dsRed. (F) Quantification of the total number of cells per primary lobe from wild-type control (n=14 lobes) and E. coli infected larvae (n=11 lobes). p-value = 0.0872. (G) Quantification of the percentage of mitotic events occurring in blood progenitors along either the z-axis or the ρ-axis in LGs from wild-type control and E. coli infected larvae. (H) Quantification of the number of mitotic events in blood progenitors in LGs from wild-type control (n=10 videos) and E. coli infected larvae (n=6 videos). p-value = 0.0397. (I) Quantification of the duration of mitotic events in LGs from wild-type control (n=63 dividing progenitors) and E. coli infected larvae (n=15 dividing progenitors). p-value = 0.1365. (J) Heat maps summarising data from long-term imaging experiments showing the number of mitotic events recorded in distinct regions of LGs from wild-type control and E. coli infected larvae (see Materials and methods). (K) Schematic illustrating the orientation (θ1 and θ2) of ρ-mitoses with respect to the heart tube.

A custom quantitative image analysis algorithm was developed and used to explore spatial differences in the location and orientation of cell division between progenitors in LGs from wild-type control and infected larvae (Figure 4-figure supplement 2; see Materials and methods). First, the LG was segmented into regions based on distance and location relative to the heart tube in LGs from wild-type control and infected larvae (Figure 4J; see Methods).
This analysis showed a uniform reduction in progenitor cell divisions throughout the LGs following infection (Figure 4J; correlation coefficient for changes in distribution upon infection compared to control = 0.44, consistent with weak correlation; see Methods). Analysis of the distribution of the division frequency of progenitors and their relative angle to the heart tube showed that the reduction in progenitor divisions following infection was uniform across all angles within the LGs, and there was no change in the duration of mitotic events at any division angle relative to the heart tube (Figure 4K-M). The orientation of these divisions remained biased following infection (Figure 3-figure supplement 1G). Taken together, the data show that, when compared to uninfected controls, following infection: (1) there is a reduction in the number of progenitors undergoing division; (2) the division of progenitors is more likely to occur along the plane formed by the anterior-posterior and right-left axes; and (3) the duration and orientation of mitotic events in progenitors are unchanged. Importantly, these results show that changes in proliferation are unlikely to account for the increased number of mature blood cells produced following infection. Consequently, changes in differentiation are likely the main driver of the increased mature blood cell production following infection.

Quantitative imaging identifies two types of blood cell differentiation in the LG

We analyzed mature blood cell differentiation in LGs from wild-type larvae in real time using quantitative imaging of genetically encoded fluorescent reporters for cell identity and signaling activity. This analysis identified two distinct differentiation trajectories, which we refer to as sigmoid and linear. In a cell following a sigmoid trajectory, named after the shape of a sigmoid function curve (Figure 5D), the initial level of dome-MESO-GFP is high while the level of eater-dsRed is low (Figure 5C; Video 6). As time passes in a cell following a sigmoid differentiation trajectory, the level of dome-MESO-GFP decreases and, after a short delay, the level of eater-dsRed increases (Figure 5C). A key feature of cells undergoing a sigmoid differentiation trajectory is that the differentiation process is broken down into an initial slow phase and then a rapid fast phase (Figure 5D). This is best visualised by calculating the ratio of dsRed to GFP in the cell as a function of time. The graph shows the characteristic sigmoid function shape from which the name of this type of differentiation trajectory is derived. Importantly, the sigmoid differentiation trajectory results in a rapid shift from a high dome-MESO-GFP and low eater-dsRed cell to a differentiated low dome-MESO-GFP and high eater-dsRed cell (Figure 5C-D).

Video 5. Long-term tracking of blood progenitor differentiation in a wild-type LG. Representative video of a differentiating blood progenitor (initially green) turning into a differentiated mature blood cell (red at the end) in a live intact LG. Blood progenitors were marked by dome-MESO-GFP (green). Mature hemocytes were marked by eater-dsRed (red). The tracked progenitor was highlighted with a pink ROI by TrackMate throughout the recording. The LG was obtained from an early 3rd instar larva (of genotype dome-MESO-GFP, eater-dsRed) raised at 25 °C, dissected, immediately mounted and imaged. Scale Bar: 10 µm.
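The ratio analysis described above can be made concrete with a short sketch: for each tracked cell, compute eater-dsRed / dome-MESO-GFP at every time point and estimate the differentiation rate as the slope of that ratio over the differentiation interval. A plain least-squares slope stands in here for the authors' quantification, and the intensity values are invented for illustration:

```python
# Sketch of the dsRed/GFP ratio analysis: the differentiation rate is taken
# as the least-squares slope of the ratio versus time. Values are made up.

def differentiation_rate(times_hr: list[float], dsred: list[float], gfp: list[float]) -> float:
    """Least-squares slope of the dsRed/GFP ratio versus time (per hour)."""
    ratios = [r / g for r, g in zip(dsred, gfp)]
    n = len(times_hr)
    t_mean, r_mean = sum(times_hr) / n, sum(ratios) / n
    num = sum((t - t_mean) * (r - r_mean) for t, r in zip(times_hr, ratios))
    den = sum((t - t_mean) ** 2 for t in times_hr)
    return num / den

# A sigmoid-like trajectory: the ratio shifts abruptly once the fast phase begins.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
gfp = [200.0, 195.0, 180.0, 90.0, 40.0, 35.0]    # dome-MESO-GFP falls
dsred = [20.0, 22.0, 30.0, 120.0, 180.0, 190.0]  # eater-dsRed rises after a delay
print(f"rate ~ {differentiation_rate(times, dsred, gfp):.2f} ratio units/hr")
```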
In comparison, in a cell following a linear differentiation trajectory, the relative levels of both dome-MESO-GFP and eater-dsRed are high to begin with (Figure 5E; Video 7). As time passes in a cell following a linear differentiation trajectory, the level of dome-MESO-GFP decreases while the level of eater-dsRed remains high. A key feature of cells undergoing a linear differentiation trajectory is that the differentiation process exhibits a uniform rate (Figure 5F). This is best visualised by calculating the ratio of dsRed to GFP in the cell as a function of time. The graph shows the characteristic linear shape from which the name of this differentiation trajectory is derived (Figure 5F). Importantly, the linear trajectory results in a gradual shift from a high dome-MESO-GFP and high eater-dsRed cell to a differentiated low dome-MESO-GFP and high eater-dsRed cell (Figure 5E and F). The rate of differentiation from a progenitor to a blood cell can be quantified by calculating the slope of the dsRed:GFP ratio as a function of time during the phase where differentiation occurs (Figure 5G). This analysis shows that sigmoid trajectory differentiation occurs at a rapid rate over a short time frame, while linear trajectory differentiation is slower and more gradual (Figure 5H). Importantly, the two differentiation trajectories appear distinct, and cells undergoing linear differentiation are not simply in the later phase of sigmoid type differentiation where eater-dsRed is high but dome-MESO-GFP is already low. This is evident because there are features that distinguish the two trajectories in the later phases. Specifically: (1) a key characteristic of the linear differentiation trajectory is that dome-MESO-GFP declines throughout the process, whereas in the later phases of the sigmoid differentiation trajectory dome-MESO-GFP levels become stable (compare Figure 5C and E); the kinetics of linear type differentiation are therefore different from the later phases of sigmoid type differentiation. (2) From the middle to late phase of the sigmoid differentiation trajectory, eater-dsRed levels go up after dome-MESO-GFP levels are already low or still decreasing, whereas in the linear trajectory eater-dsRed levels are stable in the later parts of the trajectory.

[Video 6 legend: Dynamics of sigmoid type differentiation in a wild-type blood progenitor. Real-time tracking of dome-MESO-GFP and eater-dsRed intensities in a wild-type blood progenitor undergoing sigmoid type differentiation over the course of 5-6 hr. Each dot represents a single time point. The LG was obtained from an early 3rd instar larva (genotype dome-MESO-GFP, eater-dsRed) raised at 25 °C, dissected, immediately mounted, and imaged. https://elifesciences.org/articles/84085/figures#video6]

[Video 7 legend: Dynamics of linear type differentiation in a wild-type blood progenitor. Real-time tracking of dome-MESO-GFP and eater-dsRed intensities in a wild-type blood progenitor undergoing linear type differentiation over the course of 7-8 hr. Each dot represents a single time point. The LG was obtained from an early 3rd instar larva (genotype dome-MESO-GFP, eater-dsRed) raised at 25 °C, dissected, immediately mounted, and imaged.]
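In the study, trajectories were categorized by the shape of the dsRed:GFP ratio curve (see Materials and methods). One way to make that call programmatically, shown purely as an illustration and not as the authors' method, is to compare a straight-line fit against a logistic fit of the ratio time course; the traces below are synthetic.

```python
# Sketch: classify a dsRed:GFP ratio time course as sigmoid-like or
# linear-like by comparing residuals of a linear fit vs a logistic fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def logistic(t, lo, hi, t50, k):
    return lo + (hi - lo) / (1.0 + np.exp(-k * (t - t50)))

def classify_trajectory(t, ratio):
    """Return the better-fitting shape and the linear slope (the
    differentiation-rate read-out used for the linear trajectory)."""
    lin = linregress(t, ratio)
    ss_lin = np.sum((ratio - (lin.intercept + lin.slope * t)) ** 2)
    p0 = [ratio.min(), ratio.max(), np.median(t), 1.0]
    try:
        popt, _ = curve_fit(logistic, t, ratio, p0=p0, maxfev=10000)
        ss_sig = np.sum((ratio - logistic(t, *popt)) ** 2)
    except RuntimeError:
        ss_sig = np.inf
    shape = "sigmoid" if ss_sig < ss_lin else "linear"
    return shape, lin.slope

t = np.linspace(0, 8, 33)                 # hours, hypothetical sampling
sig = logistic(t, 0.2, 2.0, 5.0, 3.0)     # slow phase then a fast switch
lin = 0.2 + 0.2 * t                       # steady rise at a uniform rate
print(classify_trajectory(t, sig))        # -> ('sigmoid', ...)
print(classify_trajectory(t, lin))        # -> ('linear', ...)
```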
Moreover, spatiotemporal analysis of differentiation trajectories suggests they occur separately, that is, not in a consecutive manner whereby cells undergo a sigmoid trajectory first and subsequently a linear trajectory. In particular, in the LG regions where we identified cells following different types of trajectories, tracking the trajectories of individual cells shows that (1) they exhibit a single distinct type of differentiation throughout (Figure 5-figure supplement 1A), and (2) cells undergo sigmoid or linear type differentiation either in parallel (see Cell1 and Cell2 in Box 1 of Wt in Figure 5-figure supplement 1A) or at different time points (see Box 2 of Wt in Figure 5-figure supplement 1A). Taken together, these results identify two distinct types of differentiation events in the LG.

Infection changes cell differentiation patterns in the LG
Live imaging experiments were used to track and quantify differentiation events in LGs from wild-type control and infected larvae. First, we confirmed that the general differentiation trends following infection were similar between LGs in the ex vivo culture system and physiological in vivo conditions (Figure 6-figure supplement 1). Second, we noted a significant (>2-fold) increase in the number of differentiation events observed in LGs following infection (Figure 6A). Third, we determined the spatial distribution of differentiation events in the LGs from wild-type control and infected larvae. A general increase was seen in differentiation events, especially near the area that would correspond to the MZ-CZ boundary (around 60% distance to the heart tube; Figure 6B-C; correlation coefficient for changes in distribution upon infection compared to control = 0.49, consistent with weak correlation; see Materials and methods). Finally, the spatial distribution of sigmoid and linear trajectory differentiation events was analysed separately (Figure 6D-G). This revealed that the spatial distribution of sigmoid trajectory differentiation events was not greatly altered by infection (Figure 6D-E; correlation coefficient between heat maps of 6D and 6E = 0.14, consistent with no correlation). In comparison, the spatial distribution of linear trajectory differentiation events showed differences following infection (Figure 6F-G; correlation coefficient between heat maps of 6F and 6G = 0.45, consistent with weak correlation). In particular, there was an increase in the frequency of differentiation events, especially near the area that would correspond to the MZ-CZ boundary. In addition, spatiotemporal analysis of the two types of differentiation trajectories following infection showed that they took place either in parallel (for example Cell2 and Cell3 in Box 1 of the infection group in Figure 5-figure supplement 1B) or at different time points (Figure 5-figure supplement 1B). Next, we examined the two types of differentiation trajectories in LGs following infection. Both types of trajectories were still observed; however, the differentiation trajectories exhibited some variance in infected versus control larvae. For example, progenitor cells following the sigmoid trajectory in infected LGs exhibited a prolonged intermediary phase, during which both dome-MESO-GFP and eater-dsRed were expressed at low levels (Figure 7A-A'', Figure 7E; Figure 7-figure supplement 1A and C; Videos 6 and 8). Moreover, as a result of the prolonged intermediary phase, the average rate of differentiation for the sigmoid trajectory was lower upon infection (Figure 7F). We also observed a slightly modified linear type trajectory in LGs from infected larvae compared to controls: in controls, the expression of eater-dsRed was relatively constant while that of dome-MESO-GFP declined with time (Figure 2A and B, E and F).
Importantly, upon infection, there is an around 20% increase in the proportion of differentiation events that follow the linear trajectory and a corresponding decrease in the proportion of differentiation events that follow the sigmoid trajectory (Figure 7H). Taken together, the data are consistent with a model whereby infection causes higher differentiation in the LGs not by increasing the rate of differentiation but rather by inducing a shift from one type of differentiation, the sigmoid trajectory, to another, the linear trajectory. To understand why there was a reduction in the number of sigmoid trajectory differentiation events, we applied a modified version of a technique known as histo-cytometry, which presents in vivo derived data in a format similar to flow cytometry data (see Materials and methods; Stoltzfus et al., 2020). We imaged LGs expressing eater-dsRed and dome-MESO-GFP and performed automated image analysis to determine the relative amounts of these markers in individual cells in the LG (Figure 7I-L; in total 2500 cells captured from 6 primary lobes of LGs in both wt and infection groups; see Materials and methods). When compared to the wild-type control, the relative distribution of expression profiles of eater-dsRed and dome-MESO-GFP was greatly altered by infection. Specifically, there was an overall increase in the expression of eater-dsRed following infection in many cells in the LGs (Figure 7K-L). This shift to higher eater-dsRed can indicate immune activation, as eater is transcriptionally activated as part of the immune response following infection (Kocks et al., 2005; Kroeger et al., 2012; Ye and McGraw, 2011), but the shift is also consistent with a greater proportion of progenitors undergoing differentiation. Cells were classified into four general categories based on their differentiation profile: GFP-high dsRed-low (most stem cell-like), GFP-low dsRed-high (most differentiated), GFP-high dsRed-high, and GFP-low dsRed-low (Figure 7M). While a sigmoid differentiation trajectory proceeds from GFP-high dsRed-low to GFP-low dsRed-low to GFP-low dsRed-high, the linear trajectory proceeds from GFP-high dsRed-low to GFP-high dsRed-high to GFP-low dsRed-high (Figure 7M). Notably, following infection, the relative overall population of GFP-high dsRed-high cells expanded markedly (97 cells in wt LGs and 557 cells in LGs following infection; Figure 7N: data summarized from each quadrant of 7J and 7L; see Materials and methods).

[Figure 7 legend (excerpt): Blood progenitors are labelled with dome-MESO-GFP; mature hemocytes are labelled with eater-dsRed. (E-G) Quantification of the duration of the fast differentiation phase in progenitors undergoing the sigmoid differentiation trajectory (E, n=6 and 8 progenitors from LGs of wild-type control and E. coli infected larvae; p-value = 0.0226), the differentiation rate measured in progenitors undergoing the sigmoid type differentiation trajectory (F, n=6 and 8 progenitors from LGs of wild-type control and E. coli infected larvae, respectively; p-value = 0.2731), and the differentiation rate measured in progenitors undergoing a linear type differentiation trajectory (G, n=6 and 15 progenitors from LGs of wild-type control and E. coli infected larvae, respectively; p-value = 0.6613). (H) Quantification of the percentage of sigmoid or linear type differentiation trajectories observed in LGs from wild-type control and E. coli infected larvae. (I-J) Representative image (I) and scatterplot (J) of wild-type control LGs (n=2500 cells in total analyzed from 6 primary lobes of 3 LGs; see Materials and methods). (K-L) Representative image (K) and scatterplot (L) of LGs from E. coli infected larvae (n=2500 cells in total analyzed from 6 primary lobes of 3 LGs; see Materials and methods). (M) Schematic illustrating the two observed differentiation trajectories (sigmoid and linear). Based on their fluorescent intensities of GFP (progenitor fate marker) and dsRed (differentiated state marker), cells in the LG are categorized into 4 groups: GFP-high RFP-low, GFP-high RFP-high, GFP-low RFP-high, GFP-low RFP-low. P: blood progenitors. (N) Quantification of the total number of cells in each quadrant of (J) and (L) from the LGs of wild-type control and E. coli infected larvae. p-values in (E-G) were determined using the Kolmogorov-Smirnov test; * indicates p<0.05; ns indicates non-significant, p>0.05. Scale bars in (I) and (K) represent 50 μm. Error bars indicate SD from the mean. Genotype of the LG was dome-MESO-GFP; eater-dsRed. See also Videos 6-9. Source data 1: raw data of Figure 7A, A', A'', B, B'.]
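A minimal sketch of the quadrant gating implied by this classification is shown below (Python for illustration; the thresholds and intensities are hypothetical, and in practice cutoffs would be derived from the control scatterplot, e.g. Figure 7J, rather than fixed values).

```python
# Sketch: assign segmented cells to the four GFP/dsRed quadrants of
# Figure 7 and count cells per quadrant, histo-cytometry style.
from collections import Counter

def quadrant(gfp, dsred, gfp_thr, dsred_thr):
    """Label a cell by its position in the GFP/dsRed plane."""
    g = "GFP-high" if gfp >= gfp_thr else "GFP-low"
    r = "dsRed-high" if dsred >= dsred_thr else "dsRed-low"
    return f"{g} {r}"

def quadrant_counts(cells, gfp_thr, dsred_thr):
    """cells: iterable of (gfp_intensity, dsred_intensity) per nucleus."""
    return Counter(quadrant(g, r, gfp_thr, dsred_thr) for g, r in cells)

# Hypothetical per-nucleus intensities and thresholds.
cells = [(900, 50), (850, 700), (60, 820), (70, 40)]
print(quadrant_counts(cells, gfp_thr=500, dsred_thr=400))
```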
Overall, these observations suggest a possible link between the two main features seen upon infection: increased overall differentiation and an increased proportion of cells undergoing the linear trajectory of differentiation. In particular, these findings confirm the view that there are different subpopulations of progenitors in the LG (Cho et al., 2020; Blanco-Obregon et al., 2020), and raise the possibility that changes in the proportion of these different subpopulations are involved in the activation of mature blood cell differentiation following infection.

Discussion
By employing organ culture, using genetically encoded markers for cell cycle, proliferation, and differentiation, and implementing quantitative image analysis, we are able to study the process of fly hematopoiesis at single cell resolution in its endogenous context. This has allowed us to observe features of hematopoiesis that only become apparent upon system- and organ-level analysis. Our observations lead us to four main conclusions. First, our results illustrate that certain populations of blood progenitors in the fly can undergo symmetric cell divisions. Second, we find that blood progenitor division is more likely to occur once cells reach a specific cell size, and that division is spatially oriented with respect to the heart tube and anatomical axes. Third, we identify and characterise two distinct modes of differentiation. Fourth, we show that the induction of mature blood cell production in response to infection is not achieved by modulating progenitor proliferation or the speed of differentiation, but by increasing the size of a population of progenitors that express high levels of both differentiated and progenitor cell markers. The ex vivo culture and imaging protocol we describe provides a powerful new way to study hematopoiesis over a prolonged time using quantitative imaging. Although such an approach holds much promise, its use calls for caution and for the consideration of certain caveats. For example, following infection, while LGs in vivo would be exposed to systemic immune signals, cultured LGs may not have access to such continuous extrinsic signals, which may change their behaviour.
During our work, we have looked at a number of parameters of differentiation, proliferation, and tissue homeostasis and did not see any strong differences between in vivo and ex vivo organ culture. Nonetheless, such differences may exist in some circumstances. Another caveat is that, due to the combination of tagged markers available for live imaging, we were only able to follow differentiation events leading to plasmatocyte, and not crystal cell, fate. Future work will focus on addressing this to provide a more complete view of blood cell differentiation. Finally, our observations show the existence of a large population of LG cells that expresses low levels of both dome-MESO-GFP and eater-dsRed. This would suggest that a substantial proportion of progenitors begin to differentiate but pause at this stage. However, we did not observe this sort of paused trajectory in our live trajectory-tracking experiments. We can only speculate at this point that cells in the paused trajectory are produced before the early third instar larval stage, which is the stage we chose for live imaging. In our future work, we will extend our analysis to earlier larval stages, which we hope will test this hypothesis directly. Our work supports earlier studies that described the presence of distinct subpopulations of progenitors in the LG (Cho et al., 2020; Blanco-Obregon et al., 2020). According to this emerging model, the progenitor pool is not homogeneous during hematopoiesis but rather contains subpopulations at different levels of differentiation. Importantly, these studies used single-cell transcriptomics, an approach different from ours, but they also suggest that multiple paths exist for blood progenitors to differentiate into plasmatocytes (Cho et al., 2020). Specifically, they identified a path that contains an intermediate mixed lineage stage of differentiation and a more direct path that does not include intermediate steps (Cho et al., 2020). Additional subpopulations that have been proposed to exist in the LG include the intermediate progenitors (Krzemien et al., 2010; Sinenko et al., 2009; Sharma et al., 2019; Cho et al., 2020; Girard et al., 2021; Spratford et al., 2021) as well as the distal progenitors (Blanco-Obregon et al., 2020), which express a mixture of both progenitor and differentiated cell markers. Intermediate progenitors express the progenitor marker Dome and the early differentiation marker Pxn but not mature blood cell markers like P1 or Lz (Sharma et al., 2019; Sinenko et al., 2009; Krzemien et al., 2010). Distal progenitors also exhibit a mixed fate, expressing the progenitor marker Dome but also hallmarks of differentiated cells, such as expression of the plasmatocyte marker eater and absence of the progenitor marker Tep4 (Blanco-Obregon et al., 2020). Other studies suggested the existence of PSC-dependent and PSC-independent progenitors (Baldeosingh et al., 2018; Mandal et al., 2007). In our live imaging experiments, we observed a substantial population of cells in the LG that simultaneously express high levels of both progenitor and differentiated cell markers, which likely includes cells belonging to one or more of these mixed lineage subpopulations (Cho et al., 2020; Blanco-Obregon et al., 2020). Our studies suggest that these mixed lineage cells play a crucial role in hematopoiesis, as they undergo the linear differentiation trajectory, which drives increased differentiation in response to infection.
Consistent with this idea, spatial analysis of where progenitors that follow the different trajectories are located shows that the linear trajectory occurs mostly in the region thought to hold intermediate progenitors. Surprisingly, the linear differentiation trajectory is substantially slower than the sigmoid differentiation trajectory. This appears to contradict another of our observations: that a very large proportion of the progenitors following the sigmoid trajectory are found in an intermediate state where both the progenitor and differentiation markers are expressed at low levels (see Figure 7J, area #2). The accumulation of sigmoid-trajectory progenitors at the intermediate phase would suggest this is a long-lasting phase, but this is not what we saw in our direct tracking of differentiation trajectories. We propose that a possible explanation to resolve this potential contradiction is that only a small proportion of the cells found in the intermediate state, where both the progenitor and differentiation markers are expressed at low levels, go on to differentiate. Furthermore, it is unclear why the linear, slower trajectory would be favored under conditions where we would expect a need for rapid production of immune cells. We can speculate that the benefits of expanding the population of cells undergoing the linear trajectory exceed the disadvantages conferred by the slower differentiation time. We would propose, based on our observations, that the various subpopulations with mixed progenitor and differentiated cell fate have an important role during infection by acting as transit-amplifying cells that allow rapid induction of the cellular immune response. Understanding the behavior of these intermediate state cells should be a focus of future investigation. Multiple systems and approaches have been used to track HSCs and blood progenitors during hematopoiesis in their native environment in real time. Key examples include studies in zebrafish that used intravital imaging (Zhang and Liu, 2011; Frame et al., 2017), studies in mice that combined diverse approaches such as inducible lineage tracing, flow cytometry, and single-cell RNA sequencing (Upadhaya et al., 2018), as well as mouse studies based on intravital imaging of the bone marrow (Christodoulou et al., 2020). Zebrafish have proven to be particularly useful for live studies of hematopoiesis, due to the relative ease of intravital imaging and the wealth of available transgenic tools (Zhang and Liu, 2011). Zebrafish have been a powerful system for studying the embryonic development of HSCs and the hematopoietic niche, as well as for drug, chemical, and genetic screening (Arulmozhivarman et al., 2016). In addition, zebrafish have proven very useful for modeling blood malignancies and tracking their development and disease progression (Robertson et al., 2016; Gore et al., 2018). In the mouse system, which poses many challenges for intravital imaging, alternative approaches have been used to capture the dynamics of the process of hematopoiesis (Upadhaya et al., 2018; Grinenko et al., 2018). For example, Upadhaya et al., 2018 used a drug-inducible HSC labeling technique to isolate HSCs and their progeny at set time points and follow the transcriptional landscape of the progenitors as they progress along their developmental trajectory (Upadhaya et al., 2018).
This type of analysis yields several intriguing insights into hematopoiesis, such as the differences in the time it takes for various blood lineages to differentiate (Upadhaya et al., 2018). Moreover, despite the technical challenges, there have been several successful attempts to image the process of hematopoiesis in vivo using the bone marrow of the mouse skull. While initially this approach was limited to the use of isolated, labelled, and transplanted HSCs (Lo Celso et al., 2009), more recent studies used an endogenously labelled HSC line (Christodoulou et al., 2020). Although they constitute important technical breakthroughs, these studies still face several challenges and allow the visualization of only a relatively short time window compared to the actual time it takes progenitors to differentiate in the mouse. Consequently, these studies were limited to describing the architecture of the bone marrow niche and the location of HSCs within it (Lo Celso et al., 2009), or to general descriptions of a small subset of cell behaviors such as HSC/progenitor motility and expansion (Christodoulou et al., 2020). Our approach offers the ability to perform real-time functional studies that can complement observations from these other models of hematopoiesis. In particular, compared with these earlier studies, our approach offers several key innovations. First is our ability to track multiple markers simultaneously in a quantitative way during long-term live imaging. Specifically, our approach allows us to quantitatively track, for 12 or more hours, markers of cell fate in combination with multiple other markers for proliferation, metabolism, cell signaling, and cell morphology. Moreover, the relatively short duration of the differentiation process in the fly, approximately 6-8 hr versus 1-3 weeks for various leukocyte lineages in the mouse (Upadhaya et al., 2018), allows us to observe differentiation in its entirety. Second, the ability to track a large number of progenitors and quantitate both their behavior and the expression of markers using image analysis tools allows the deployment of system-level approaches. This offers the capability to track hematopoiesis at a cellular and even subcellular spatial resolution and a temporal resolution of a few seconds, well beyond previous studies. Third, the ability to combine these powerful analysis tools with an infection model makes it possible to visualize the induction of the cellular branch of the immune response in real time in order to elucidate the underlying mechanisms. Fourth, the vast genetic toolkit and short generation time of the fly, the accessibility of the LG multi-organ co-culture system to drug and organ-organ communication studies, and the detailed and extensive transcriptomic analysis of blood cell differentiation (Cho et al., 2020; Girard et al., 2021) all make it a superb system for real-time analysis of hematopoiesis. Specifically, a major goal of our future work will be to combine the analysis pipeline we describe here with markers and tools to analyze and manipulate the various cell signaling pathways that have been implicated in the regulation of hematopoiesis under homeostatic, infection, and pathogenic conditions.

Resource availability
Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Guy Tanentzapf (tanentz@mail.ubc.ca).

Materials availability
This study did not generate new reagents.
Data and code availability
All raw data reported in this paper are deposited in the Source Data files of this study. MATLAB scripts used for counting the total number of cells in a LG are publicly available (Khadilkar et al., 2017). All other custom-written scripts, including the R and MATLAB scripts used for analyses in this study, are available on the Tanentzapf lab GitHub (https://github.com/Tanentzapf-Lab/LiveImaging_HematopiesisKinetics_Infection_Ho_Carr; copy archived at Ho et al., 2023).

Long-term ex vivo organ culture and confocal imaging
Early third instar larvae (84 hr after egg laying [AEL]) were chosen for all long-term live imaging experiments, washed three times with PBS, rinsed quickly with 70% ethanol, and then washed three times again with PBS. Organs including larval LGs, fat bodies, ring gland, central nervous system, and heart tube were dissected in Drosophila Schneider's medium (ThermoFisher Scientific, Catalog number 21720001) at room temperature. The connection between the CNS, ring gland, LG, and heart tube was maintained during all steps, from larval dissection and organ mounting to confocal imaging. The dissected organs were then placed and mounted in Schneider's medium supplemented with 15% FBS (ThermoFisher Scientific, Catalog number 12483-020) and 0.2 mg/mL insulin (Sigma I0516) in a glass-bottom dish. The medium was prepared fresh within 10 min prior to dissection at room temperature. The LG was mounted in such a manner as to align the dorsal-ventral axis of the tissue with the z-axis of the confocal optical section. To stabilize the LG, the organs were covered with a 1% agar pad, and spacers made from 1% agar were placed between the agar pad and the glass-bottom dish to shield the organs from mechanical force. Optimal moisture conditions during live imaging were maintained by the addition of 2 mL of the medium on top of the agar pad. All live imaging experiments were performed at 25 °C in a microscope incubation chamber (TOKAI HIT, Catalog number: INU-ONICS F1). LGs were imaged using an Olympus FV1000 inverted confocal microscope with a 40× oil-immersion lens (UPLFLN, numerical aperture 1.30). Imaging duration varied due to movement caused by occasional heart tube contractions (see Figure 1B). The middle two planes of LGs, spaced by 1.5 μm, were imaged at 15-s intervals using lasers with excitation wavelengths of 488 nm (green laser) and 561 nm (red laser). These parameters were chosen to minimize phototoxicity, increase temporal resolution, and maximize the number of cells captured in each experiment. To avoid phototoxicity and photobleaching in the LGs, the laser was kept at 1% power (Icha et al., 2017), which is the weakest laser power that provides a good signal for live LGs on the FV1000 confocal microscope. At this laser power setting, we observed no noticeable photobleaching (i.e., signal levels did not drop substantially over the course of imaging; see the multiple videos and time-lapse images in this study as evidence) or phototoxicity (the main cause of which is an increased ROS level in a sample upon strong laser illumination; Icha et al., 2017; Figure 1E shows that the ROS level remained low and stable in LGs during imaging). No correction for photobleaching was performed. Time-lapse recordings of LGs and the resulting t-series images were processed using Fiji (Schindelin et al., 2012) and MATLAB software. All fluorescence intensities in this study are mean grey values measured in Fiji.
EdU proliferation assay on LGs
The Click-iT EdU (5-ethynyl-2'-deoxyuridine) imaging kit (Life Technologies, Cat# C10337) was used to perform the cell proliferation assay. Larvae were washed three times with PBS, rinsed quickly with 70% ethanol, and then washed and dissected in Schneider's medium at room temperature. The LGs (with CNS, ring gland, heart tube, and fat bodies) were cultured in Schneider's medium (with 15% FBS and 0.2 mg/mL insulin) supplemented with EdU at a final concentration of 10 μM for an hour. Following incubation in the EdU solution, the LGs were fixed in 4% PFA for 15 min, rinsed twice with 16% Normal Goat Serum, washed with 0.1% PTX for 20 min, and then incubated in a Click-iT reaction cocktail (430 μL 1× Click-iT reaction buffer, 20 μL CuSO4, 1.2 μL Alexa Fluor azide, and 50 μL 1× Click-iT EdU buffer additive) for 30 min at room temperature in the dark. After the incubation, the cocktail solution was removed and the LGs were washed twice with 16% Normal Goat Serum (10 min each) and mounted in VECTASHIELD with DAPI in glass-bottom dishes. The EdU signal from the LGs was imaged using a laser with an excitation wavelength of 488 nm. The number of EdU-positive cells per primary lobe was counted manually in Fiji using the Cell Counter plugin.

Cell death monitoring during live imaging
Cell death during long-term live imaging was monitored using the nucleic acid stain Sytox Green (ThermoFisher Scientific, Catalog number S7020). The Sytox Green dye functions as an indicator of dying cells, as the dye is impermeable to the plasma membrane of live cells. Dissected LGs were incubated and imaged in Schneider's medium containing 2 μM Sytox Green, or in PBS as a positive control. A stock solution of Sytox Green (5 mM in DMSO) was prepared and diluted 1:2500 to a final concentration of 2 μM. The LG was imaged immediately after being mounted in Sytox Green-containing medium. Cell death was assessed by counting either all the progenitors (shown in Figure 1F and Figure 1-figure supplement 2D) or cells in the entire LG (shown in Figure 1-figure supplement 2E). No particular subset of progenitors or portion of a LG was selected for imaging.

Real-time tracking of cell cycle phases during live imaging
To track the cell cycle, the Fly-FUCCI system was used (Zielke et al., 2014). The Fly-FUCCI system consists of two major UAS transgenes carrying GFP- or RFP-tagged degrons: UAS-GFP.E2f1.1-230 (N-terminal amino acids 1-230 of E2f1 fused to GFP) and UAS-mRFP1.CycB.1-266 (N-terminal amino acids 1-266 of CycB fused to RFP). E2f1 is degraded by the S phase-dependent ubiquitin ligase CRL4-Cdt2, while CycB is targeted by the APC/C for proteasomal degradation from mid-mitosis through G1 phase. By combining the two probes, cells in G1 phase are labelled green (E2f1-GFP accumulation), cells in S phase are labelled red (CycB-mRFP accumulation), and cells in G2 phase are labelled yellow (presence of both E2f1-GFP and CycB-mRFP). The fluorescent intensities of GFP and RFP of individual cells in the LG at each time point were tracked and exported using the Fiji TrackMate plugin (Tinevez et al., 2017) and plotted using GraphPad Prism (Ver. 6).
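As a minimal illustration of the FUCCI phase-calling logic described above (not the authors' code), the sketch below assigns a cell-cycle phase from per-cell GFP/RFP intensities; the thresholds and intensity values are hypothetical and would in practice be set from the imaging data.

```python
# Sketch: call cell-cycle phase from Fly-FUCCI probe intensities.
# G1 = green only (E2f1-GFP), S = red only (CycB-mRFP), G2 = both (yellow).
def fucci_phase(gfp, rfp, gfp_thr, rfp_thr):
    green, red = gfp >= gfp_thr, rfp >= rfp_thr
    if green and red:
        return "G2"
    if green:
        return "G1"
    if red:
        return "S"
    return "undetermined"  # low in both channels; not assignable by this rule

# Hypothetical TrackMate intensity read-outs and thresholds.
for g, r in [(800, 90), (100, 750), (700, 700), (50, 60)]:
    print(g, r, "->", fucci_phase(g, r, gfp_thr=300, rfp_thr=300))
```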
Heat map construction
Spatial information of cellular events (including cell division and differentiation) from long-term LG videos was extracted using two MATLAB scripts and an image analysis workflow (Figure 4-figure supplement 2; Data and code availability; Tanentzapf lab GitHub). The workflow contained 9 steps: (1) The frame in which cellular events were identified was saved as an image (.tiff) using Fiji. The heart tube (as a landmark structure) was annotated based on its well-defined location with respect to the two lobes of the LG (see step 1 in Figure 4-figure supplement 2; the heart tube is highlighted as a white line next to the LG lobe). The image was then loaded into the first custom-written script (Data and code availability; Tanentzapf lab GitHub). (2) The image was rotated so that the heart tube was aligned parallel to the y-axis. (3) For later comparison, the image was then flipped so that the lobe faced right and the heart tube faced left; this step ensured that all LG lobes faced the same direction with respect to the landmark structure. (4) The boundary of the lobe and the location where a cellular event was observed were manually selected. (5) A bounding box was created by the script, and the width and height of the bounding box were defined. (6) The total width and height of the bounding box were divided equally into five segments (each 20% of the total width and height, respectively) to create a grid. (7) A single heat map showing the location of a cellular event was created. (8-9) Multiple heat maps from different videos were combined into a final heat map using the second custom script (Data and code availability; Tanentzapf lab GitHub). To statistically compare heat maps, correlation coefficients between heat maps were calculated using the corrcoef function in MATLAB. A correlation coefficient of 0-0.25, 0.25-0.5, 0.5-0.75, or 0.75-1 was defined as no, weak, moderate, or strong correlation, respectively. A weak correlation suggested that a shift, but not a complete relocation, of cellular events was observed.

Spatiotemporal analyses of progenitor mitotic events
Mitotic events were tracked in blood progenitors labelled by dome-Gal4 > UAS-mGFP or dome-MESO-GFP in long-term LG videos using Fiji. The following quantitative analyses were performed on the mitotic events: (1) Duration of mitotic events was defined as the time a mother progenitor spent from the onset of mitosis to the point at which the nuclei of the two progeny reformed and were clearly visualized (approximately telophase; Video 1; Video 4; Figure 2A-B). The onset of mitosis was defined as 40 frames (roughly 10 min) before nuclear envelope breakdown (which happens in prophase) was observed. The same criteria were applied to all mitosis analyses in our study. (2) The cell size of individual daughter cells post-mitosis was tracked over 3 hr and measured in Fiji using the Polygon ROI Selection tool. The ROI was drawn along the cell membrane marked by dome-Gal4-driven membranous GFP. A z-stack of 2 slices was maximum-projected in Fiji before the measurement. (3) The position of a contractile ring was inferred from the location where the cleavage furrow occurred in dividing cells. The distance of the contractile ring to the two poles of a dividing cell was measured in Fiji using the Straight Line ROI Selection tool. A z-stack of 2 slices was maximum-projected in Fiji before the measurement. (4) The JAK-STAT signaling activity (reflected by dome-MESO-GFP intensity) of daughter cells was measured in Fiji using the Circle ROI Selection tool.
(5) ρ-mitosis: as illustrated in Figure 3F, a ρ-mitosis was defined as a mitosis occurring in the plane formed by the right-left and posterior-anterior axes. Progenitors dividing away from or towards the heart tube in this plane, at any angle from 0 to 180 degrees, were classified as undergoing ρ-mitosis. (6) z-mitosis: as illustrated in Figure 3F, a z-mitosis was defined as a mitosis occurring along the dorsal-ventral axis at any angle from 0 to 180 degrees. (7) The orientation of a ρ-mitosis relative to the heart tube was determined from the angle between the mitosis direction (the direction perpendicular to the cleavage furrow and parallel to the axis between the two newly formed nuclei of the daughter cells) and the heart tube. The newly formed nuclei of the daughter cells were used to determine the relative position of the cells and infer the plane of division. The angle was manually measured in Fiji using the Angle tool. To test whether the orientation of mitosis follows a normal distribution or shows a bias towards a certain direction in the LGs, a Q-Q plot statistical analysis was performed in R. The orientation data from individual progenitors undergoing mitosis were loaded into R, and the Q-Q plot was produced using the qqnorm function in the R Stats package. To add a theoretical Q-Q line to the plot, the qqline function was used. Linearity of the points along the Q-Q line indicates that the data follow a normal distribution. (8) Distances of ρ- and z-mitoses to the heart tube and to the posterior end of the LG on the heart tube (a well-defined position where the PSC localizes) were measured manually in Fiji using the Straight Line ROI Selection tool. (9) The mitotic index of progenitors was calculated by dividing the number of pH3-labelled progenitors (pH3+ dome+) by the total number of progenitors (dome+) in the LG. Numbers of pH3+ progenitors were counted in Fiji using the Cell Counter plugin. (10) The positions where blood progenitors divide inside a LG were recorded, and the information from multiple long-term videos was then summarized in a heat map (as described above in the Heat map construction section) to visualize regions with different levels of mitotic activity.
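The normality check in item (7) has a rough Python analogue using scipy, shown below for illustration (the study itself used R's qqnorm/qqline; the angle values here are simulated, and the added Shapiro-Wilk test is a complementary suggestion, not part of the paper's pipeline).

```python
# Sketch: Q-Q-based normality check for division-angle data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
angles = rng.normal(90, 25, size=60)    # hypothetical angles (degrees)

# probplot returns the ordered quantile pairs and the least-squares Q-Q line.
(osm, osr), (slope, intercept, r) = stats.probplot(angles, dist="norm")
print(f"Q-Q line fit: r = {r:.3f}")     # r close to 1 -> points lie on line

# A complementary formal test for normality.
w, p = stats.shapiro(angles)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p:.3f}")
```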
Information of main markers used to track differentiation
To track blood progenitor differentiation in real time, dome-MESO-GFP and eater-dsRed were used in combination throughout the study to indicate the differentiation status of a cell (Oyallon et al., 2016; Hombría et al., 2005). (1) dome-MESO-GFP/LacZ: The dome-MESO-LacZ transgene construct was not a complete enhancer trap containing the entire domeless gene promoter sequence, but rather a 2.8 kb fragment from the first exon and first intron, containing multiple STAT binding sites. The expression of the dome-MESO construct has further been shown to be dependent on JAK-STAT signaling, demonstrating that JAK-STAT signaling forms a positive feedback loop in which its activity promotes the expression of its own receptor Dome (Hombría et al., 2005). This indicates, together with the original study in which the dome-MESO-LacZ was developed, that the dome-MESO-GFP/LacZ lines are reliable JAK-STAT signaling reporters for tracking progenitor cell fate. (2) eater-dsRed: The eater-dsRed line was chosen to track the differentiation status of a progenitor for the following reasons: (a) it is an enhancer-trap line made and verified to accurately reflect the spatial-temporal expression of the eater gene in the LG (Kroeger et al., 2012); (b) a further study confirmed that eater-dsRed marks both differentiated plasmatocytes (high eater-dsRed level) and distal progenitors already committed to a plasmatocyte fate (lower eater-dsRed level than mature plasmatocytes; Blanco-Obregon et al., 2020). By tracking the eater-dsRed level in combination with dome-MESO-GFP, the full range of the transition of a cell undergoing differentiation can be captured (see Figure 5; Blanco-Obregon et al., 2020; Girard et al., 2021). In contrast, using HmlΔ-dsRed as a differentiation marker brings up the possibility of conflating crystal cells and/or plasmatocytes when cells are tracked in live imaging experiments, which can complicate analyzing and interpreting differentiation kinetics data.

Spatiotemporal analyses of differentiation
Differentiation events were tracked in videos of LGs carrying dome-MESO-GFP (a JAK-STAT signaling activity reporter that marks blood progenitors; Oyallon et al., 2016; Krzemień et al., 2007) and eater-dsRed (a marker that starts to appear from distal committed progenitors through to mature plasmatocytes; Kroeger et al., 2012; Tokusumi et al., 2009). Blood progenitors stay in a steady state (cells expressing either dome-high eater-low before sigmoid differentiation or dome-high eater-high before linear differentiation) prior to the beginning of changes in dome and/or eater levels upon differentiation. The time point at which we could record such changes was therefore the exit point from the steady state and was denoted 'frame 0' or '0 hr'. To quantify differentiation, the videos were saved as RGB stacks for the Fiji TrackMate plugin (Tinevez et al., 2017). All pixel intensities were preserved as in the original data, without adjustments to brightness or contrast. A tracking function implemented in the TrackMate toolbox with a LoG (Laplacian of Gaussian) detector was applied to follow differentiating blood progenitors in a video. The raw intensity values of GFP and RFP of a cell at individual time points were exported to a spreadsheet (.xml format). The dsRed:GFP ratio in each cell at individual time points was then calculated from the raw dataset and plotted to visualize the curve shape. The dsRed:GFP ratio over time reflects how the two markers change relative to each other and how fast a blood progenitor loses its identity. Sigmoid or linear type differentiation was categorized based on the shape of the ratio curve (see Figure 5C-F). The terms 'sigmoid' and 'linear' were used for descriptive purposes in this study, not mechanistic ones, and are interchangeable with a more detailed description: a sigmoidal ratio curve shows that cells initially express dome-high eater-low, followed by slow and then fast phases of transition, while a linear ratio curve shows that cells initially express dome-high eater-high, followed by a transition at a consistent rate. To normalize the real-time fluorescent signals of dome-MESO-GFP and eater-dsRed so that signals could be compared across samples/videos, a modified version of a fluorescence normalization method for live fly guts was used (Martin et al., 2018). The original normalization method required modification since it was designed for a situation where one marker gradually changes over time while the other marker does not change over time.
In comparison, the current study on LGs dealt with a scenario where both markers (dome-MESO-GFP and eater-dsRed) gradually change over time, and the differences between the two markers at any time point must be preserved after normalization. The signals were normalized as follows. First, the RGB stack of a video was inputted into the Fiji TrackMate plugin to obtain the raw intensities of dome-MESO-GFP and eater-dsRed at individual time points. Second, the raw intensities were imported into MATLAB and normalized using the equations Norm.G = (G_t - min(G,R)) / (max(G,R) - min(G,R)) and Norm.R = (R_t - min(G,R)) / (max(G,R) - min(G,R)), where the difference between the fluorescence value at each time point (G_t and R_t, representing dome-MESO-GFP and eater-dsRed, respectively) and the minimum fluorescence value is divided by the difference between the maximum and minimum fluorescence values. The minimum and maximum fluorescence values were obtained across the two markers, as indicated by the 'min' and 'max' functions in the equations. With this approach, the patterns, trends, and relative differences between the two markers were all preserved, allowing comparisons across videos. Moreover, we confirmed that the normalization method preserved the trends of the markers during differentiation (Figure 7-figure supplement 2A-B) compared with the raw values before normalization. The custom-written MATLAB code used to perform the normalization was deposited in the Tanentzapf lab GitHub (Data and code availability). To quantify the rate of each type of differentiation, a linear regression fit was applied to the fast phase of a sigmoid differentiation curve and to the entire linear differentiation curve using a custom-written R script (Data and code availability; Tanentzapf lab GitHub). The slope of the fitted line was calculated as follows: slope = change in dsRed:GFP ratio / time. To analyse the spatial distribution of differentiation events in the LGs, a heat map was constructed (as described above in the Heat map construction section). The locations where blood progenitors differentiate inside a LG were recorded, and the information from multiple long-term videos was then summarized in a heat map to visualize hot and cold spots of differentiation events.
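The joint min-max normalization and slope calculation above are simple to express in code. The sketch below is a Python rendering of the stated equations (the study used custom MATLAB and R scripts); the traces are synthetic.

```python
# Sketch: joint two-channel min-max normalization and differentiation-rate
# slope, following the equations given in Materials and methods.
import numpy as np

def normalize_two_channels(g, r):
    """Min and max are taken across BOTH markers, so relative differences
    between dome-MESO-GFP (g) and eater-dsRed (r) are preserved."""
    g, r = np.asarray(g, float), np.asarray(r, float)
    lo = min(g.min(), r.min())
    hi = max(g.max(), r.max())
    return (g - lo) / (hi - lo), (r - lo) / (hi - lo)

def differentiation_rate(t, ratio):
    """Slope of the dsRed:GFP ratio over time (least-squares linear fit),
    applied over the phase where differentiation occurs."""
    slope, _intercept = np.polyfit(t, ratio, 1)
    return slope

t = np.arange(0, 6.0, 0.25)          # hours, hypothetical sampling
g = 1000 - 120 * t                   # GFP declining
r = 200 + 90 * t                     # dsRed rising
ng, nr = normalize_two_channels(g, r)
print(f"rate = {differentiation_rate(t, nr / ng):.3f} per hr")
```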
In vivo analysis of LGs
To perform in vivo analysis of LGs, a method of histo-cytometry (Stoltzfus et al., 2020) was applied to extract the fluorescent signals of dome-MESO-GFP and eater-dsRed and the positional information of individual cells from the LGs of wild-type control and E. coli infected larvae using the TrackMate plugin. Three main steps of histo-cytometry were performed: (1) Imaging: entire LGs of genotype dome-MESO-GFP; eater-dsRed were imaged on a FV1000 microscope with a step size of 1.5 μm using exactly equivalent laser settings across the wild-type control and infection groups. Importantly, the TrackMate-based automatic method used to perform histo-cytometry analysis works best on single sections with cells clearly separated from each other. To select slices unbiasedly across samples and groups, we took the slices located at 25%, 50%, and 75% of the total thickness (z-axis) of the LG. The imaged LG slices were saved as OIB files in Fluoview and inputted into Fiji as RGB stacks (.tiff) for the following analysis. (2) Segmentation: to segment individual nuclei in the imaged LGs, the automatic ROI selection tool implemented in the TrackMate plugin with a LoG detector was used. The blob diameter was set to 13 and the threshold to 2.5 in TrackMate across all LG images to reliably select all nuclei. Using this method, in total 2946 cells (LG#1: 795 cells; LG#2: 923 cells; LG#3: 1228 cells) were captured from the wild-type LGs and 2985 cells from the LGs following infection (LG#1: 944 cells; LG#2: 871 cells; LG#3: 1170 cells). From these cells, an Excel-based method (using the rand() function) was used to completely randomize their order, and 2500 cells were then taken at random from each of the two groups for use in Figure 7I-N (see Figure 7-source data 1 file). (3) Visualization: the fluorescent intensities of dome-MESO-GFP and eater-dsRed of individual nuclei were extracted and plotted as a scatter plot in R to visualize the distribution of cells based on the expression levels of the dome-MESO-GFP and eater-dsRed markers.

Oxidative stress measurement in ex vivo LGs
A gstD-GFP reporter line (Sykiotis and Bohmann, 2008) was used to measure oxidative stress in ex vivo cultured LGs over 13 hr. GstD-GFP is a sensor designed to detect ROS levels in live tissues/animals and is compatible with live imaging experiments. The GFP intensity from individual lobes of LGs was tracked in Fiji using the Polygon ROI Selection tool over 13 hr, and the mean grey value of gstD-GFP intensity was measured. The gstD-GFP fluorescence intensities obtained at individual time points were normalized to the total ROI or primary lobe area (μm2) and plotted in GraphPad Prism.

Larval bacterial infection
Ampicillin-resistant, GFP-expressing E. coli (kind gift from Dr. Christopher Loewen, University of British Columbia, Canada) was used in this study. For infection experiments, E. coli was grown overnight at 37 °C in LB medium (10 g Bacto tryptone, 5 g yeast extract, and 5 g NaCl per 1 L of LB medium). A larval oral infection protocol was applied with slight custom modifications (Khadilkar et al., 2017; Siva-Jothy et al., 2018). Larvae were collected and starved in a vial containing only 1% agar for 2 hr at room temperature. Post-starvation, the larvae were moved into vials containing either regular fly food with LB medium (as a control; 5 g fly food mixed with 1 mL LB; 8-10 larvae per vial) or regular fly food with E. coli culture (as an infection group; 5 g fly food mixed with 1 mL LB containing E. coli; 8-10 larvae per vial) for 6 hr at 25 °C. Larvae at 78 hr AEL were infected for 6 hr and dissected at 84 hr AEL for ex vivo live imaging. The infected larvae were first screened under a fluorescent stereomicroscope (Model: MAA-03/B, serial number: 06.07/07) to confirm that they had ingested a large amount of GFP-expressing E. coli (with clear GFP visible in the intestinal region). The larvae were then used for long-term imaging experiments or in vivo analysis.

Statistical methods
Statistics were performed using GraphPad Prism (ver. 6). p-values were determined using the statistical tests detailed in the figure legends and Source data files. The sample size of each analysis is indicated in the figure legends. No statistical method was performed to pre-determine sample size.
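The equal-size random subsampling in the Segmentation step above (performed in the study with Excel's rand() function) has a straightforward scripted equivalent; a minimal Python stand-in, with placeholder cell identifiers:

```python
# Sketch: randomize cell order and draw 2500 cells per group without
# replacement, mirroring the Excel-based randomization described above.
import random

def subsample(cells, n, seed=0):
    """Return n cells drawn without replacement in randomized order."""
    rng = random.Random(seed)
    return rng.sample(list(cells), n)

wt_cells = [f"wt_cell_{i}" for i in range(2946)]    # 2946 captured in wt LGs
inf_cells = [f"inf_cell_{i}" for i in range(2985)]  # 2985 after infection
wt_2500 = subsample(wt_cells, 2500)
inf_2500 = subsample(inf_cells, 2500, seed=1)
print(len(wt_2500), len(inf_2500))
```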
The regulator of calcineurin 1 increases adenine nucleotide translocator 1 and leads to mitochondrial dysfunctions

Abstract
The over-expression of regulator of calcineurin 1 isoform 1 (RCAN1.1) has been implicated in mitochondrial dysfunctions of Alzheimer's disease; however, the mechanism linking RCAN1.1 over-expression and the mitochondrial dysfunctions remains unknown. In this study, we use human neuroblastoma SH-SY5Y cells stably expressing RCAN1.1S and rat primary neurons infected with an RCAN1.1S expression lentivirus to study the association of RCAN1 with mitochondrial functions. Our study showed that the over-expression of RCAN1.1S remarkably up-regulates the expression of adenine nucleotide translocator 1 (ANT1) by stabilizing ANT1 mRNA. The increased ANT1 level leads to an accelerated ATP-ADP exchange rate, more Ca2+-induced mitochondrial permeability transition pore opening, increased cytochrome c release, and eventually cell apoptosis. Furthermore, knockdown of ANT1 expression brings these mitochondrial perturbations caused by RCAN1.1S back to normal. The effect of RCAN1.1S on ANT1 was independent of its inhibition of calcineurin. This study elucidates a novel function of RCAN1 in mitochondria and provides a molecular basis for the RCAN1.1S over-expression-induced mitochondrial dysfunctions and neuronal apoptosis.

Mitochondria are critical regulators of neuronal death, which is a hallmark of neurodegeneration, and mitochondrial dysfunctions have been implicated in aging and neurodegenerative diseases including Alzheimer's disease (AD) and Parkinson's disease (Lin and Beal 2006). Mitochondrial dysfunctions are among the earliest and most prominent features of vulnerable neurons in neurodegenerative diseases, including impaired mitochondrial respiration, increased reactive oxygen species generation, mitochondrial DNA damage, decreased mitochondrial mass, and abnormal mitochondrial dynamics (de la Monte et al. 2000; Zhu et al. 2006; Querfurth and LaFerla 2010; Swerdlow et al. 2010). Adenine nucleotide translocator 1 (ANT1), or the ADP/ATP translocator 1, is the most abundant protein in the inner mitochondrial membrane. It forms a homodimer, a gated channel through which ADP is brought into, and ATP out of, the mitochondrial matrix. In addition to its translocase activity, ANT has a regulatory role in mitochondrial permeability transition pore (mPTP) function and is involved in mitochondria-mediated apoptosis (Kokoszka et al. 2004; Sharer 2005; Dahout-Gonzalez et al. 2006). The mPTP, a nonspecific pore in the mitochondrial inner membrane, opens in response to the primary trigger of elevated matrix Ca2+, rendering the membrane permeable to any molecule of < 1.5 kDa (Halestrap et al. 2002). As a result of this pore opening, the mitochondrial electrochemical hydrogen ion gradient dissipates, the matrix is depleted of pyridine nucleotides, mitochondria swell because of the osmotic uptake of water, and cytochrome c is released into the cytosol, eventually leading to cell apoptosis (Soane et al. 2007). mPTP opening and energy crisis have been considered to play important roles in acute and chronic neurodegeneration. There are two conformations of ANT1, the matrix conformation (m-conformation) and the cytosol conformation (c-conformation), which can be induced and stabilized by the specific ligands bongkrekic acid (BKA) and carboxyatractyloside (CATR), respectively (Dahout-Gonzalez et al. 2006).
Regulation of the mPTP, as well as of the translocase activity of ANT1, likely rests on conformational grounds, indicated by the fact that BKA can inhibit mPTP opening while CATR can facilitate it, and both ligands can inhibit ATP-ADP exchange (Dahout-Gonzalez et al. 2006). Many studies have demonstrated that an appropriate ANT1 level is vital to mitochondrial function and cell survival (Sharer 2005; Kawamata et al. 2011; Liu and Chen 2013), implying that the expression of ANT1 must be tightly controlled to avoid any deleterious effects. Regulator of calcineurin 1 (RCAN1), also known as MCIP1, DSCR1, adapt78, and calcipressin, is located at 21q22.12 and consists of seven exons and six introns. Alternative splicing of the first four exons generates four isoforms of RCAN1 that differ only in their N-terminal region (Fuentes et al. 1997). Differential usage of two translational start codons (AUG) results in two isoforms of RCAN1.1, RCAN1.1L and RCAN1.1S, with RCAN1.1L being 55 amino acids longer (Wu and Song 2013). RCAN1.1L is the predominant form expressed in the brain. RCAN1.1S and RCAN1.4, differing by 28 amino acids at the N-terminus, show tissue-specific expression patterns through the usage of two alternative promoters (Sun et al. 2014b). The RCAN1.1 isoform is particularly abundant in fetal and adult brains. Previous data have shown that RCAN1.1 expression is elevated in the cortex of AD patients, and this over-expression may contribute to AD pathogenesis (Ermak et al. 2001; Ermak and Davies 2013). We recently reported that the degradation of RCAN1.1 is mediated by both chaperone-mediated autophagy and ubiquitin proteasome pathways (Liu et al. 2009); that RCAN1.1 is elevated in the brains of AD and DS patients and RCAN1 over-expression facilitates neuronal apoptosis through caspase 3 activation (Sun et al. 2011); that the transcription of RCAN1.4 can be activated by NF-κB (Zheng et al. 2014); and that RCAN1.4 over-expression exacerbates calcium overloading-induced neuronal apoptosis (Sun et al. 2014b). Our recent paper also showed that RCAN1.1S and RCAN1.4 inhibited NF-κB and suppressed lymphoma growth independently of their inhibition of calcineurin (Liu et al. 2015). Our data also showed that RCAN1 is located in the ER and promotes N-glycosylation via oligosaccharyltransferase. These studies suggest that RCAN1 is a multifunctional protein. The RCAN1.1-related mitochondrial dysfunctions include reduction of mitochondrial mass, decline of cellular ATP level, opening of the mPTP, and activation of the caspase signaling pathway (Chang and Min 2005; Sun et al. 2011, 2014a; Ermak et al. 2012), but the underlying molecular basis remains to be discovered. A report in Drosophila proposes that nebula (the Drosophila homolog of RCAN1) can interact with ANT1 to modulate mitochondrial function (Chang and Min 2005); however, whether RCAN1 interacts with ANT1 in mammalian systems remains elusive. Our study here showed that the over-expression of RCAN1.1S remarkably elevated ANT1 expression by stabilizing ANT1 mRNA. The increased ANT1 level led to abnormal mitochondrial functions, including an accelerated ATP-ADP exchange rate, more Ca2+-induced mPTP opening, increased Cyt c release, and eventually cell apoptosis in neuronal cell lines. Furthermore, knockdown of ANT1 expression by an shRNA vector brought these mitochondrial perturbations caused by RCAN1.1S back to normal. The study here demonstrated that RCAN1 impeded mitochondrial functions through ANT1.
Materials and methods
The experimental protocols were approved by the Animal Care and Protection Committee of Shandong University and the institutional Ethics Committees of Qilu Hospital, and were in compliance with the ARRIVE guidelines.

Cell culture
YD2 cells were generated by stable transfection of pRCAN1.1S-6myc into human neuroblastoma SH-SY5Y cells as previously described (Liu et al. 2009). Rat primary neurons were isolated from E18 pregnant rats and cultured as previously described (Sun et al. 2011). Pregnant rats were purchased from the experimental animal center of Shandong University. All cells were maintained at 37°C in an incubator containing 5% CO2.

Lentivirus construction and infection
The cDNAs of RCAN1.1S and ANT1 were PCR-amplified and cloned into the lentivirus vector pWPXL. The lentivirus expression vector was co-transfected with psPAX2 and pMD2.G into HEK293T cells for lentivirus production. Lentivirus was harvested from the culture media 48 h after transfection and precipitated with PEG8000. The titer of the lentivirus produced was about 10^7 pfu/mL. Rat primary neurons were infected at a multiplicity of infection (MOI) of 5 for 8 h at 37°C, followed by replacement of the infection medium with conditioned culture medium and incubation for 3 days. About 30-50% of neurons were transduced, as assessed by visualizing the expression of GFP (green fluorescent protein) fused with RCAN1.1S or ANT1.

Immunoblotting analysis
For immunoblotting analysis, cells or isolated mitochondria were lysed in radio-immunoprecipitation assay buffer (1% Triton X-100, 1% sodium deoxycholate, 4% sodium dodecyl sulfate, 0.15 M NaCl, 0.05 M Tris-HCl, pH 7.2) supplemented with protease inhibitors. The lysates were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and immunoblotting was performed as described previously (Liu et al. 2009). The primary antibodies used were mouse 9E10 mAb, M2 mAb, anti-ANT1 mAb, anti-superoxide dismutase 2 (SOD2) mAb, and anti-β-actin mAb. The endogenous RCAN1.1 antibody DCT3 (a rabbit polyclonal antibody against the last 20 aa of the RCAN1 C-terminus) was used to detect endogenous expression of RCAN1 (Sun et al. 2011). Detection and quantification were performed with the Li-Cor Odyssey imaging system and its software (Liu et al. 2009).

Mitochondrial isolation
All steps were carried out at 0-4°C; mitochondrial isolation from YD2 and SH-SY5Y cells was performed as described previously with minor modifications (Sun et al. 2011). The cell homogenate was centrifuged twice at 800 g for 5 min to remove the nuclei and remaining intact cells; the supernatant was centrifuged at 8000 g for 15 min to pellet crude mitochondria. The supernatant was transferred to a new tube, centrifuged once more to remove any remaining mitochondria, and collected as the cytosolic fraction. The crude mitochondria were layered over a 1.0/1.5 M discontinuous sucrose gradient containing protease and phosphatase inhibitors and centrifuged at 100 000 g (Beckman Optima MAX-XP ultracentrifuge, MLS-50 rotor; Beckman Coulter, Krefeld, Germany) for 1 h. The purified mitochondria were collected from the 1.0/1.5 M sucrose interface by pipetting. 9E10 mAb was used to detect exogenous myc-tagged RCAN1.1S in YD2 cells, and DCT3 was used to detect endogenous RCAN1.1 in SH-SY5Y cells. Several mitochondrial markers were used to quantify mitochondria: B-cell lymphoma-2 (BCL-2) for the outer membrane, Cyt c for the intermembrane space, SOD2 for the matrix, and Cyt c oxidase subunit IV (COX IV) for the inner membrane. In the mPTP-related assays, metabolically active mitochondria were isolated from cultured rat primary neurons according to a previously described method (Almeida and Medina 1998).
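For the lentiviral infections described earlier in this section, the inoculum volume follows directly from the cell number, the target MOI, and the ~10^7 pfu/mL titer. A minimal sketch (the cell number used here is hypothetical):

```python
# Sketch: volume of lentiviral stock needed to infect a culture at a given MOI.
def inoculum_volume_ml(n_cells, moi, titer_pfu_per_ml):
    """pfu needed = n_cells * MOI; divide by titer to get volume in mL."""
    return n_cells * moi / titer_pfu_per_ml

# e.g. a hypothetical well of 2e5 primary neurons at MOI 5 with a 1e7 pfu/mL stock:
print(inoculum_volume_ml(2e5, 5, 1e7), "mL")   # -> 0.1 mL
```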
In the mPTP-related assays, metabolically active mitochondria were isolated from cultured rat primary neurons according to the method previously described (Almeida and Medina 1998).

RT-PCR
RT-PCR was performed as previously described (Liu et al. 2009). The specific primers used to amplify the RCAN1.1S gene were RCAN1.1F (5'-CGACTGGAGCTTCATTGACT) and RCAN1.1R (5'-CCCAAGCTTTCCGCTGAGGTGGATCGGCGTGTA). The specific primers used to amplify a 125-bp fragment of the ANT1 gene were ANT1-202F (5'-CTCTCCTTCTGGAGGGGTAAC) and ANT1-327R (5'-GAACTGCTTATGCCGATCCAC). A 141-bp fragment of human β-actin, amplified with primers actin-F (5'-GACAGGATGCAGAAGGAGATTACT) and actin-R (5'-TGATCCACATCTGCTGGAAGGT), was used as the internal control. To verify that the read-out of the RT-PCR was linear, different amplification cycle numbers were tested for the RCAN1.1S, ANT1, and β-actin genes. Samples were analyzed on 1.5% agarose gels, and ImageJ software was used to quantify and analyze the data. Mitochondrial DNA (mtDNA) was determined by quantitative real-time PCR, as previously described (Bai and Wong 2005), to evaluate mitochondrial content. The RPPH1 gene was used as the nuclear-gene normalizer for the mtDNA content (TaqMan Copy Number Reference Assay; ABI, Waltham, MA, USA).

Degradation of ANT1 mRNA and protein
To measure ANT1 mRNA degradation, SH-SY5Y and YD2 cells were treated with 1 µg/mL actinomycin D (Act D) to inhibit de novo RNA synthesis for 0, 3, 6, 9, and 12 h. The ANT1 mRNA level was measured by RT-PCR: isolated RNA was treated with recombinant DNase I before reverse transcription to prevent genomic DNA contamination, a random primer (3801; Takara, Dalian, China) was used to synthesize the first strand of cDNA, and 18S rRNA was used as the internal control. The specific primers amplifying a 232-bp fragment of the 18S gene were 18srRNA-F1464 (5'-CAGCCACCCGAGATTGAGCA) and 18srRNA-R1696 (5'-TAGTAGCGACGGGCGGTGTG). The primers amplifying a 140-bp fragment of the Dynamin 1 (DNM1) gene were DNM1-F237 (5'-ATATGCCGAGTTCCTGCACTG) and DNM1-R376 (5'-AGTAGACGCGGAGGTTGATAG). To measure ANT1 protein degradation, SH-SY5Y cells were co-transfected with pcDNA3.1-RCAN1.1-6myc and p3×FLAG-CMV10-ANT1; 48 h after transfection, cells were treated with 100 µg/mL cycloheximide (CHX), which blocks the translocation step of translation and thereby inhibits protein synthesis, for 0, 12, 24, and 36 h; the ANT1 protein level was then measured by western blot and detected with the M2 mAb.

ATP-ADP exchange rate assay
The ATP-ADP exchange rate mediated solely by ANT was measured according to the method developed by Kawamata et al. (2010), which exploits the differential affinity of ADP and ATP for Mg2+. A total of 40 µg/mL digitonin was used to permeabilize the cells, and 2 mM ADP was added to start mitochondrial phosphorylation; the magnesium green fluorescence was recorded, calibrated, and converted to the [ATP]t appearing in the reaction medium using the standard binding equations listed in the published method (Chinopoulos et al. 2009; Kawamata et al. 2010). The fitted slope obtained by linear regression of this time course of [ATP]t reflects the ATP-ADP exchange rate mediated by the ANT. That the measured exchange was ANT-mediated was validated by sequentially adding 4 nM CATR to the reaction medium, which resulted in a complete halt of the ATP rise after three additions.
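Because the exchange rate is defined as the fitted slope of the [ATP]t time course, the calculation reduces to a linear regression. The following is a minimal Python sketch; the readings are invented for illustration (chosen to land near the ~10.5 nmol/s later reported for SH-SY5Y cells), not measured data.

import numpy as np

# Invented [ATP]t time course (nmol) appearing in the medium after ADP
# addition, as would be obtained from a calibrated magnesium green trace.
t_s = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)         # seconds
atp_nmol = np.array([0, 104, 211, 317, 422, 526, 632], dtype=float)

slope, intercept = np.polyfit(t_s, atp_nmol, 1)  # least-squares line
print(f"ATP-ADP exchange rate: {slope:.2f} nmol/s")  # ~10.5 nmol/s here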
Determination of mPTP opening
mPTP opening was determined using the calcein-CoCl2 bleaching assay, as previously described (Petronilli et al. 1999). Briefly, cultured cells (grown in 24-well plates) were washed in reaction buffer (buffer A: 140 mM NaCl, 5.0 mM KCl, 10 mM HEPES, 2.0 mM CaCl2, 1.0 mM MgCl2, and 10 mM glucose, pH 7.4) and pre-incubated in fresh buffer A containing 2 µM of the fluorescent dye calcein-AM and 1 mM CoCl2 at 37°C for 30 min, then washed three times with buffer A. The initial calcein fluorescence was recorded on a Varioskan Flash instrument (Thermo Scientific, Shanghai, China) at 25°C with an excitation wavelength of 494 nm and an emission wavelength of 517 nm. mPTP opening was induced by adding 500 µM CaCl2 together with 5 µM of the Ca2+ ionophore ionomycin, in the presence or absence of an ANT1 ligand, and the fluorescence was measured again. The ANT1 ligand added was either 1 µM carboxyatractyloside (CATR), which sensitizes the mPTP to Ca2+, or 5 µM BKA, which makes the mPTP insensitive to Ca2+. The protein concentration in each well was measured by the Bradford protein assay, and the fluorescence signals were normalized to total protein content. The percentage decrease from the initial fluorescence was interpreted as mPTP opening.

Swelling of energized mitochondria
The Ca2+-triggered mitochondrial swelling assay was performed as follows: isolated mitochondria (0.4 mg/mL) were incubated in KCl medium (125 mM KCl, 2 mM K2HPO4, 1 mM MgCl2, 20 mM HEPES, 5 mM glutamate, 5 mM malate, and 2 µM rotenone, pH 7.4) for 10 min in the presence or absence of an ANT1 ligand (1 µM CATR or 5 µM BKA). Swelling was triggered by the addition of Ca2+ (500 nmol/mg mitochondrial protein); the swelling caused by an influx of solutes across the inner membrane was followed by immediately and continuously recording the decrease in absorbance at 540 nm on the Varioskan Flash instrument (Thermo Scientific).

Ca2+ retention capacity of mitochondria
Mitochondrial Ca2+ retention capacity was determined under energized conditions. Isolated mitochondria (0.4 mg/mL) were suspended in 1 mL of KCl medium containing 0.5 µM Calcium Green-5N in the presence or absence of an ANT1 ligand (1 µM CATR or 5 µM BKA). Fluorescence changes were continuously measured at 25°C with an excitation wavelength of 506 nm and an emission wavelength of 531 nm. Aliquots (4 µL) of a 20 mM CaCl2 solution (20 mM CaCl2, 127 mM KCl, 1 mM MgCl2, 20 mM HEPES, 5 mM glutamate, 5 mM malate, pH 7.4) were added every 2 min, each introducing 200 nmol Ca2+/mg protein, until mPTP opening was indicated by a twofold increase above the baseline fluorescence. The total amount of Ca2+ added was interpreted as the Ca2+ retention capacity.

Detection of Cyt c release and TUNEL staining
For detection of Cyt c release, mitochondria and cytosol fractions were separated as described above. Western blot was used to detect Cyt c release from mitochondria to cytosol; SOD2 was used as the mitochondrial marker and GAPDH as the cytosolic marker. For TUNEL staining, cells were fixed in 4% paraformaldehyde for 40 min, permeabilized with 0.1% Triton X-100 for 10 min, and stained with 1 µg/mL DAPI (4',6-diamidino-2-phenylindole) (D9542; Sigma-Aldrich, Shanghai, China) at 25°C for 10 min. TUNEL staining was performed with the Roche In Situ Cell Death Detection Kit according to the manufacturer's instructions. Results were analyzed by fluorescence microscopy (Leica DMI4000B, Wetzlar, Germany).
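Two of the read-outs described in this section reduce to simple arithmetic: mPTP opening is the percentage loss of the protein-normalized calcein signal, and the Ca2+ retention capacity (CRC) is the cumulative Ca2+ added before the indicator fluorescence doubles. A minimal Python sketch with invented well readings (all numbers are illustrative, not data from these assays):

def per_mg(fluo, protein_mg):
    # Normalize fluorescence to total protein so wells with different
    # cell mass can be compared (protein from the Bradford assay).
    return fluo / protein_mg

def mptp_opening_pct(f_initial, f_final):
    # Percent decrease from the initial calcein signal = mPTP opening.
    return 100.0 * (f_initial - f_final) / f_initial

f0 = per_mg(12000, 0.05)   # invented initial reading, 0.05 mg protein
f1 = per_mg(7800, 0.05)    # invented reading after ionomycin + Ca2+
print(f"mPTP opening: {mptp_opening_pct(f0, f1):.1f}%")  # -> 35.0%

# CRC: pulses of 200 nmol Ca2+/mg were added every 2 min until the
# Calcium Green-5N signal doubled over baseline.
pulses_tolerated = 6   # invented count of pulses before mPTP opening
print(f"CRC: {pulses_tolerated * 200} nmol Ca2+/mg protein")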
Statistical analysis
All experiments were repeated at least three times; one representative image is shown in the figures. Quantifications were from three or more independent experiments. Data were analyzed by Student's t-test. Values represent means ± SE; p < 0.05 was considered statistically significant.

Data accessibility
The original unprocessed data are available in the supporting information online.

Materials
The materials used were as follows:

Results

RCAN1.1S increased ANT1 mRNA and protein levels
To reveal the mechanism of the RCAN1-ANT1 interaction, we examined the expression of ANT1 in YD2 cells and in SH-SY5Y cells transiently over-expressing RCAN1.1S. RT-PCR showed that the ANT1 mRNA level in YD2 cells was significantly increased, to 209.60 ± 6.94% of control cells (p < 0.01, Fig. 1a). ANT1 mRNA expression was also elevated, to 256.80 ± 17.45% of control, in SH-SY5Y cells transiently transfected with the RCAN1.1 expression plasmid (p < 0.01, Fig. 1b). Conversely, ANT1 mRNA was decreased to 51.79 ± 3.47% of control in cells with RCAN1 knocked down (p < 0.01, Fig. 1d); the RCAN1 knockdown itself was verified by western blot (Fig. 1c). To examine whether the increase in mRNA resulted in increased protein expression, western blot assays were performed and showed that both overall endogenous ANT1 (Fig. 1e) and mitochondrial ANT1 (Fig. 1f) were increased by RCAN1.1S over-expression. Endogenous ANT1 protein in total cell lysates was increased to 149.70 ± 14.80% in YD2 cells (p < 0.05, Fig. 1e, lane 2) and to 179.90 ± 14.14% of control in RCAN1.1S transiently transfected SH-SY5Y cells (p < 0.01, Fig. 1e, lane 4). Endogenous ANT1 protein in mitochondria was also increased, to 142.50 ± 3.82% in YD2 cells (p < 0.01, Fig. 1f, lane 2) and to 130.30 ± 5.80% in the RCAN1.1 transiently transfected SH-SY5Y cells (p < 0.01, Fig. 1f, lane 4). To confirm that the RCAN1.1S effect was specific for ANT1, we also measured COX IV, which is likewise located at the mitochondrial inner membrane, and BCL-2, which can interact with ANT1 (Brenner et al. 2000), in total cell lysates and mitochondrial fractions (Fig. 1e and f). Quantification showed that the levels of COX IV and BCL-2 were not altered by over-expressed RCAN1.1S (p > 0.05; quantitative data in supporting information).

Fig. 1. RCAN1.1S increased ANT1 protein levels. Endogenous ANT1 protein levels in total cell lysates (e) and mitochondria (f) were detected by anti-ANT1 mAb in YD2 cells and in pcDNA3.1-RCAN1.1S-6myc transiently transfected SH-SY5Y cells. Over-expressed RCAN1.1S protein in YD2 and transfected SH-SY5Y cells was confirmed with the 9E10 (anti-myc) antibody. β-actin was used as loading control for whole-cell lysates and superoxide dismutase 2 (SOD2) as loading control for mitochondrial fractions. Values represent mean ± SE; n = 4 (a-d), n = 5 (e and f). *p < 0.05 by Student's t-test. (g) Mitochondrial DNA (mtDNA) was measured by quantitative real-time PCR to evaluate mitochondrial content; the RPPH1 gene was used as the nuclear-gene normalizer (TaqMan Copy Number Reference Assay, ABI).
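The comparisons reported throughout the Results follow the analysis plan above: replicate values expressed as percent of control, summarized as mean ± SE, and compared by Student's t-test. A minimal Python sketch with invented replicates (not the paper's raw data):

import numpy as np
from scipy import stats

# Invented replicate quantifications (% of control), n = 4 as in Fig. 1a-d.
control = np.array([100.0, 98.5, 101.2, 100.3])
treated = np.array([209.6, 215.1, 203.8, 209.9])

t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test
print(f"treated: {treated.mean():.2f} +/- {stats.sem(treated):.2f} % of control")
print(f"p = {p_value:.2g}")  # p < 0.05 is taken as significant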
In addition, because mitochondrial content could itself account for changes in mitochondrial protein levels, we evaluated mitochondrial content by mtDNA copy number. The mtDNA content, determined as the normalized D-loop copy number, decreased significantly to 79.20 ± 11.16% in YD2 cells, to 48.48 ± 8.84% in RCAN1.1S transiently transfected SH-SY5Y cells, and to 75.80 ± 14.07% in ANT1 transiently transfected SH-SY5Y cells (Fig. 1g). These data clearly indicate that ANT1 expression was up-regulated by RCAN1.1S at both the mRNA and protein levels.

RCAN1.1S retarded the degradation rate of ANT1 mRNA
To further elucidate the molecular mechanism of ANT1 mRNA up-regulation by RCAN1, the degradation rate of ANT1 mRNA was measured by an actinomycin D (Act D) chase assay, with 18S rRNA as the internal control (Selvey et al. 2001). The data showed that ANT1 mRNA was more stable in YD2 cells than in SH-SY5Y cells (p < 0.05 from the 3 h time point onward, Fig. 2a and b). To confirm that the increased stability was specific to the ANT1 transcript rather than a cell-wide change in mRNA stability, we also followed the degradation of DNM1 mRNA, whose half-life was approximately 3 h in Act D-treated SH-SY5Y and YD2 cells. The stability of DNM1 mRNA was not altered by RCAN1.1 (p > 0.05; Fig. 2a and c), demonstrating that the increased stability was specific to the ANT1 transcript. A CHX chase assay was used to measure the degradation rate of ANT1 protein. The data showed no significant difference in ANT1 protein stability between RCAN1.1S over-expressing cells and control cells (p > 0.05, Fig. 2d and e), implying that RCAN1.1S over-expression did not interfere with ANT1 protein degradation. Together, these data support the conclusion that RCAN1 up-regulated ANT1 expression by stabilizing the ANT1 mRNA.

Fig. 2. Over-expression of RCAN1.1S stabilized adenine nucleotide translocator (ANT1) mRNA. (a) SH-SY5Y and YD2 cells were treated with 1 µg/mL actinomycin D (Act D) for 0, 3, 6, 9, and 12 h. RT-PCR was used to detect ANT1 mRNA and dynamin 1 (DNM1) mRNA; 18S rRNA was amplified as the internal control. (b) Quantification of ANT1 mRNA in (a). (c) Quantification of DNM1 mRNA in (a). Values represent mean ± SE; n = 5; levels at 0 h were set to 100%. (d) The ANT1 expression plasmid p3×FLAG-CMV10-ANT1 was transfected into SH-SY5Y and YD2 cells; 100 µg/mL cycloheximide (CHX) was added 48 h after transfection and cells were collected 0, 12, 24, and 36 h after treatment. Western blot was used to detect the ANT1 protein level with the M2 mAb, with β-actin as loading control. (e) Quantification of (d). Values represent mean ± SE; n = 5; levels at 0 h were set to 100%.

The effect of RCAN1.1 on ANT1 is independent of calcineurin
To test whether the effect of RCAN1.1 on ANT1 depends on calcineurin activity, ANT1 levels were examined in cells treated with FK506, a specific inhibitor of calcineurin. The FK506 effect was assessed both as chronic administration (24 and 48 h) and acutely (3 h), to distinguish between long-term and short-term effects, and both a higher dose of 10 µM (Fig. 3a and b) and a lower dose of 10 nM (Fig. 3c and d) were used. The results showed that FK506 did not alter the ANT1 level under either chronic or acute administration (p > 0.05; Fig. 3a-d), demonstrating that the effect of RCAN1.1 on ANT1 is independent of calcineurin. The C-terminal 141-197 aa was sufficient for inhibition of calcineurin.
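Returning to the Act D chase above: half-life figures such as the ~3 h quoted for DNM1 mRNA follow from fitting the remaining-mRNA time course to first-order decay, N(t) = 100·exp(-kt), with t1/2 = ln 2 / k. A minimal Python sketch with invented percentages chosen to reproduce a ~3 h half-life:

import numpy as np
from scipy.optimize import curve_fit

# Invented Act D chase data: timepoints (h) vs remaining mRNA (% of 0 h).
t_h = np.array([0, 3, 6, 9, 12], dtype=float)
mrna_pct = np.array([100, 51, 24, 13, 6], dtype=float)

def decay(t, k):
    return 100.0 * np.exp(-k * t)   # first-order decay from 100%

(k,), _ = curve_fit(decay, t_h, mrna_pct, p0=[0.2])
print(f"fitted half-life: {np.log(2) / k:.1f} h")  # ~3 h for these numbers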
To further verify that RCAN1's effect is independent of its effect on calcineurin, the two isoforms RCAN1.1S and RCAN1.4, as well as two truncations of RCAN1.1S (aa 1-103 and aa 141-197), were transfected into SH-SY5Y cells, and ANT1 mRNA and protein expression were assayed by RT-PCR and western blot. RCAN1.4, RCAN1.1S, and RCAN1.1S 1-103 all increased the ANT1 protein level (Fig. 3e and f) and mRNA level (Fig. 3g). The C-terminal 141-197 aa can inhibit calcineurin-NFAT signaling yet had no effect on ANT1 expression (lane 4 of Fig. 3e-h), indicating that inhibition of calcineurin is not sufficient to increase ANT1. The N-terminal 1-103 aa increased ANT1 expression (lane 5 of Fig. 3e-h) while having no effect on calcineurin, indicating that the N-terminal domain is sufficient for the effect on ANT1. The larger isoform RCAN1.1L also increased ANT1 expression (supplementary Fig. 1), implying that the effect of RCAN1 on ANT1 is common to RCAN1 variants. These experiments demonstrated that RCAN1's effect on ANT1 is independent of its effect on calcineurin.

RCAN1.1S accelerated the ATP-ADP exchange rate via ANT1
The fundamental function of ANT1 is exchange: it brings ADP into the mitochondrial matrix and carries ATP out to the cytosol (Dahout-Gonzalez et al. 2006). To find out whether ANT1 function was altered by RCAN1, we measured the ATP-ADP exchange rate in digitonin-permeabilized cells. The exchange rate was accelerated from 10.52 ± 0.56 nmol/s in SH-SY5Y cells to 12.79 ± 0.75 nmol/s in YD2 cells (p < 0.05, Fig. 4b, lane 1 vs. lane 2), and from 10.83 ± 1.04 nmol/s in control cells to 15.55 ± 1.87 nmol/s in RCAN1.1S transiently transfected SH-SY5Y cells (p < 0.05, Fig. 4b, lane 3 vs. lane 4); ANT1 over-expression increased the exchange rate from 10.43 ± 1.04 nmol/s in control cells to 14.57 ± 1.21 nmol/s (p < 0.05, Fig. 4b, lane 5 vs. lane 6). To further verify that ANT1 mediated the effect of RCAN1 on ATP-ADP exchange, we knocked down the elevated ANT1 expression in YD2 cells using an ANT1 shRNA vector. The exchange rate decreased from 13.14 ± 0.60 nmol/s to 9.44 ± 0.84 nmol/s (p < 0.01, Fig. 4c, black vs. gray), corresponding with the decreased ANT1 protein expression. These data suggest that RCAN1 affects the ATP/ADP exchange rate via its interaction with ANT1.

RCAN1.1S affected Ca2+-induced mPTP opening via ANT1
In addition to ATP-ADP exchange, ANT1 also plays a central role in modulating the sensitivity of the mPTP to Ca2+ (Kokoszka et al. 2004; Dahout-Gonzalez et al. 2006; Leung and Halestrap 2008). CATR stabilizes the c-conformation of ANT1 and sensitizes the mPTP to Ca2+, while BKA stabilizes the m-conformation and makes the mPTP insensitive to Ca2+ (Dahout-Gonzalez et al. 2006). To further examine whether RCAN1.1S over-expression affected mPTP opening through its up-regulation of ANT1, mPTP opening was measured as a decrease in calcein fluorescence. Although the basal level of calcein fluorescence showed no difference (Fig. 4d, lanes 1-6), under treatment with 5 µM of the Ca2+ ionophore ionomycin, which triggered mPTP opening, the calcein fluorescence was further decreased by RCAN1.1S or ANT1 over-expression (Fig. 4d, lanes 7-12). Treatment with BKA alone had no influence on the basal level of calcein
(Fig. 4d, lanes 13-18), while treatment with CATR alone decreased the calcein fluorescence by inducing mPTP opening, and this fluorescence was further decreased by RCAN1.1S or ANT1 over-expression (Fig. 4d, lanes 19-24). The effect of RCAN1.1S on mPTP opening was abolished by BKA (Fig. 4d, lanes 25-30) and intensified by CATR (Fig. 4d, lanes 31-36). Furthermore, knockdown of ANT1 with the shRNA vector in YD2 cells reversed the mPTP opening induced by RCAN1.1S (Fig. 4e, lanes 6, 12, and 18 compared with lanes 5, 11, and 17, respectively). These data demonstrate that RCAN1 affected mPTP opening through ANT1.

Fig. 4. (b) The calculated ATP-ADP exchange rates were the slopes of the regression lines of the data in (a). Values represent mean ± SE; n > 5. *p < 0.05 by Student's t-test. (c) Knockdown of ANT1 in YD2 cells brought the accelerated ATP-ADP exchange rate back to normal. Values represent mean ± SE; n = 14. *p < 0.01 by Student's t-test. Western blot confirmed the knockdown effect of pSuper-siANT1: RCAN1.1S protein was detected with the 9E10 (anti-myc) antibody, endogenous ANT1 with the anti-ANT1 antibody, and superoxide dismutase 2 (SOD2) was used as mitochondrial loading control. (d) Over-expression of RCAN1.1 and ANT1 resulted in more Ca2+-induced mPTP opening. mPTP opening was determined in SH-SY5Y, YD2, and SH-SY5Y cells transfected with RCAN1.1 and ANT1 expression vectors, and was indicated as a decrease in the initial calcein fluorescence. Cells were treated with 5 µM ionomycin, 5 µM bongkrekic acid (BKA), 1 µM carboxyatractyloside (CATR), 5 µM ionomycin + 5 µM BKA, or 5 µM ionomycin + 1 µM CATR, respectively. Values represent mean ± SE; n = 5. *p < 0.05 by Student's t-test. (e) Knockdown of ANT1 in YD2 cells reduced the Ca2+-induced mPTP opening; an increase in calcein fluorescence indicated reduced mPTP opening. Values represent mean ± SE; n = 6. *p < 0.05 by Student's t-test.

RCAN1.1S exacerbated Ca2+-induced mitochondrial swelling and compromised Ca2+ retention capacity via ANT1
mPTP opening leads to a series of consequences, including loss of Ca2+ retention capacity, massive swelling of mitochondria, rupture of the outer membrane, release of Cyt c or apoptosis-inducing factor, and eventually cell death (Halestrap et al. 2002; Schwarz et al. 2007; Tsujimoto and Shimizu 2007; Leung and Halestrap 2008). Mitochondrial swelling caused by an influx of solutes across the inner membrane was detected as a decrease in absorbance at 540 nm, and the Ca2+ retention capacity was read out as the total number of Ca2+ injection pulses tolerated before mPTP opening. The OD540 traces showed that YD2 cells had a larger degree of swelling than SH-SY5Y cells (Fig. 5a, curve 2 vs. curve 1, p < 0.05); this exacerbation of swelling was abolished by BKA and further amplified by CATR. Similar results were observed in rat primary neurons infected with RCAN1.1S-expressing lentivirus (Fig. 5b), and the difference was abolished by knockdown of ANT1 with the shRNA vector (Fig. 5c). In addition, RCAN1.1S expression reduced the Ca2+ retention capacity (Fig. 5d, p < 0.05), and this effect was likewise abolished by BKA and further amplified by CATR (Fig. 5e). Knockdown of ANT1 in YD2 cells reversed the effect on calcium retention capacity (Fig. 5g). Furthermore, similar results were obtained in metabolically active mitochondria from cultured primary rat neurons infected with lentivirus expressing RCAN1.1S and ANT1 (Fig. 5f).
These data demonstrate that RCAN1 affects mitochondrial swelling and calcium retention capacity via its interaction with ANT1.

Fig. 5. Over-expression of RCAN1.1S in YD2 cells compromised Ca2+ retention capacity. Extra-mitochondrial Ca2+ was measured fluorometrically with the calcium indicator Calcium Green-5N in the absence (d) and presence (e) of the ANT1 ligands CATR and BKA. Traces of Ca2+ retention by isolated mitochondria from YD2 (red line) and SH-SY5Y cells (blue line) are shown. The total amount of Ca2+ added until mitochondrial permeability transition pore (mPTP) opening, indicated by a twofold increase above the baseline fluorescence, was interpreted as the Ca2+ retention capacity. (f) Over-expression of RCAN1.1S and ANT1 compromised the calcium retention capacity of primary rat neurons; neurons were infected with lentivirus expressing RCAN1.1S and ANT1, respectively, and the y-axis indicates the total amount of Ca2+ added until mPTP opening. (g) Knockdown of ANT1 in YD2 cells abolished the effect of RCAN1.1S on Ca2+ retention capacity. Values represent mean ± SE; n = 4. *p < 0.05 by Student's t-test.

RCAN1.1S induced Cyt c release and cell apoptosis via ANT1
To further verify whether the mPTP opening induced by RCAN1 results in Cyt c release and cell apoptosis, we examined the translocation of Cyt c from mitochondria to cytosol using western blot. A decrease of Cyt c in mitochondria and an increase of Cyt c in cytosol were observed in SH-SY5Y cells stably or transiently over-expressing RCAN1.1S (Fig. 6a and b). ANT1 over-expression displayed a similar pattern of Cyt c release from mitochondria to cytosol (lanes 5 and 6 of Fig. 6b), while knockdown of ANT1 using the shRNA vector abolished the translocation of Cyt c induced by RCAN1.1S (Fig. 6c). To further investigate the outcome of Cyt c release, TUNEL staining was used to monitor cell apoptosis. Consistent with our previous reports (Sun et al. 2011), the TUNEL assay showed more apoptosis in cells over-expressing RCAN1.1S (lanes 1-4 of Fig. 6e). Similar results were obtained with ANT1 over-expression (lanes 5 and 6 of Fig. 6e), and ANT1 knockdown protected cells from RCAN1.1S-induced apoptosis (Fig. 6f). These data suggest that RCAN1 induced Cyt c release and neuronal cell apoptosis through mPTP opening via its interaction with ANT1.

Fig. 6. RCAN1.1S and ANT1 increased the release of Cyt c from mitochondria to cytosol. (a) Mitochondria and cytosol fractions were isolated from SH-SY5Y and YD2 cells (top blots), and from SH-SY5Y cells transiently transfected with RCAN1.1S (middle blots) and ANT1 (bottom blots). The lysates were separated on a 16% Tris-tricine sodium dodecyl sulfate-polyacrylamide gel. Cyt c was detected with the anti-Cyt c antibody; anti-superoxide dismutase 2 (SOD2) mAb was used to detect SOD2 in mitochondria. (b) Ratio of Cyt c in cytosol to mitochondria in (a); Cyt c was normalized to GAPDH in the cytosol fraction and to SOD2 in the mitochondria fraction. Values represent mean ± SE; n = 3. *p < 0.05 by Student's t-test. (c) Knockdown of ANT1 in YD2 cells reduced the release of Cyt c. Values represent mean ± SE; n = 3. *p < 0.01 by Student's t-test. (d) Over-expression of RCAN1.1S in YD2 cells induced cell apoptosis; apoptotic cells were measured by TUNEL staining, with nuclei counter-stained with DAPI. Apoptotic cells are indicated by a magenta color, corresponding to the overlap of red TUNEL and blue DAPI staining; results were analyzed with a Leica fluorescence microscope. Scale bars: 100 µm. (e) Relative apoptosis ratio of SH-SY5Y, YD2, and SH-SY5Y cells transiently transfected with RCAN1.1S and ANT1 expression vectors. Values represent mean ± SE; n = 4. *p < 0.05 by Student's t-test. (f) Knockdown of ANT1 in YD2 cells reduced apoptosis. Scale bars: 100 µm. Values represent mean ± SE; n = 6. *p < 0.05 by Student's t-test.

Discussion
Our study showed that RCAN1.1S elevated ANT1 expression, which resulted in mitochondrial dysfunctions including an altered ATP/ADP exchange rate, mPTP opening, mitochondrial swelling, and Cyt c release. Furthermore, knockdown of ANT1 in cells over-expressing RCAN1.1 brought these perturbations back to normal. The study suggests that ANT1 and the attendant mitochondrial perturbations contributed to the cell apoptosis induced by RCAN1 over-expression. Our previous study showed that RCAN1 over-expression in primary neurons induced neuronal apoptosis mediated by Cyt c release and activation of caspase 9 and caspase 3 (Sun et al. 2011). The present study provides the molecular mechanism: RCAN1 interacts with ANT1 in mitochondria and facilitates mPTP opening, resulting in Cyt c release and apoptosis.

RCAN1 can physically interact with calcineurin subunit A through its C-terminal aa 141-197 and inhibit calcineurin-NFAT signaling. The effect of RCAN1.1S on ANT1 is independent of its effect on calcineurin-NFAT, since ANT1 did not respond to the calcineurin inhibitor FK506, and the RCAN1 C-terminus that inhibits calcineurin-NFAT signaling did not affect ANT1 expression. Furthermore, we identified aa 1-103 of RCAN1.1S as responsible for the effect on ANT1 in mitochondria. The RCAN1.4 isoform has an effect on ANT1 similar to that of RCAN1.1S, and we also have data showing that the RCAN1.1L isoform behaves likewise. Our data therefore suggest that the effect of RCAN1 on ANT1 in mitochondria is a common function of RCAN1 isoforms and is independent of its inhibition of calcineurin.

The abundance of an mRNA in the cell is a function not only of its synthesis, processing, and nuclear export rates but also of its degradation rate in the cytoplasm. mRNA degradation rates often change in response to stimuli: RNA-binding proteins or non-coding RNAs bind to cis-acting elements in the mRNA and affect its degradation rate through their ability to recruit or exclude the mRNA degradation machinery. Aa 28-103 of RCAN1.1S, a domain common to the isoforms RCAN1.1S, RCAN1.4, and RCAN1.1L, is predicted to be an RNA-binding domain by the RCSB PDB database. It would be of great interest to examine in the future whether this putative RNA-binding activity of RCAN1 contributes to the stability of ANT1 mRNA.

ANT1 missense mutations have been found in autosomal dominant progressive external ophthalmoplegia with mitochondrial DNA deletions-2 (Kaukonen et al. 2000). Sengers syndrome and Sengers-like syndrome are also associated with severe depletion of ANT1 protein and absence of ANT1 function, although no ANT1 gene abnormality has been identified (Jordens et al. 2002; Morava et al. 2004). It would be interesting to investigate whether RCAN1 plays a role in regulating ANT1 protein expression in these diseases.
2018-04-03T00:34:02.832Z
2016-12-20T00:00:00.000
{ "year": 2016, "sha1": "10c6a48867f11b63abfe76271e10808c5487764a", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jnc.13900", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "10c6a48867f11b63abfe76271e10808c5487764a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
151524712
pes2o/s2orc
v3-fos-license
Development Versus Coastal Protection: The Gold Coast Case Study (Australia)

The Gold Coast in Australia is one of those coastal places that developed by taking advantage of its environmental assets, such as direct access to the sea, a white sandy shoreline, an extensive and naturally protected broadwater, and several large accessible rivers. While many other coastal cities relied on port facilities to develop commercial and naval activities, the City of Gold Coast emerged and grew as a tourism destination. Largely because of this, the pattern of settlement and subsequent development of the city differs from most traditional Australian settlement and development patterns. Today, the Gold Coast is one of the most famous tourist cities in Australia and accommodates more than ten million visitors annually. In the wider Australian context, 85% of the population lives within 50 km of the beach, evidencing the lifestyle preferences of many Australians. Given this preoccupation with the coast, one might expect Australia to be at the forefront of coastal tourism development and coastal protection. There is, however, no overriding jurisdiction covering planning law enforcement in maritime areas, and this situation has led to many social and environmental conflicts. The City of Gold Coast is a case in point, and no more so than currently (2017), with proposals to build a cruise terminal and/or a casino, and high-rise residential towers, on its protected coastal strip (the Spit). This paper demonstrates how the evolution and resolution of development conflicts on the Spit (Gold Coast) are symptomatic of the evolution of place values and of national coastal management, and how this informs a shift towards coastal protection.

Introduction

In recent years the body of literature on the City of Gold Coast, Australia, has grown substantially. This growth has been driven primarily by the city's features and urban characteristics (Bosman et al., 2016; Hundloe et al., 2015; Burton, 2012; Longhurst, 1993), the impacts of its tourism activities (Scott et al., 2016; Dredge, 2011) and its governance structures (Dredge & Jamal, 2013; Burton, 2009). Besides, largely because of the city's beach environment, much of the literature has focused on coastal ecosystems and on the impact of human activity on both land and sea. Yet for a coastal city characterised by extensive residential canal estates (400 kilometres of canals, ten times more than in Venice), there is limited scholarship investigating local coastal management activities within the dominating and overarching context of tourism development. This paper goes some way to address this gap. Although some attempts at better coastal policy and management were made here and there (e.g.
the New South Wales Coastal Protection Act 1979), coastal management in Australia has not been a priority for any government at any level: national/commonwealth, state/territory or local. For an island country where 85% of the population lives within 50 km of the coast (ABS, 2015), one might expect Australia to be at the forefront of coastal planning, management and protection. Historically, however, the Australian Constitution decreed that the planning and management of crown land was the responsibility of state and territory governments (Wescott, 2008). With no overriding jurisdiction covering the enforcement of planning law in coastal areas, many social, environmental and development conflicts arose. It was not until the late 1990s that noticeable changes started to occur. For instance, in 1999 the Environment Protection and Biodiversity Conservation Act introduced an important change, giving the Australian Government significant powers to influence coastal environmental policy and planning (George, 2009). Since then, coastal management in Australia has been undergoing a transformation that reflects broader governance shifts, as well as rising awareness of global issues such as climate change and pollution. Despite this shift in the national government's approach to coastal management, conflicts over the management and development of coastal areas in Australia still regularly make the headlines.

The notion of conflict has been well documented since Marx (1959) first wrote about it in 1844 (see, for example, Kriesberg, 1982; Austin et al., 2011). Clark (1995) explains the intensity of conflicts in coastal management by noting that "its process operates at the interface between land and water, between private and public stakeholders, as well as between private (or quasi-private) property-based operations in shorelands and public (common) property-based activities in the tidelands and coastal waters". Others emphasise that conflicts frequently emerge as a result of change, as meanings, values and attachments to places alter (Mitchell, 2011). For instance, Dekker et al. (1992) recognise that conflict often arises from the differing interests of the 'new', pro-development, growth-oriented players or stakeholders and the 'old' players or local communities, who value the urban environment in its current and historic context and who seek to preserve these characteristics.

As the landscapes of the City of Gold Coast have been subject to continual reinvention, change and transformation (Wise & Breen, 2004; Wise, 2006; Griffin, 2006), patterns of conflict, change and continual re-adjustment have become ingrained in the lived experience of the city and its development. This article shows how the evolution and resolution of development conflicts on the city's protected coastal strip (the Spit) are symptomatic of the evolution of place values, and how this informs a shift towards better coastal protection.
The Gold Coast: context and value of place

Like all cities in Australia, the City of Gold Coast developed with the European colonisation of the country and successive waves of migration. Unlike other Australian cities, however, it emerged and grew as a tourist destination, booming in the 1960s and continuing to grow rapidly; today the city welcomes more than 12 million visitors annually (ABS, 2015). The city is renowned for its natural environment, with 57 kilometres of coastal strip, pristine beaches and unique hinterland landscapes featuring several national parks. The City of Gold Coast is also dissected by numerous rivers and creeks that have largely been reconfigured and developed into prime real estate around artificial canals (Figure 1). Although there is only one marina owned by a yacht club (Southport Yacht Club), there are about twenty other privately owned marinas, slipways and boatyards within the jurisdiction of this unique city, which offers proximity to two major airports, tourism attractions, theme parks and a unique hinterland (Tenefrancia, 2016). Not surprisingly, the Gold Coast coastline has been significantly impacted by rapid urbanisation (Figure 2), specifically during the post-war period (the 'glorious Fifties'), continuing into the 1960s and again in the early 1990s. During this time, the Gold Coast transformed from a small resort town into an international tourist city (Dedekorkut-Howes & Bosman, 2015). This transformation, combined with an extraordinary increase in population (8,400 inhabitants in 1947; almost 70,000 in 1991; and over 555,000 in 2017; ABS, 2017), resulted in urban densification of the coastline, which became the hub for a range of services, tourist attractions and housing. There was little or no planning legislation in place to protect the coastline: planning legislation in the City of Gold Coast has historically favoured development over environmental conservation and heritage preservation (Bosman et al., 2016). Moreover, with a population that can triple in size over the holiday season in selected precincts, the city has historically been challenged to find a balance between financial interests, community cohesion and identity. The reputation of the city "as a symbol of excess, extravagance, tackiness, and placelessness" (Weaver and Lawton, 2004), together with the pro-development attitude of the state government and the abundance of entrepreneurial initiatives (Dedekorkut-Howes and Bosman, 2015), raises many questions about city governance (Dredge and Bosman, 2011; Wise, 2006) and about its planning strategies and instruments (Dredge and Jamal, 2013; Griffin, 2006). The image of the City of Gold Coast also poses questions regarding the value of place. The process of constructing place meanings, values and attachments is the result of a multitude of influences and factors (Dovey, 1999; Creswell, 2004; Massey, 1994; Carter, Dyer and Sharma, 2007; Vanclay, Higgins and Blackshaw, 2008). Place meanings and values emerge out of everyday activities and are produced through and by global and societal influences. Place is also read and understood as a physical site in relation to both built and natural environments, as well as through written, verbal, visual and non-verbal media and marketing. Language, and in particular advertising, is a key constructor of place, especially with regard to tourist places. For tourist areas such as the City of Gold Coast, place is not simply a
location: it is a culmination of social processes along with tourist perceptions, or an "intersection of various global flows, not just of money or capital, but of visitors" (Urry, 1995).

Donning the social constructionist goggles allows us to observe the built and natural landscape as a social-spatial framework within which people from different cultural, social and economic groups interact and create a shared sense of place (Greider and Garkovich, 1994; Mangun et al., 2009). The feeling of attachment that is produced from knowing a place comes from living that place. Lynda Schneekloth and Robert Shibley (2000) suggest that "[p]lace making is the way all of us as human beings transform the places in which we find ourselves into places in which we live". In doing this, however, different people, or different groups of people, often come to value places in different ways (Cheng et al., 2003). Previous research has identified the importance of understanding the way in which people interact with the landscape and how they develop place-based values (Jorgensen and Stedman, 2001; Cheng et al., 2003; Yung et al., 2003).

The failure, by planners and urban designers, to take into account local everyday meanings and values can result in the alienation of residents "from each other and from their own place" (Cartier and Lew, 2005). The result is a 'risky' place that holds little meaning for local people and fails to capture and hold the interest of tourists. Essentially the place becomes vulnerable as local everyday activity nodes move elsewhere and tourists do not return. Gordon Holden (2011) writes: "While having a strong 'sense of place' may be seen as a lower priority than safe drinking water or sewerage systems for the health of a city, it is widely accepted that a holistic approach to city planning includes encouraging a recognisable 'sense of place'. 'Sense of place' strengthening is [a] key objective for contemporary planning strategies in Australia."

One challenge for planners is to find the balance between fostering new development for a rapidly growing population and preserving the heritage and character of the existing urban realm. For many coastal cities such as the City of Gold Coast, the challenge is compounded, as activities in the coastal zone (land and water) contribute significantly to creating a sense of place. Place values feed into city branding ("come to surf at Kirra", "whale season!", etc.). The coastline often becomes synonymous with the identity of the city and a key ingredient of its growth and prosperity, yet it is also a highly contentious place where numerous conflicts are rife.
What is now certain is that the constantly evolving urban landscape of the Gold Coast has come about through a pattern of conflict, change and adjustment to a new 'norm'. This recurring cycle is initiated by the arrival of new players (stakeholders) in the development arena. New players invariably bring with them new ideas, concepts, beliefs and place values. Conflict then potentially occurs as a result of the difference in place values between the 'new' and 'old' players/community members. This pattern is immersed within the history of Southport, as discussed below, and of the Gold Coast as a whole (see for example Whelan, 2006). The cycle can be broken up into five, often difficult to define, phases (see Figure 3).

The first major conflict in the now City of Gold Coast followed the arrival of European settlement in the region leading up to the mid-1820s. During this time the 'old' players were the Aboriginal groups of the area, collectively known as the Yugambeh people, whose place values revolved around the land being sacred rather than a resource to be exploited. The 'new' players, the European settlers, saw the region as one of plentiful resources and good farming potential. This resulted in the region's most horrific and infamous development conflict, a conflict that was mirrored in nearly all regions across Australia throughout the 18th and 19th centuries. Much of the literature on the history of Aboriginal-European conflict in Australia is written from a Euro-centric perspective (Anderson, 1983; Best, 1994). For example, Taylor (1967) frames the conflict in terms of Aborigines seeking access to the newcomers' resources, to "share in the superabundance of goods and stock that had suddenly descended upon them". Best (1994) offers a somewhat less Euro-centric perspective, stating "it could be argued that Aborigines were fighting to save their economic resources, that is, the water-holes, demanding that the land and the people be respected". No matter which perspective is more accurate, the fact remains that this conflict of interests resulted in Aborigines dying in large numbers, some shot by Native Police, some poisoned by settlers (Moore, 1990). As with most serious development conflicts, this one remains unresolved, although it has taken on a very different form, moving from physical altercations into the political realm. Best recognises that the Yugambeh people, the traditional owners of much of South East Queensland, "continue to fight a battle both social and environmental, to ensure that their cultural heritage is respected and not exploited" (Best, 1994).
One place that epitomises the challenges regarding development conflicts and place value in the City of Gold Coast is the Southport Spit. The Spit is located at the northern end of the city (see Figure 1), across the Broadwater from the early (1880s) settlement of Southport, and is the result of the reconfiguration of the sand dune at the mouth of the Nerang River in the late 1800s following a series of storms (see GCCM, 2006). It is today one of the last significant undeveloped public green spaces in the city. The Southport Spit is also an important place within the rapidly changing landscapes of the City of Gold Coast, since it is "land that constitutes the last genuine ocean-side parcel of undeveloped real estate on the Gold Coast" (Condon, 2006), and it has significant social and cultural meanings and attachments for many Gold Coast residents (see SOSA, a; Condon, 2003; Lazarow and Tomlinson, 2009). Yet the Spit is also a well-known and targeted place of conflict, with pro- and anti-development stakeholders vying for opposing outcomes and often running debates parallel to the national agenda. This is the focus of the next sections.

Local freewheeling development and the national rise of environmental focus

Despite the multiple community values attached to the Southport Spit, it has nonetheless been dogged by development proposals since the early 1960s. This is not surprising within the Australian national context, as the federal government has historically left coastal zones to local government authorities to care for, manage and maintain. On the Southport Spit, one of the first to object to development on this prime beachfront dune was the local National Party Member of Parliament at the time, Doug Jennings. Jennings's last fight to save the Spit was instigated in 1979, when the Queensland National/Liberal State Government, under the Premiership of Sir Joh Bjelke-Petersen (see Wear, 2002; Whitton, 1989), established the Gold Coast Waterways Authority to address tidal inundation, the impacts of storm surges in the Broadwater, and the erosion of the Spit. As a result, by the 1980s the Broadwater and the Spit were 'secured' by the construction of groynes, channel dredging and a sand bypass system. The Waterways Authority was frequently involved in controversy over commercial development rights on public land in the city (Condon, 2006). In one case a prominent Board member obtained 64 hectares on the western side of the Spit for tourism urbanisation (now the theme park Sea World). Other tourism-related developments on the Spit were also approved during this time and were subsequently built, renovated and extended: an exclusive shopping precinct, a commercial fishing wharf (which now also accommodates super-yacht berths), an exclusive resort complex, and an international hotel and apartment complex (Figure 4). Development proposals that did not get off the ground included an 'amusement oasis', a mini-city comprising 8,000 permanent residents, and a golf course (Condon, 2006).

The Gold Coast City Council released the Gold Coast Harbour Study Issues Paper for public comment in 1998. The intent of this paper was to produce an "integrated and coordinated land use and management plan for the Gold Coast ...
Broadwater" (Whelan, 2006).The Issue Paper and public consultation associated with it was essentially about the making of places; viable places that were valued by 'new' and 'old' players alike.The outcome of the community consultation process however produced instead a "strong picture of people's dissatisfactions" (Whelan, 2006).This was partly because, as Urry (1995) One thing that did emerge from the 1998 Gold Coast Harbour Study was that the Gold Coast City Council agreed that no development (private or commercial) would occur on the remnant of public land at the northern end of the Southport Spit and that the open space character of the area would be retained and enhanced (Gold Coast City Council, 2003). 18 Notwithstanding, the Gold Coast City Council's planning regulations, nor the lengths to which previous National Party Government officials had defended the Spit against development, nor the fact that the Government had specifically set up the Gold Coast Harbours Authority as a local approach to the management of the Broadwater and Spit environs, on 15 September 2005 the Queensland Labor Government announced its intention of developing an international cruise ship terminal and related services on this valued and valuable piece of public open space.In December 2005 the Queensland State Government created a Gold Coast Marine Development Project Board to act as the proponent for the Spit development.The Board was set up to advise the Premier and the Coordinator General to undertake tasks as required by the Government.In effect the State Government created its own proponent for the project, a proponent that was also to advise the Government.All decisions taken by the Government were to be, and in fact were, based upon the advice of the Board.To heighten this inbred decision making process, the State Government called for expressions of interest from developers at the same time as it commissioned an EIS for the site (Bligh, 2005).The supposition being that the advice from the Board would be in favour of development.In addition the Government sought direct control over the proposal, feasibility and development of the project.In order to bypass local Government planning restrictions (and we argue the views and input of local communities) the State sought absolute control over the planning and development processes by declaring the project a 'Significant Development'.This declaration triggered State legislation that called for an Environmental Impact Study (EIS) which meant the Government had direct control over the way the EIS was developed, the criteria by which it was to be assessed and it enabled other legislation to be bypassed if necessary.Importantly, by declaring the project as a 'Significant Development' the local planning Authority, The Gold Coast City Council, and significantly local communities ('old' players), were positioned as observers with no authority to input into the project other than decreed and regulated by the State Government ('new' player).This situation also reflected the wider commonwealth disengagement from state planning concerns and the failure of all levels of government to implement policy (Vince, 2008;Wescott, 2002).This was despite the existing framework that was established under the "National Cooperative Approach to Integrated Coastal Zone Management (ICZM)" and the Australia's Oceans Policy.One of the major contributing factors in this decision-making process was the lack of national approach (TFG, 2002): the Gold Coast Spit clearly demonstrated the 
conflict embedded in coastal-zone development, and the locally centred approach taken without consideration of any wider coastal management context. Yet things were about to change.

Local resistance and the shift to local environmental awareness

To provide effective opposition to the state government and its plans for the Southport Spit, a consortium of community groups joined to form the Save Our Spit Alliance (SOSA). This energetic and dedicated group organised a number of rallies, delegations and petitions over the next two years (Figure 5) and maintained (and continues to maintain) an evocative and resourceful website. By July 2006 (just ten months into the feasibility studies) SOSA had collected over 20,000 signatures on its petition to the state government to stop development on the Spit (SOSA, d). SOSA's case rested on five factors:

1. The economic benefits to the community, the City and the state were marginal, because SOSA research indicated that cruise liner passengers spent more money on board than they did on shore.
2. The loss of public open space in the face of rapid population and urban growth. This was supported by Methven Sparkes, President of the Nerang Community Association, who said (SOSA, b): "On any weekend the Spit is filled with thousands of picnickers, walkers, runners, cyclists, divers and snorkelers, fishers, surfers, dog walkers, and exercise enthusiasts, all of whom value the opportunity to access such a beautiful area so close to the CBD."
3. Safety issues relating to the use of the seaway.
4. The negative impact the development would have on existing tourism operators on the Spit, namely the dive, surfing, fishing, charter boat and kayaking industries.
5. Environmental impacts, including dredging, erosion, flooding, and air and water pollutants from the cruise liners.

SOSA mounted its campaign on these five factors. A few months after a well-attended and enthusiastic protest, and in response to a continued barrage of criticism of the development proposal (see SOSA, a), the then Deputy Premier, Anna Bligh, herself a Gold Coaster by childhood, summed up the situation: "it would be great if [the Spit] was less environmentally sensitive, if people had less emotional attachment to it - that would make it a lot easier" (Courier Mail, 2006). We suggest that in this statement the Deputy Premier was casting local place attachment as an obstacle in the development process. The government perceived the Spit to be, and valued the site as, a space of economic opportunity. A Member of Parliament at the time, in support of the government's Spit development proposal, argued that "The Beattie Government has a duty to provide, amongst other things, economic stability and employment opportunities for the people of this State ..." (Smith, 2006).
Nevertheless, on Friday 3 August 2007 (just over two years after the first public announcement) Premier Peter Beattie proclaimed that the cruise ship terminal on the Spit would not proceed. The Premier did not directly acknowledge that this decision reflected the views of over 22,000 local residents (SOSA, d). Instead, the argument put forward by the government was that the decision not to proceed was based on the cost to taxpayers: an economic rationale, not an environmental, cultural, social or community one. It is important to note, however, that the decision not to proceed was taken at the height of a state government election campaign. At the time a Gold Coast Channel Nine TV news program (SOSA, c) conducted a poll asking: 'Will the Beattie Government lose your vote over its push for a cruise ship terminal at The Spit?'; the published result showed that 86.4 percent of respondents said yes.

In order to determine what place values, meanings and attachments users of the Spit held, we carried out 88 intercept surveys on the Spit between February and September 2007. The surveys were conducted at various times of the day and on different days of the week throughout the survey period. While we acknowledge that survey data are problematic (Hay, 2000; Law, 2006), the data collected nonetheless offer some important insights into the meanings and value the Spit had for many local users. The majority of survey respondents were employed (non-professional) Australian males aged between 25 and 54, which corresponds with the major activities of surfing and diving. Analysis of the survey data indicated that 73 percent of respondents had been visiting the Spit for three or more years, with 28 percent visiting for over 16 years. Not surprisingly, most respondents indicated that they spent over three hours at the Spit at any one time. This corresponds to the activities of surfing, diving, fishing and dog walking: the four primary everyday activities that take place on the Spit. It is important to note that the Spit is one of three (and is the primary) off-leash dog exercise beaches; given the population of the City and the number of dog owners, these beaches are highly valued by their users. Importantly, the survey data indicated that the Spit environs were perceived as a 'safe' and valuable community asset. Memories and frequency of visits contributed to the high value attributed to these two indicators.
It is of interest to note that 50 percent of the users surveyed were not aware of the development proposals for the Spit, and only 15 percent were or had been involved in community action against the proposed development that threatened the Spit environs. This suggests that the people who signed the petition and attended the Save Our Spit rally were not necessarily the ones who visited the Spit on an everyday, regular basis. Those who did use the Spit regularly, as the surveys testify, perhaps took the Spit for granted or felt disempowered. One thing that did emerge from the data was that all respondents who indicated that they were unaware of the development proposals also indicated that they were against development on the Spit, though not necessarily opposed to the upgrading of facilities. Indirectly, taxpayers (who were also petition signers) changed the course of history, place-making and tourism futures on the Gold Coast. Interestingly, the Gold Coast circumstances also paralleled the emergence of an environmental prerogative in the interpretation and implementation of the Commonwealth Oceans Policy (Vince, 2008). This trajectory did not, at the time, succeed in obtaining jurisdictional powers that would require State/Commonwealth cooperation in resolving complex, place-based conflicts. It did, however, mark a change in decision-making processes, as did the (temporary) securing of the Spit as a free, undeveloped, public open space.

And life goes on...

These 'new' players, not surprisingly, reinforce the cycle of development conflict on the Gold Coast. The intent of the 'new' players is to disrupt the existing 'norms' as understood by the 'old' players. This 'norm' has been established over time and is embedded in the value of, and the attachment people have to, the place. The tensions and differences between the two groups of players seem irreconcilable (see Table 1).

The Spit continues to ride a wave of development abuse. On 11 February 2010 the local Federal Member of Parliament sent out an email survey asking his constituents if they wanted "a cruise ship terminal on the Spit, the Broadwater or neither?" This email followed in the footsteps of a previous announcement by the state government, in mid-2008, of its (renewed) intention of developing a cruise ship terminal in the vicinity of the Southport Spit. In addition, other smaller private and commercial development proposals continued to be lodged for this section of prime, undeveloped, somewhat raw, public open space. The most significant of these was yet another cruise ship terminal proposal in mid-2012, this time emanating from the City of Gold Coast Mayor, Tom Tate (Figure 6). Mayor Tate, backed by the newly elected Newman Liberal National Party State Government, put out a call for expressions of interest to develop a range of tourist infrastructure including a casino, hotels and a cruise ship terminal on the Spit (see SOSA, a).

In an effort to save the Spit from major development, a second rally took place in November 2012, and studies indicated that the Spit environs had been "identified as a key environmental asset worth more than $611 million for the city" (Weston, 2013). By June 2013 the development project was in doubt, primarily on account of fiscal arrangements. Yet by early 2014 the Newman State Government had nominated the ASF China Consortium as the developer to build a cruise ship terminal in the vicinity of the Spit. Despite the fact that the project was eventually abandoned a year later, in the lead-up to the new state election, ASF is still in the starting blocks to deliver what is essentially a casino in the official form of an integrated resort development. The most recent intent (December 2016), as part of the integrated resort development proposal (with the support of the state government), is for the development of five high-rise towers on the Spit. This type of development and land use contravenes the newly legislated Gold Coast planning instrument, the City Plan (2015), which prohibits residential land use and imposes a building height restriction of three floors. These restrictions were put in place to ensure that the existing character and amenity of the Spit, as a place communities valued, was preserved and maintained. For seven decades now, local communities have fought to keep the Spit for low-rise, low-impact, marine-based and tourist activities. They have rallied, formed community groups, undertaken voluntary tree planting and encouraged councils and governments to see the place as the Gold Coast's Central Park. The conflict between the new and old values, interests and land uses of the different players in this game has not abated, nor is resolution any closer.

Table 1. Tensions and differences in the cycle of conflict on the Southport Spit.

Conclusion

The big question remains: will the proposed casino and/or towers go ahead? Even if it is not successful this time around, the Spit is an asset of significant value (however that is measured) and as such it is unlikely to be ignored by the development industry on the Gold Coast. New players and new ideas are likely to result in proposals for the development of the Spit environs time and time again in the future. And if the Spit is developed, will this be just another link in a long chain of development conflict cycles, none ever completely resolved? Will we, one day, develop place attachment for a cruise ship terminal or casino on the Spit? Will this be the new norm, accepted and valued by local communities, the 'old' players?
The battle between 'old' and 'new' players and their place-making practices is ongoing. This does not mean that one is more important than the other, nor that either necessarily excludes, or has to be dominant over, the other. Since the 1950s the histories of the City of Gold Coast have shown little responsibility for the past and scant obligation to future generations. As such, the production, sale and consumption of goods and services providing pleasure have become so deftly woven into the economic landscape of the City that it is not easy to isolate them in policy or practice. This condition has raised concerns and excited resistance around "democratic participation in the local politics of place, contestations over ecological space, and decisions about land use" (Stratford 2009), concerns that are central to the Southport Spit. In the case of the Southport Spit, local place-making practices and local communities succeeded in achieving (for now) a local outcome, valued and upheld by many local people.

If we analyze the history of the development of the Spit in relation to the conflict cycle mentioned in the introduction, it appears that this site remains a major object of desire for any new players on the Gold Coast. The old players continue to advocate for more transparent governance and to protect the quality of life that has become intrinsic to a valued, renowned and iconic Gold Coast lifestyle (Bosman, 2016) and identity (Potts et al., 2013). This lifestyle and identity have been produced from, and are synonymous with, the place features and characteristics of the Spit: undeveloped, 'natural' beachside, free, open and accessible public space. To this end, it is interesting to note the emergence of arguments in the Spit conflict that now give more weight to coastal conservation, a concern that existed only tentatively ten years ago. Although one could argue that these circumstances might also reflect the maturity of the national Australian Oceans Policy and its wide diffusion among the public (hence increased awareness), Vince et al. (2015) have demonstrated that the policy did not live up to its promise as the major instrument driving oceans management in Australia, and that sector-based management remains the main modus operandi. For now, the cycle of conflict seems to have stalled, with the repetitive impetus of new players (develop the Spit!) opposed by the steady resistance of the old players (save our Spit!). This holding pattern is perhaps the new norm. In part, by recognizing the importance of the coastline and its natural features, the debate has shifted slightly from a focus on economics to one that acknowledges some of the critical environmental issues related to coastal development and the local, national and global importance of coastal conservation.

Figure 1. Map of the Gold Coast showing waterways and ocean.
Figure 2. Changes in the coastline at the Surfers Paradise district.
Figure 3. Cycle of conflict.
Figure 4. Picture and map of the Spit.
Figure 6. Artistic impression of the Cruise Ship Terminal (Weston 2013).
Table 1. Tensions and differences in the cycle of conflict on the Southport Spit.
2019-05-10T13:09:57.631Z
2017-04-15T00:00:00.000
{ "year": 2017, "sha1": "1a327332d374e6a5b906bd44a102dbbb09b7208f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4000/etudescaribeennes.10496", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4b31850e69a6e80fc79080311914ba5554e30818", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Geography" ] }
236087242
pes2o/s2orc
v3-fos-license
Face.evoLVe: A High-Performance Face Recognition Library

In this paper, we develop face.evoLVe -- a comprehensive library that collects and implements a wide range of popular deep learning-based methods for face recognition. First of all, face.evoLVe is composed of key components that cover the full process of face analytics, including face alignment, data processing, various backbones, losses, and alternatives with bags of tricks for improving performance. Later, face.evoLVe supports multi-GPU training on top of different deep learning platforms, such as PyTorch and PaddlePaddle, which enables researchers to work on both large-scale datasets with millions of images and low-shot counterparts with limited well-annotated data. More importantly, along with face.evoLVe, images before & after alignment in the common benchmark datasets are released, with source codes and trained models provided. All these efforts lower the technical burdens in reproducing the existing methods for comparison, while users of our library could focus on developing advanced approaches more efficiently. Last but not least, face.evoLVe is well designed and vibrantly evolving, so that new face recognition approaches can be easily plugged into our framework. Note that we have used face.evoLVe to participate in a number of face recognition competitions and secured the first place. The version that supports PyTorch is publicly available at https://github.com/ZhaoJ9014/face.evoLVe.PyTorch and the PaddlePaddle version is available at https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/tree/master/paddle. Face.evoLVe has been widely used for face analytics, receiving 2.4K stars and 622 forks.

In practice, it is often hard to achieve the same performance when algorithms and models are reimplemented from the details released in the papers. Although some researchers release code, it is still inconvenient to reproduce the experiments for fair comparisons, as they often have been "cooked with recipes", e.g., tricks in architectures, losses, data processing, training and evaluation. Hence, developing a comprehensive face recognition library, with all alternating backbones, loss functions and bags of tricks incorporated, is vital for both researchers and engineers. To this end, we develop a relatively comprehensive deep face recognition library named face.evoLVe to meet the goals above. In this paper, we present the features of the developed face.evoLVe library in detail. In summary, our main contributions can be categorized into three aspects as follows: (1) We develop a comprehensive library, namely face.evoLVe, for face-related analytics and applications, including face alignment (e.g., detection, landmark localization, affine transformation, etc.), data processing (e.g., augmentation, data balancing, normalization, etc.), where various backbones (e.g., ResNet [22], IR, IR-SE [25], ResNeXt [80], SE-ResNeXt, DenseNet [28], LightCNN [79], MobileNet [24], ShuffleNet [89], DPN [10], etc.) with alternating losses (e.g., Softmax, Focal [37], Center [75], SphereFace [40], CosFace [70], Am-Softmax [69], ArcFace [13], Triplet [55], etc.) and bags of tricks (e.g., training refinements, model tweaks, knowledge distillation [23], etc.) for improving performance have been provided in standard implementations. (2) Face.evoLVe supports multiple popular deep learning platforms, including both PaddlePaddle [45] and PyTorch [51].
On top of the native platform, i.e., PyTorch, face.evoLVe provides the necessary facilities to support parallel training with multiple GPUs, where users can enjoy the computation power of massive GPUs with a few lines of code/configuration. Note that the parallel training scheme in face.evoLVe not only supports the training of backbones [51], but also accelerates the training of fully-connected (softmax) layers to fully scale up parallel training based on multiple GPUs and large datasets over distributed storage. (3) Face.evoLVe can help researchers/engineers develop high-performance deep face recognition models and algorithms quickly for practical use and deployment. Specifically, all data before and after alignment, source codes and trained models are provided, which reduces the effort required for reproducing existing methods, facilitates the development of new advanced approaches, and provides training and evaluation environments for fair comparisons. We have used face.evoLVe to participate in a number of face recognition competitions and secured the first place. In addition, the library is well designed and evolving vibrantly with a group of active contributors. New face recognition approaches can be easily plugged into the face.evoLVe framework.

RELATED WORK

2.1 A Brief Review of Face Recognition

Normally, a face recognition system consists of face detection, facial landmark localization, face alignment, feature extraction and matching [31,73]. Each part of the face recognition system can be an individual research area, and recently a wide range of approaches have been proposed, not only for the feature extraction module to obtain better representations of faces, but also for other modules, such as loss functions. Deepface [64] employs deep neural networks (DNNs), such as Alexnet [35], to extract face features, which is much more powerful than using Eigenface [19,53,54]. A marginalized CNN is proposed by Zhao et al. [94] to achieve more robust face representations. In terms of the loss function, Deepface [64] adopts softmax, which is widely used for classification [22,35,57]. In contrast, Sun et al. employ contrastive loss [59]. However, neither softmax loss nor contrastive loss is sufficient to learn discriminative features for face recognition. Also, Alexnet cannot obtain satisfactory representations of faces; hence triplet loss is applied in [39,50,55] to learn more discriminative features. GoogleNet [61] and VGGNet [57] are then adopted to learn better representations. The problem with using triplet loss is that the training process is not stable. To mitigate this problem, Wen et al. [75] propose a center loss function, which learns a center for each class and penalizes the distances between the deep features and their corresponding class centers. More recently, angular loss functions [13,70] and ResNet [22] dominate face recognition. Researchers have realized that, to accurately classify queries, a face recognition model should strictly separate faces in the feature space. A number of approaches based on angular distance have been proposed to achieve this goal, such as Cosface [70], Arcface [13], Regularface [97], Adaptiveface [38] and Adacos [88]. Another challenging direction is age-invariant face recognition [49,96,99], i.e., learning representations of faces that are robust to appearance changes caused by facial aging. However, it is difficult to obtain sufficient well-annotated facial images with different age ranges; Zhao et al.
[93] propose an age-invariant model to learn disentangled representations while synthesizing photorealistic cross-age faces for age-invariant face recognition. Pose variants are also challenging for robust face recognition in the wild. 3D vision techniques have been used to estimate the pose and aid face recognition [95] under such settings. In this paper, the developed face recognition library covers most state-of-the-art loss functions and backbones to achieve discriminative yet generative face representations for high-performance face recognition, and it is also flexible for users to design their own loss functions and backbones.

Face Recognition Toolboxes

Many widely used face recognition approaches have released their source codes; however, the platforms differ and the data processing methods vary, which is inconvenient for the face recognition community and results in unfair comparisons. Therefore, using the same pipeline can be more convenient and flexible for face recognition. Guo et al. [17] develop a toolbox named InsightFace for 2D and 3D face analysis, which is the official implementation of Arcface [13] and supports two platforms: PyTorch [51] and MxNet [9]. However, InsightFace only implements a few popular deep face recognition models, such as Arcface [13] and Subcenter Arcface [12]. FaceX-Zoo [71] is a relatively comprehensive face recognition library; however, it only supports the PyTorch platform [51], which is inconvenient for users of other platforms, such as Tensorflow [2] and PaddlePaddle [45]. By contrast, the proposed face.evoLVe library is highly flexible and scalable: it implements the complete face recognition pipeline and most face recognition models, and it supports both the PyTorch and PaddlePaddle platforms. In addition, distributed training is well supported in face.evoLVe, which can be easily applied to large-scale datasets composed of millions of images. Tab. 1 presents a comparison among different libraries; we can see that face.evoLVe is more comprehensive and supports more platforms. Another feature of face.evoLVe is that few-shot learning is supported.

FACE.EVOLVE LIBRARY

In this section, we present the developed face.evoLVe library.

Pipeline

To be convenient and flexible, the face.evoLVe library is carefully designed. We split the face recognition pipeline into four modules: face detection & alignment, data processing, feature extraction and the loss head. Fig. 2 demonstrates the pipeline of the developed face.evoLVe library. face.evoLVe first detects faces in images and localizes the facial landmarks for alignment, where MTCNN [84] is adopted. The aligned faces are fed into the backbone networks to extract features; in the face.evoLVe library, we implement a large number of backbones. Finally, face representations are used to compute the loss in the head block, where various loss functions are implemented in the library. More details can be found in the following sections.

Data Processing

Given a dataset composed of images drawn from a set of classes, to balance the class distribution, the face.evoLVe library first removes the low-shot classes that occur fewer than num_min times in the training dataset. It is easy for users to define their own num_min for a specific dataset. Apart from removing low-shot classes, the face.evoLVe library also provides data augmentation approaches, e.g., horizontal flips, scaling hue/saturation/brightness with coefficients uniformly drawn from [0.6, 1.4], and adding PCA noise with a coefficient sampled from a normal distribution N(0, 0.1).
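As a concrete illustration of the augmentation settings just described, the following is a minimal sketch of how such a pipeline could be assembled with torchvision; the transform composition mirrors the operations listed above, while the PCA eigenvalues/eigenvectors shown are the widely used ImageNet statistics and serve only as placeholders for values estimated from one's own training data.

```python
import torch
from torchvision import transforms

class PCALighting:
    """PCA (AlexNet-style) color noise: shift each image along the RGB
    principal axes by alpha * eigenvalue, with alpha ~ N(0, alphastd)."""
    def __init__(self, alphastd, eigval, eigvec):
        self.alphastd = alphastd
        self.eigval = eigval    # (3,) principal-component magnitudes
        self.eigvec = eigvec    # (3, 3) principal axes of RGB space

    def __call__(self, img):    # img: float tensor (3, H, W) in [0, 1]
        alpha = img.new_empty(3).normal_(0, self.alphastd)
        rgb_shift = self.eigvec @ (alpha * self.eigval)
        return img + rgb_shift.view(3, 1, 1)

# Placeholder PCA statistics (the common ImageNet values); in practice
# they would be computed from the RGB covariance of the training set.
eigval = torch.tensor([0.2175, 0.0188, 0.0045])
eigvec = torch.tensor([[-0.5675,  0.7192,  0.4009],
                       [-0.5808, -0.0045, -0.8140],
                       [-0.5836, -0.6948,  0.4203]])

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    # Brightness/saturation factors drawn uniformly from [0.6, 1.4];
    # torchvision's hue jitter is additive, so 0.1 is only an approximation.
    transforms.ColorJitter(brightness=0.4, saturation=0.4, hue=0.1),
    transforms.ToTensor(),
    PCALighting(0.1, eigval, eigvec),   # PCA noise coefficient ~ N(0, 0.1)
])
```

Weighted random sampling for long-tailed data, discussed next, can then be attached to the DataLoader via torch.utils.data.WeightedRandomSampler.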
Moreover, to alleviate the problem of long-tailed distributions, we also implement weighted random sampling in the face.evoLVe library, which is user-friendly, i.e., users can also define a specific sampler instead of using the original weighted sampling implementation during training.

Implemented Models

One of the advantages of the face.evoLVe library is that it contains implementations of many existing deep face recognition models. Generally, a model can be separated into two parts: (1) the backbone - to extract face features, and (2) the loss function - to train the model and learn better representations of faces. The face.evoLVe library is also designed in a highly modular manner, i.e., backbones and loss heads are implemented separately.

3.3.1 Backbone. We implement many popular backbones in the face.evoLVe library, and users can easily change the configuration of backbones to specify the architecture. In addition, the highly modular design allows users to conveniently plug their own backbone networks into the library. The implemented backbones in the face.evoLVe library include:

• ResNet [22] - a deep convolutional neural network with residual connections, which has various versions, e.g., ResNet18, ResNet34, ResNet50, ResNet101 and ResNet152. ResNet has been widely applied to many vision tasks, such as classification [22], segmentation [21] and object detection [52], achieving state-of-the-art performance.
• IR - an improved version of ResNet. Similar to ResNet, IR also has various versions with different numbers of layers.
• IR-SE - applies squeeze-and-excitation (SE) blocks [25] to improve IR.
• ResNeXt [80] - similar to ResNet, but splits channels into different paths, which achieves better performance on most vision tasks.
• SE-ResNeXt - uses squeeze-and-excitation (SE) blocks [25] to improve ResNeXt.
• Attention-based variants - incorporate attention modules [4,68] into ResNet [22] to learn attention-aware features.
• EfficientNet [65] - a series of models that consider width scaling, depth scaling, resolution scaling and compound scaling, resulting in smaller model sizes but performance comparable to the counterparts.
• GhostNet [20] - a model that uses cheap operators to generate more features.

Head and Loss Function. A large number of heads and loss functions are implemented in the face.evoLVe library, covering most published works.

• Softmax - a simple and naive loss function for face recognition, which was popular in earlier works.
• Focal [37] - used to alleviate the problem of class imbalance; it was first applied to object detection. Note that focal loss [37] can be applied to all softmax-like functions, such as softmax, Arcface and Cosface, to mitigate the problem of class imbalance.

Bag of Tricks

In addition to implementing existing works, such as backbones and loss functions, we also implement a bag of tricks for face recognition, which can be helpful for both researchers and engineers.

Learning rate adjustment - at the beginning of the training process, all parameters are typically random values and therefore far from optimal. Using a large learning rate at this point normally results in an unstable training process. One solution is warmup [15]. In the warmup stage, we use a small learning rate at the beginning and then switch to the initial learning rate once the training process is stable [16]. After warmup, we use learning rate decay - the learning rate gradually decreases from the initial value. A cosine annealing strategy [43] is also implemented in the library.
A simplified version reduces the learning rate from the initial value to 0 by following the cosine function. Cosine decay reduces the learning rate slowly at the beginning, decreases it almost linearly in the middle, and slows down again at the end. Compared to the step decay function, cosine decay is much smoother, and the learning rate is larger than with step decay in the middle training stage, resulting in faster convergence, which potentially improves the final performance.

Label smoothing - label smoothing was first proposed to train Inception-v2 [62]; it changes the labels from one-hot vectors to smoothed distributions. We empirically compare the outputs of two ResNet50 models trained with and without label smoothing, respectively, and calculate the gap between the maximum output value and the average of the rest of the values. Under a smoothing coefficient ε = 0.1 and K = 1000 classes, the theoretical gap is around 9.1; with label smoothing, the output distribution center is close to the theoretical value and has fewer extreme values.

Model tweak - a model tweak is a minor adjustment to the network architecture, such as changing the stride of a particular convolution layer. Model tweaks depend highly on the experience and knowledge of researchers. Generally, a tweak barely changes the computational complexity but might have a non-negligible effect on the model accuracy. Many works have been done on ResNet tweaks [16,92], e.g., changing the downsampling block of ResNet. Some tweaks [8,25,44] replace the 7 × 7 convolutional kernel in the input stem with a stack of three conventional 3 × 3 convolutional kernels.

Knowledge distillation - existing models suffer from noisy identity labels, such as outliers and label flips. It is beneficial to automatically reduce the label noise to improve recognition accuracy. Self-training is a standard approach in semi-supervised learning, which has been explored to significantly boost performance on image classification [63,78]. Based on our estimation, there are more than 30 % and 50 % noisy labels in MegaFace2 [32,48] and MS-Celeb-1M [18], respectively. Since the datasets are relatively large, to reduce label noise and learn better representations of faces we can apply self-training and knowledge distillation [23,33], i.e., learning the knowledge of a previously trained model. In Fig. 3, we use t-SNE [67] to visualize the distribution of the learned features with and without knowledge distillation. Clearly, knowledge distillation yields better representations, since the features are more discriminative, i.e., it is much easier to classify the faces.

Training and Evaluation

Face.evoLVe supports most public datasets, and users can directly download the datasets from the links provided in the library. Tab. 2 presents the statistics of the datasets supported by the face.evoLVe library. For most datasets, we provide multiple versions, including the raw version and an aligned version. Note that large-scale datasets have become more popular in recent years; hence distributed training is vital. Fortunately, the face.evoLVe library supports distributed training well, so users are able to train models on relatively large datasets, such as MS-Celeb-1M [18] and WebFace260M [101]. In terms of evaluation, we provide many trained models, so that researchers can easily evaluate the models for comparison and engineers can quickly deploy a face recognition model.
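To show how the warmup, cosine decay and label smoothing tricks above fit together in practice, here is a minimal, generic PyTorch sketch; the model, optimizer settings and epoch counts are hypothetical stand-ins rather than the library's actual defaults.

```python
import math
import torch
import torch.nn as nn

# Hypothetical stand-in for a backbone/head pair.
model = nn.Sequential(nn.Flatten(), nn.Linear(112 * 112 * 3, 512), nn.Linear(512, 1000))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

warmup_epochs, total_epochs = 5, 120

def lr_lambda(epoch):
    # Linear warmup from a small value up to the initial learning rate ...
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    # ... then cosine decay from the initial learning rate towards 0.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Label smoothing with eps = 0.1; for K = 1000 classes the optimal gap
# between the top logit and the rest is log((K - 1) * (1 - eps) / eps)
# = log(8991) ~ 9.1, the theoretical value quoted above.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

for epoch in range(total_epochs):
    # ... iterate over batches: loss = criterion(model(x), y); backprop; step ...
    scheduler.step()
```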
Moreover, we have used the face.evoLVe library to participate in many face recognition competitions, achieving first place and SoTA performance. E.g., we achieved first place in the ICCV 2017 MS-Celeb-1M Large-Scale Face Recognition Hard Set/Random Set/Low-Shot Learning Challenges by using the face.evoLVe library, and No. 1 in the National Institute of Standards and Technology (NIST) IARPA Janus Benchmark A (IJB-A) Unconstrained Face Verification and Identification challenges in 2017. In addition, face.evoLVe also achieves SoTA performance on other datasets, e.g., on the MS-Celeb-1M hard set, face.evoLVe obtains 79.10 % coverage at precision = 95 %, and 87.50 % coverage at precision = 95 % on the random set.

Table 4: The performance of the implemented models in the face.evoLVe library with different heads and backbones. We train the models using the WebFace260M dataset [101].

Figure 4: Visualization of the training process. We train the model on MS-Celeb-1M [18], using IR152 as the backbone, Arcface [13] as the head and focal loss. We validate the model on seven datasets and present the accuracy.

USING CUSTOMIZED DATASETS AND MODELS

As we have mentioned, it is relatively easy and convenient to use the developed face.evoLVe library for both training and evaluation. Generally, researchers also require modularity from a toolbox, so that they are able to easily plug their proposed models into the toolbox while changing only a few lines of code. Fortunately, face.evoLVe is a highly modular library. Apart from the datasets provided by the library, users can also use their own datasets. Face.evoLVe provides a data pre-processing SDK, including face detection and alignment, so that users can first use the pre-processing SDK to obtain the images of faces, which is relatively convenient. The model is designed in a modular and extensible manner, e.g., the backbones and heads are independent of each other; thus, users can easily plug in either customized backbones or heads without changing the architecture of the library. Also, during training users can easily change the hyper-parameters, such as the learning rate, batch size and momentum.

CONCLUSION

In this paper, we have presented a comprehensive face recognition library - face.evoLVe - which is composed of the necessary components covering the full pipeline of face recognition practice, including alternating backbones and loss functions for face detection, alignment, feature extraction and matching. The goal of face.evoLVe is to lower the technical burden for researchers in the community to reproduce existing algorithms and models for comparison and benchmarking. In addition, face.evoLVe is designed in a highly modular and extensible manner, where users can easily implement and plug their own models into the library for potential extension and contribution. Also, the developed library provides a bag of tricks to improve performance and stabilize the training process. Note that we have used face.evoLVe to achieve SoTA performance and secure first place in a series of open competitions. Currently, face.evoLVe is still evolving with a group of active contributors. Contributions of novel models, tricks and datasets are welcome.

ACKNOWLEDGEMENT

This work was partially supported by the National Science Foundation of China under Grant 62006244.
2021-07-20T01:16:14.061Z
2021-07-19T00:00:00.000
{ "year": 2021, "sha1": "0cc674a0091fa1841ee3ed50b9cc3b07acae69d5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "614f04318e6c66463620be148a152fdc896987b1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
56130469
pes2o/s2orc
v3-fos-license
Simulated hydrologic response to projected changes in precipitation and temperature in the Congo River basin

Despite their global significance, the impacts of climate change on water resources and associated ecosystem services in the Congo River basin (CRB) have been understudied. Of particular need for decision makers is information on the spatial and temporal variability of runoff projections. Here, with the aid of a spatially explicit hydrological model forced with precipitation and temperature projections from 25 global climate models (GCMs) under two greenhouse gas emission scenarios, we explore the variability in modeled runoff in the near future (2016-2035) and at mid-century (2046-2065). We find that total runoff from the CRB is projected to increase by 5 % [-9 %; 20 %] (mean - min and max - across model ensembles) over the next two decades and by 7 % [-12 %; 24 %] by mid-century. Projected changes in runoff from subwatersheds distributed within the CRB vary in magnitude and sign. Over the equatorial region and in parts of the northern and southwestern CRB, most models project an overall increase in precipitation and, subsequently, runoff. A simulated decrease in precipitation leads to a decline in runoff from headwater regions located in the northeastern and southeastern CRB. Climate model selection plays an important role in future projections for both the magnitude and direction of change. The multimodel ensemble approach reveals that precipitation and runoff changes under business-as-usual and avoided greenhouse gas emission scenarios (RCP8.5 vs. RCP4.5) are relatively similar in the near term but deviate in the midterm, which underscores the need for rapid action on climate change adaptation. Our assessment demonstrates the need to include uncertainties in climate model and emission scenario selection during decision-making processes related to climate change mitigation and adaptation.

Introduction

Sustainable management of water resources for food production, supply of safe drinking water and provision of adequate sanitation presents immense challenges in many countries of central Africa, where the Congo River basin (CRB) is located (IPCC, 2014; UNEP, 2011; World Food Program, 2014). The economies of the nine countries that share the waters of the CRB are agriculture based (World Bank Group, 2014) and therefore are vulnerable to the impacts of climate change. Despite the abundant water and land resources and favorable climates, the basin countries are net importers of staple food grains and are far behind in achieving the Millennium Development Goals (Bruinsma, 2003; Molden, 2007; UNEP, 2011). Appropriation of freshwater resources is expected to grow in the future as the CRB countries develop and expand their economies. At the same time, climate-change-related risks associated with water resources will also increase significantly (IPCC, 2014).
Historical, present and near-future greenhouse gas emissions in the CRB countries constitute a small fraction of global emissions; however, the impacts of climate change on water resources are expected to be severe due to the region's heavy reliance on natural resources (e.g., agriculture and forestry) (Collier et al., 2008; DeFries and Rosenzweig, 2010; Niang et al., 2014). The limited adaptation capacity in the CRB region is expected to cause water and food security challenges, which, in turn, can lead to ecosystem degradation and increased greenhouse gas emissions (Gibbs et al., 2010; IPCC, 2014; Malhi and Grace, 2000).

Strategies for addressing stresses on CRB water resources, including the revival of rural economies (largely agriculture based), achieving the Millennium Development Goals and environmental conservation, would benefit from detailed information on the spatial and temporal variability of water balance components under different climate projection pathways. The effect of climate change on water resources can be investigated by incorporating climate change projections (e.g., precipitation and temperature) in simulation models that reliably represent the spatial and temporal variability of the CRB's hydrology. Such a framework could be applied to project changes in storage and runoff, and hence freshwater availability, under different socioeconomic pathways that affect climate trajectories.

A predictive framework of the CRB's hydrology is hindered by insufficient data and too few evaluations of models against available data (Beighley et al., 2011; Wohl et al., 2012). Basin-scale water budgets estimated from land-based and satellite-derived precipitation datasets reveal significantly different results, and modeled runoff shows only qualitative agreement with corresponding observations (Alsdorf et al., 2016; Beighley et al., 2011; Lee et al., 2011; Schuol et al., 2008). Tshimanga and Hughes (2012, 2014) recently developed a semi-distributed hydrologic model capable of simulating runoff in the CRB. This work crucially identified approaches suitable for approximating runoff generation at the basin scale, although the spatial resolution of the model predictions is rather coarse for supporting regional water management and planning efforts. These regional planning efforts must take into account variability and uncertainties stemming from climate model selection and projected greenhouse gas emissions, but, with respect to freshwater runoff projections for the CRB, these issues have been inadequately addressed.

The goals of this study are to (i) develop a spatially explicit hydrology model that uses downscaled output from general circulation models (GCMs) and is suitable for simulating the spatiotemporal variability of runoff in the CRB; (ii) test the ability of the hydrological model to reproduce historical data on CRB river discharges using both observed and GCM-simulated climate fields; (iii) quantify the sensitivity and uncertainty of modeled runoff projections to GCM selection; and (iv) use the hydrologic model with individual GCMs and multi-GCM ensembles to project near-term (2016-2035) and midterm (2046-2065) changes in runoff for two greenhouse gas emission scenarios. We focus on the runoff projections because streams and rivers will serve as the primary sources of freshwater targeted for human appropriation (Burney et al., 2013; Molden, 2007).
The Congo River basin

The Congo River basin, with a drainage area of 3.7 million km², is the second largest river basin in the world by area and discharge (Fig. 1; average discharge of ~41,000 m³ s⁻¹) (Runge, 2007). The basin extends from 9° N to 14° S, while the longitudinal extent is 11 to 35° E. A total of nine countries share the water resources of the basin. Nearly a third of the basin area lies north of the Equator. Due to its equatorial location, the basin experiences a range of climate regimes. The northern and southern parts have strong dry and wet seasons, while the equatorial region has a bimodal rainy season (Bultot and Griffiths, 1972). Much of the rain in the northern and southern CRB occurs in June-July-August (JJA) and December-January-February (DJF), respectively. The primary and secondary rainy seasons in the equatorial region are September-October-November (SON) and March-April-May (MAM; see Bultot and Griffiths, 1972 and Fig. S1 in the Supplement). The mean annual precipitation is about 1500 mm. Rainforests occupy nearly 45 % of the basin and are minimally disturbed compared to the Amazon and southeast Asian forests (Gibbs et al., 2010; Nilsson et al., 2005). Grassland and savannah ecosystems, characterized by the presence of tall grasses, closed-canopy woodlands, low trees and shrubs, occupy another 45 % (Adams et al., 1996; Bartholomé and Belward, 2005; Hansen et al., 2008; Laporte et al., 1998). Water bodies (lakes and wetlands) occupy nearly 2 % of the area and are concentrated mostly in the southeastern and western equatorial parts of the CRB (Fig. 1). Soils of the CRB vary from highly weathered and leached Ultisols to Alfisols, Inceptisols and Oxisols (FAO/IIASA, 2009; Matungulu, 1992). Most soils are deep and well-drained, but they are very acidic, deficient in nutrients, have a low capacity to supply potassium and exhibit a low cation exchange capacity (Matungulu, 1992).

In order to compare regional patterns in precipitation and runoff, we divided the basin into four regions: (i) northern Congo (NC), (ii) equatorial Congo (EQ), (iii) southwestern Congo (SW) and (iv) southeastern Congo (SE). The EQ region covers most of the rainforest. The SE region consists of numerous interconnected lakes and wetlands. Most of the CRB's population is concentrated in the NC, SE and SW regions (Center for International Earth Science Information Network (CIESIN) Columbia University et al., 2005).

Hydrologic model for the Congo River basin

We used the Soil Water Assessment Tool (SWAT), a physically based, semi-distributed watershed-scale model that operates at a daily time step (Arnold et al., 1998; Neitsch et al., 2011). The hydrological processes simulated include evapotranspiration, infiltration, surface and subsurface flows, streamflow routing and groundwater recharge. The model has been successfully employed to simulate river basin hydrology under a wide variety of conditions and to investigate climate change effects on water resources (Faramarzi et al., 2013; Krysanova and White, 2015; Schuol et al., 2008; Trambauer et al., 2013; van Griensven et al., 2012).
We delineated 1575 watersheds within the CRB based on topography (Lehner et al., 2008). Watershed elevations varied between 15 and 2700 m, with a mean value of 680 m a.m.s.l. (above mean sea level). Each watershed consisted of one stream section, where near-surface groundwater flow and overland flow accumulated before being transmitted through the stream channel to the watershed outlet. Watersheds were further divided into hydrologic response units (HRUs) based on land cover (16 classes; Bartholomé and Belward, 2005), soils (150 types; FAO/IIASA, 2009) and topography. The runoff generated within each watershed was routed through the stream network using the variable storage routing method. The average watershed size and the number of HRUs within each watershed were 2300 km² and 5, respectively. We also included wetlands and lakes as natural storage structures that regulate the hydrological fluxes at different locations within the CRB (Fig. 1). Detailed information was not available for all the lakes; therefore, we incorporated the largest 16 lakes (Table S1 in the Supplement). Simulated runoff, estimated for each HRU and aggregated at the watershed level, was generated via three pathways: overland flow, lateral subsurface flow through the soil zone and release from shallow groundwater storage. The curve number and a kinematic storage routing method were used to simulate overland and lateral subsurface flows, and a nonlinear storage-discharge relationship was used to simulate the groundwater contribution (see Arnold et al., 1998; Neitsch et al., 2011 and the Supplement). A power-law relationship was employed to simulate lake area-volume-discharge relations (see the Supplement and Neitsch et al., 2011). The potential evapotranspiration was estimated using the temperature-based Hargreaves method (Neitsch et al., 2011). The actual evapotranspiration was estimated based on the available soil moisture and the evaporative demand (i.e., potential evapotranspiration) for the day. Additional details on model development and calibration are provided in the Supplement.

Model simulation of historical hydrology with observed climate data

We ran the hydrology model for the period 1950-2008. Estimates of observed daily precipitation and minimum and maximum temperatures needed to calculate potential evapotranspiration were obtained from the Land Surface Hydrology Group at Princeton University (Sheffield et al., 2006). In addition, measured monthly streamflows were obtained at 30 gage locations (Fig. 1) that had at least 10 years of records (Global Runoff Data Center, 2011; Lempicka, 1971; Vorosmarty et al., 1998).
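For reference, the temperature-based Hargreaves formulation mentioned above has a simple closed form (Hargreaves and Samani, 1985); the sketch below is a generic implementation of that published equation, not code from the authors' SWAT configuration, and the example inputs are hypothetical.

```python
def hargreaves_pet(t_mean, t_max, t_min, ra):
    """Daily potential evapotranspiration (mm/day) via Hargreaves-Samani.

    t_mean, t_max, t_min: daily air temperatures (deg C)
    ra: extraterrestrial radiation expressed as equivalent evaporation
        (mm/day), a function of latitude and day of year
    """
    return 0.0023 * ra * (t_mean + 17.8) * max(t_max - t_min, 0.0) ** 0.5

# Hypothetical example: a warm tropical day with a 10 degree diurnal range.
print(hargreaves_pet(t_mean=25.0, t_max=30.0, t_min=20.0, ra=15.0))  # ~4.7 mm/day
```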
The model was calibrated using observed streamflows for the period 1950-1957 at 20 locations. The locations of the streamflow gages and the time period were chosen such that they adequately captured climatic, land cover and topographic variability within the CRB. The number of model parameters estimated by calibration varied from 10 to 13, depending on the location of the flow gages (e.g., gages with lakes within their catchment area have more parameters). The calibration involved minimizing an objective function defined as the sum of squared errors between observed and simulated monthly average total discharge, baseflows (estimated by the baseflow separation method of Nathan and McMahon, 1990) and water yield. The Gauss-Marquardt-Levenberg algorithm, as implemented in a model-independent parameter estimation tool (Doherty, 2004), was used to adjust the fitted parameters and minimize the objective function. Parameter estimation was done in two stages. First, parameters for the watersheds at the upstream gages were estimated. Then, the parameters for the downstream gages were estimated. To test the calibrated model, simulated streamflows were compared to streamflows measured at the same 20 locations but during a period outside of calibration (i.e., 1958-2008), as well as at 10 additional locations that were not used in the calibration.

Hydrologic simulations with simulated climate

Historical climate simulations for the period 1950-2005 and climate projections to 2065 for two greenhouse gas emission scenarios (Representative Concentration Pathways - RCPs), mid-range mitigation emission (RCP4.5) and high emission (RCP8.5), were used to drive the hydrologic model. The RCP4.5 scenario employs a range of technologies and policies that reduce greenhouse gas emissions and stabilize radiative forcing at 4.5 W m⁻² by 2100, whereas RCP8.5 is a business-as-usual scenario, where greenhouse gas emissions continue to increase and radiative forcing rises above 8.5 W m⁻² (Moss et al., 2010; Taylor et al., 2012). We used monthly precipitation and temperature outputs provided by 25 GCMs (Table 1) for the Fifth Assessment (CMIP5) of the Intergovernmental Panel on Climate Change (IPCC).

Table 1. Global climate models whose outputs are used in this study (models marked with an asterisk provide outputs from three different physics ensembles; we treat each as a separate model). Further details about the comparison of model outputs and key references for the GCMs are given in Aloysius et al. (2016).

GCM outputs may exhibit biases in simulating regional climate. These biases, which are attributable to inadequate representation of physical processes by the models, prevent the direct use of GCM output in climate change studies (Randall et al., 2007; Salathé Jr. et al., 2007; Wood et al., 2004). Hydrological assessments that use GCM computations as input inherit the biases (Salathé Jr. et al., 2007; Teutschbein and Seibert, 2012). To mitigate this problem, we implemented a statistical method (Li et al., 2010) to bias correct the monthly historical precipitation and temperature data. In brief, the method employs a quantile-based mapping of the cumulative probability density functions for monthly GCM outputs onto those of gridded observations in the historical period. The bias correction is extended to future projections as well.
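The quantile-based mapping can be sketched as follows; this is a minimal empirical-CDF version of the general approach (cf. Li et al., 2010), which in practice is applied separately to each calendar month and grid cell, and the synthetic series below are purely illustrative.

```python
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_series):
    """Empirical quantile mapping for one variable, month and location:
    each GCM value is replaced by the observed value occupying the same
    quantile of the historical distribution."""
    # Non-exceedance probability of each value within the historical GCM run.
    q = np.searchsorted(np.sort(gcm_hist), gcm_series) / len(gcm_hist)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles onto the observed historical distribution.
    return np.quantile(obs_hist, q)

# Illustrative synthetic monthly precipitation series (mm).
rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 60.0, size=672)      # "observations", 1950-2005
gcm_hist = rng.gamma(2.0, 75.0, size=672)      # biased GCM historical run
gcm_future = rng.gamma(2.0, 80.0, size=240)    # GCM projection, 2016-2035
corrected = quantile_map(gcm_hist, obs_hist, gcm_future)
```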
The observed data used in the modeling and bias correction have some limitations. That is, the number of precipitation gages decreased over the period from 1950 to 1990, and the density of the gages is sparse compared to the size of the river basin (see Sect. 3.4 and the Supplement). However, we assumed that the available ground-based observations combined with satellite-based and reanalysis data adequately captured the spatiotemporal variability in precipitation. Studies by Munzimi et al. (2014) and Nicholson (2000) draw similar conclusions.

The simulated monthly precipitation and temperature values were temporally downscaled to daily values for use in the CRB hydrology model. We used the 3-hourly and monthly observed historical data developed for the Global Land Data Assimilation System (Rodell et al., 2004; Sheffield et al., 2006) and the bias-corrected monthly simulations to generate 3-hourly precipitation and temperature data, which were subsequently aggregated to obtain daily values (see the Supplement). The hydrological model was forced with the bias-corrected and downscaled daily climate for the period 1950-2065. Due to the lack of information on the effect of CO2 on the 16 land cover classes simulated, the ambient CO2 concentration was maintained at 330 ppm throughout the simulation period. A recent study suggests that, in tropical rainforest catchments, elevated CO2 has little impact on evapotranspiration but results in increased plant assimilation and light use efficiency (Yang et al., 2016).

A total of 50 projections (25 RCP4.5 and 25 RCP8.5 projections; see Table 1) were compiled and analyzed. Results of individual GCMs and multimodel means (the unweighted average of all models, MM, and the average of a subset of selected models, SM) for the near-term (2016-2035) and midterm (2046-2065) projections are presented. Accessible flows (AFs), which exclude surface runoff associated with storm events, were estimated by applying the baseflow separation method described in Nathan and McMahon (1990).

3 Results and discussion

Historical simulations

Historical observations of average annual precipitation vary from 1100 mm in the southeastern portion of the CRB to 1600 mm in the CRB's equatorial region. We compared the GCM-simulated annual precipitation and its interannual variability during the historical period with observations from 30 locations within the CRB (Fig. 2). The simulated interannual variability among the climate models (vertical bars in Fig. 2) lies within the range of the observed variability (horizontal bars in Fig. 2). The linear regression slope of 1.16 (p < 0.001; Fig. 2) between the annual observed and the multimodel means shows that bias-corrected precipitation is slightly overestimated, but not significantly. Observations of seasonal precipitation are reproduced similarly well by the GCM models (Fig. S2 and Table S2). The good agreement between GCM-simulated and observed rainfall is expected given our bias correction of the GCM output.

We compared the simulated monthly runoff at 30 locations with observations (Fig. 3a and Table S3).
The colored points compare observed mean annual runoff at the 30 gage locations with historical simulations (hydrological model forced with observed climate), while the vertical and horizontal bars show the modeled and observed interannual variability, respectively. The shades of colors (from light green to yellow and red) reveal the model's skill in simulating the monthly flows in the historical period. The Nash-Sutcliffe coefficient of efficiency (NSE), a measure of the relative magnitude of the residual variance compared to the monthly observed streamflow variance (Legates and McCabe Jr., 1999; Nash and Sutcliffe, 1970), varies between 0.01 and 0.86 (color scale in Fig. 3a). The NSE can range from negative infinity to 1, with values between 0.5 and 1 considered satisfactory (Moriasi et al., 2007). A total of 17 of the 30 gages show NSE greater than or equal to 0.5. Higher NSE values at locations on both sides of the Equator, particularly at major tributaries (NSE ~ 0.60, gages 1 to 8 in Figs. 1 and S3), suggest that the model reliably simulates streamflows under different climatic conditions. High NSE values also indicate that the seasonal and annual runoff simulations, including the interannual variability in the historical period, are in good agreement with observations. The catchment areas of the 30 gages vary between 5000 and 900,000 km² (excluding the last two downstream gages; Table S3) and encompass a range of land cover and climatic regions on both sides of the Equator; thus, the hydrology model exhibits reasonable skill in simulating runoff over a wide range of watershed conditions.

Comparison of modeled runoff forced with GCM-simulated and observed climate (Fig. 3b) reveals generally acceptable runoff simulations in the CRB. The black dots and red (blue) vertical bars in Fig. 3b show the multimodel mean and maximum (minimum) range of interannual variability in the 25 historical GCM simulations. The results suggest that model-data agreement in precipitation translates to similarly acceptable runoff simulations.

Runoff patterns reflect seasonal rainfall that varies asymmetrically on either side of the Equator (see Fig. S1). For example, the observed peak runoff at streamflow gages 2 and 6, located north and south of the Equator (see Fig. 1), occurs near the end of the rainy seasons - during September-October and March-April, respectively (Fig. 4). Augmented by flows from the northern and southern tributaries (e.g., gages 1, 2, 4 and 6) and by high precipitation in the tropical equatorial watersheds during the two wet seasons (MAM and SON), the main river flows (downstream of gage 3 in Fig. 1) show low variability (Fig. 4). Differences in streamflow variability between the main river and its tributaries are illustrated through comparison of the coefficient of variation, which equals only 0.23 at the basin outlet (gage 8), but is 0.77 and 0.40 along the northern tributary (gage 2) and southern tributary (gage 4), respectively.

Runoff in the northern (NC) and southern (SW and SE) watersheds is strongly seasonal with long dry seasons, but this is not the case in the equatorial region (Fig. 5). Average watershed runoff varies from 20-70 mm during dry seasons to 100-140 mm during wet seasons in the NC, SW and SE. In the equatorial region, seasonal runoff varies between 100 and 150 mm, with the highest values in SON. Overall, the precipitation-runoff ratio is about 0.30 in the CRB. The AF that can be appropriated for human use, and hence excludes runoff associated with flood events, is about 70 % of the total runoff.
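As a reference for the skill metric used in the gage comparisons above, the NSE can be computed as follows; this is the standard Nash and Sutcliffe (1970) definition, and the short series shown are hypothetical.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe coefficient of efficiency: 1 minus the ratio of the
    residual variance to the variance of the observations. A value of 1 is
    a perfect fit; 0.5-1 is commonly considered satisfactory."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance

# Hypothetical monthly streamflow series (m3/s).
obs = np.array([120.0, 340.0, 510.0, 480.0, 300.0, 150.0])
sim = np.array([100.0, 360.0, 470.0, 500.0, 280.0, 170.0])
print(nse(obs, sim))  # ~0.97 for this synthetic example
```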
Aloysius et al. (2016) showed that GCM projections of temperature generally increase under both emission scenarios, in line with the historical upward trend for Africa (Hulme et al., 2001); however, precipitation projections contain large uncertainties. The modeled near-term (2016-2035) precipitation projections in the CRB vary between -4 and 6 %, with a multimodel mean (MM) change of 1 % under the two emission scenarios relative to the reference period (1986-2005). Regionally, the northern CRB shows the largest annual increase in precipitation, followed by the southwestern and equatorial regions. However, the intermodel variability is larger than the MM in all regions, indicating greater projection uncertainties in both emission scenarios (Table 2). The midterm (2046-2065) projections of annual precipitation vary between -5 and 9 %, with MM values of 1.7 and 2.1 % for RCP4.5 and RCP8.5, respectively. More than 70 % of the ensembles in both RCPs project an increase in annual precipitation in the CRB over the midterm. The multimodel mean of all ensembles that project an increase (decrease) in precipitation is 2.7 % (-2.4 %) for RCP4.5 and 4.0 % (-2.9 %) for RCP8.5.

The GCMs project considerable spatial and seasonal variations in precipitation (Table 2 and Fig. 6). However, the standard deviations across the 25 GCM simulations are large relative to the multimodel means (Table 2).

Table 2. MM of projected changes in precipitation (%) in the four regions within the Congo River basin (see Fig. 1) for the near term (2016-2035) and the midterm (2046-2065) relative to the reference period of 1986-2005. The standard deviation values across the 25 GCM simulations are provided in parentheses. DJF: December-January-February, MAM: March-April-May, JJA: June-July-August and SON: September-October-November.

In general, the GCMs project decreasing precipitation in the driest parts of the southern CRB (mostly in the southeastern CRB, but in portions of the southwestern CRB as well). Under the RCP8.5 scenario, parts of the northeastern CRB also experience a reduction in precipitation in the near term (regions in Fig. 6 with fewer GCMs projecting an increase in precipitation). The areas of decreased precipitation shrink in the southeast and southwest in the midterm; however, drying expands in parts of the northern CRB under the two emission scenarios. Most GCMs (14-20) project a precipitation increase outside of the southeastern CRB.

Intermodel variability in precipitation projections is sensitive to season and climate region (Fig. 7a-d). At the monthly scale, the northern and southern regions receive less than 50 mm of precipitation for at least 3 months, a pattern that persists in the future under both emission scenarios. The dry season is more prolonged in the southeast compared to the rest of the CRB. The intermodel variability is larger in the rainy seasons under RCP8.5 compared to RCP4.5. The larger variability under RCP8.5 highlights that GCMs may have limited skill in simulating precipitation under high greenhouse gas emissions.
Runoff

In general, modeled runoff increases, and its interannual variability within GCMs is larger during high flow periods compared to low flow periods, except in the equatorial region (Fig. 7e-h; see Fig. 1 for regions). The model projection uncertainty increases towards the middle of the century, particularly under the RCP8.5 emission scenario. The temporal patterns of runoff in the near term and midterm are similar to the precipitation patterns, but with a time lag. As with precipitation, the monthly runoff shows prolonged periods of low values in the northern and southern CRB in both projection periods. Parts of the northern, southeastern and southwestern CRB also show reduced runoff projections relative to the reference period under both RCPs; these reductions are predominantly in the areas where fewer GCMs agree on the increase in modeled precipitation (see Fig. 6 and Tables S4 and S5). The area of decreasing runoff expands in the northern CRB under both emission scenarios in the midterm (see Fig. 6, which shows that more models agree on decreasing precipitation in parts of the northern CRB, subsequently resulting in decreasing runoff). Although the northern and equatorial CRB show an overall increase in precipitation, the decrease in runoff in certain parts of the northern and equatorial CRB is caused by a reduction in seasonal precipitation (e.g., JJA and SON; see Table S4). A larger reduction - up to 15 % - in the southeastern CRB, covering most of northern Zambia, is due to an overall decrease in precipitation simulated by more than half of the GCMs (see Fig. 6).

Runoff that can be appropriated for human use is generated mostly in the northern, southeastern and southwestern CRB, where it at present varies from 130 mm yr⁻¹ in the southeastern CRB to 250-400 mm yr⁻¹ in the northeastern and southwestern CRB. Runoff is projected to increase in all three of these regions. However, the intermodel variability is greater than twice the MM in nearly all the regions and during all four seasons (Fig. 8 and Table 3). In most cases, the largest uncertainties are in non-rainy seasons and under the high emission RCP8.5 scenario (e.g., DJF in the northern CRB, Fig. 8b, and JJA in the southeastern CRB, Fig. 8h).

Variability in accessible flows

Only part of the runoff may be appropriated for human use. In the CRB, the accessible runoff (AF), excluding runoff associated with flood events, is about 70 %. The AF is largely underutilized, but its appropriation is expected to increase in the future, mostly in the populated areas of the northern, southwestern and southeastern CRB. We present the uncertainty associated with GCM and scenario selection by quantifying the seasonal and intermodel variability in AF at eight major tributaries (identified in Fig. 1) that drain watersheds across a range of climatic regions on both sides of the Equator (Fig. 9). Modeled AF exhibits substantial intermodel spread in the near term, and the spread widens in the midterm (Fig. S4). The intermodel variability is larger during high flow periods compared to low flow periods.

Table 3. MM of projected changes in runoff (%) in the four regions within the Congo River basin for the near term (2016-2035) and the midterm (2046-2065) relative to the reference period of 1986-2005. The regions are identified in Fig. 1. The standard deviation values across the 25 GCM simulations are provided in parentheses. The asterisks (*) indicate the degree of agreement that the projected runoff change is greater than 0 in more than 50 % of the ensembles. DJF: December-January-February, MAM: March-April-May, JJA: June-July-August and SON: September-October-November.
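The accessible-flow estimates discussed here rest on the one-parameter baseflow filter of Nathan and McMahon (1990); the sketch below is a generic implementation of that filter (with the commonly recommended alpha = 0.925 and hypothetical flows), not the authors' exact processing code.

```python
import numpy as np

def baseflow_separation(q, alpha=0.925):
    """One-parameter digital filter (Lyne-Hollick type, as evaluated by
    Nathan and McMahon, 1990): split total streamflow q into quickflow
    and baseflow; alpha = 0.925 is the value they recommend."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        # Constrain quickflow to lie between zero and the total flow.
        quick[t] = min(max(quick[t], 0.0), q[t])
    return q - quick, quick   # (baseflow, quickflow)

# Hypothetical daily flows (m3/s) around a single storm event.
q = [50, 52, 180, 320, 210, 120, 80, 65, 58, 54]
baseflow, quickflow = baseflow_separation(q)
```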
The spatial and temporal variations in the projected AF have consequences for water resources development and management. For example, projections of increased AF near the proposed Grand Inga Hydropower Project (near gage 8; Showers, 2009) are robust compared to the large variations near the proposed transboundary water diversion in the southeast (near gage 5; Lund et al., 2007). Reductions in high and low flows in streams in the southeastern region will have implications for aquatic life, channel maintenance and lake and wetland flooding.

Sources of uncertainty

The sources of uncertainty encountered in this work can be broadly categorized into (i) observational uncertainty, particularly the sparse and declining network of precipitation and streamflow gages, and (ii) model uncertainty, which in the GCMs includes model structure, model initialization, parameterization and climate sensitivity (i.e., the response of global temperature to a doubling of CO2 relative to pre-industrial levels). We used only one hydrological model, which is also a source of uncertainty. However, variation in climate signals between GCMs and emission scenarios, particularly precipitation projections, may be a larger source of uncertainty than the choice of hydrology model (Thompson et al., 2014; Vetter et al., 2016).

The climate data used for bias correction and for the historical hydrologic simulations have their own uncertainties. Gage-based, satellite-derived data and reanalysis outputs are used to develop the historical observations (Sheffield et al., 2006). Precipitation gages were more numerous at the beginning of the simulation period and declined in number toward the end of the 20th century (Mitchell and Jones, 2005; Washington et al., 2013). Available gage data varied both spatially and temporally (Figs. S5 and S6). For example, the equatorial region - nearly a third of the CRB - had about 70 rain gages through the early 1990s, but only 10 % of these were functioning by 2005 (Fig. S5). The southeastern and parts of the northern CRB also had good rainfall-gage coverage, which has similarly decreased since the 1990s (Mitchell and Jones, 2005). However, satellite-based and sparsely distributed gage data have been used to demonstrate that the spatiotemporal distribution of precipitation can be sufficiently described in the CRB region (Munzimi et al., 2014; Nicholson, 2000; Samba et al., 2008). We assume that, even with these limitations, the available historical data are adequate to model the hydrology of the CRB.

In addition to climate data, limited observed runoff data are another constraint that could restrict proper validation of hydrological models. However, we utilized a time period (1950-1959) when the CRB had maximum coverage of both precipitation and runoff data to calibrate and test the hydrology model (for example, see evidence in L'vovich, 1979). Where available, we used additional runoff data to further test model outputs during the historical period. The runoff gage locations are distributed within the CRB (see Fig. 1) such that they adequately capture climatic, land cover and topographic variability.
For future projections, the largest sources of uncertainty arise from the GCMs and emission scenarios. GCMs do not consistently capture key features of the observed climate, such as rainfall seasonality and heavy rainfall in regions of the central CRB (Aloysius et al., 2016; Washington et al., 2013). The biases in the GCM-simulated precipitation, particularly in the tropical regions, have been attributed to multiple factors, including poorly resolved physical processes such as mesoscale convection systems, inadequately resolved topography due to the coarse horizontal resolution and inadequate observations to constrain parameterization schemes. These limitations are unavoidable in the current set of CMIP5 projections. We assume that the combination of GCM outputs used in our work and the bias-correction method, which maintains key statistical properties of the original GCM outputs (see Aloysius et al., 2016; Li et al., 2010), adequately captures the uncertainties in GCMs and emission scenarios. Based on monthly precipitation climatology, Aloysius et al. (2016) found no significant shift in seasonality in modeled future precipitation projections.

The range of projections presented here for the two emission scenarios also highlights the uncertainties planners would encounter when making climate-related decisions. For example, broader agreement on an increase in runoff in parts of the CRB would help make robust decisions, whereas weaker agreement in the southern CRB calls for greater scrutiny of the regional climate. Generally, the MM approach reduces the uncertainty because averaging tends to offset errors across models. However, one could also ask whether this approach would work with fewer models. Washington et al. (2013) and Siam et al. (2013) presented evidence that evaluating atmospheric moisture flux (which is modulated by wind patterns and humidity) and soil water balance is a better way to diagnose GCM performance in data-scarce regions like the CRB. Balas et al. (2007), Hirst and Hastenrath (1983) and Nicholson and Dezfuli (2013) have shown that sea surface temperature (SST) anomalies in the Atlantic and Indian ocean sectors could partly explain precipitation in the CRB region. Along the same lines, Aloysius et al. (2016) identified five models as suitable candidates. We examined this subset of GCM projections (M6, M7, M18, M23 and M24), which we refer to as the select model average, or SM (see Giorgetta et al., 2013; Good et al., 2012; Jungclaus et al., 2013; Meehl et al., 2013; Siam et al., 2013; Voldoire et al., 2012; Yukimoto et al., 2006; and Aloysius et al., 2016). Focusing on the northern, southeastern and southwestern CRB, where human appropriation of runoff is expected to increase, we find that the projected increase of annual runoff in the SM is larger than that in the MM (by 20 to 50 %). In addition, the extent of the reduction in runoff in the south is concentrated in the southeastern upstream watersheds in both the MM and SM, although the magnitude of the decrease is smaller in the SM (Tables S4 and S5).

From the viewpoint of water resources for human appropriation, the changes by season are also important. Future changes and uncertainties in modeled seasonal runoff averaged over the four regions are presented in Fig. 8.
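As a concrete illustration of the SM-versus-MM comparison above, the sketch below averages a full 25-member ensemble and the five-member subset. The projection values are hypothetical placeholders, not outputs of the actual simulations.

import numpy as np

# Minimal sketch contrasting the multimodel mean (MM) over all 25 GCMs with
# the select-model average (SM) over the five candidate models (M6, M7, M18,
# M23, M24). The dictionary of projected annual runoff changes is invented.
rng = np.random.default_rng(1)
projections = {f"M{i}": rng.normal(12.0, 15.0) for i in range(1, 26)}
select = ["M6", "M7", "M18", "M23", "M24"]

mm = np.mean(list(projections.values()))
sm = np.mean([projections[m] for m in select])
print(f"MM = {mm:+.1f} %, SM = {sm:+.1f} %, SM/MM ratio = {sm / mm:.2f}")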
In comparison with the basin-wide CRB projections, the uncertainties in the subregions are larger. Nearly all the MM and SM projections show an increase in runoff in all four seasons; however, there is substantial intermodel variability. The uncertainties increase under the high emission RCP8.5 scenario during the midcentury. Considering the southeastern region as an example, under the RCP8.5 emission scenario, uncertainties reported as 1 intermodel standard deviation in the midterm are ±20, ±27, ±26 and ±13 % for the DJF, MAM, JJA and SON seasons, respectively, and are greater than the MM and SM. Further, the spread of uncertainty within the subregions of the CRB increases under the high emission RCP8.5 scenario. For example, the intermodel projection ranges are larger in the northern and southeastern CRB (Fig. 8b and h) than in the equatorial and southwestern CRB (Fig. 8d and f). Finally, the uncertainty assessment presented here represents climate model uncertainty arising from emission scenarios, different responses to the same external forcing, and different model structures and parameterization schemes. While these uncertainties in projections pose challenges for robust decision-making, they also provide insights into where further research might be most valuable.

Conclusions

From the point of view of climate change adaptation related to water resources, agriculture and ecosystem management, the challenge faced by CRB countries is recognizing the value of making timely decisions in the absence of complete knowledge. In some settings, climate change presents opportunities as well as threats in the CRB. The projected increases in accessible runoff imply new opportunities to meet increasing demands (e.g., drinking water, food production and sanitation), while the enhanced flood runoff would pose new challenges (e.g., flood protection and erosion control). On the other hand, water managers could face different challenges in the southeast, where precipitation and runoff are projected to decrease.

GCM-related variability in regional climate projections could be constrained by a subset of models based on attributes that modulate large-scale circulations (see Knutti and Sedlacek, 2013; Masson and Knutti, 2011). This approach is particularly useful because regions like the CRB lack complete coverage of observational data, but the mechanisms that moderate the climate system, particularly precipitation, are fairly well understood (Hastenrath, 1984; Nicholson and Grist, 2003; Washington et al., 2013). Yet, the spread in rainfall projections among the MM, SM and individual GCMs suggests that, despite the advances in climate modeling, significant uncertainties in precipitation projections for the CRB persist.

Rather than providing a narrow pathway for decision-making, our results, for the first time for the CRB, provide a framework to (i) assess implications under various climate model assumptions and uncertainties, (ii) characterize and expose vulnerabilities and (iii) provide ways to guide the search for impact-oriented and actionable policy alternatives, as emphasized by Weaver et al. (2013). Projections and associated uncertainties vary widely by region within the CRB, and therefore diverse but robust planning strategies might be advisable within the river basin. We emphasize that the projections provided here could be considered part of the process of incorporating multiple stressors into climate change adaptation and engaging stakeholders in the decision-making process.
Figure 1. Congo River basin: the river basin boundary, the extent of the rainforest, the locations of lakes and wetlands, and the locations of streamflow gages are shown. The "all other vegetation" category includes grasslands and savanna ecosystems, and all managed areas. Bartholomé and Belward (2005) provide further details on land cover in the Congo River basin.

Figure 2. Comparison of observed and bias-corrected GCM-simulated average annual precipitation for 30 catchments with streamflow gages (shown in Fig. 1) in the historical period (1950-2005). The y-axis values are statistically downscaled GCM-simulated precipitation. Black dots compare multimodel means with observed precipitation, black horizontal bars show observed interannual variability (±1 SD) and red (blue) vertical bars show the maximum (minimum) range of modeled interannual variability (±1 SD) within the 25 climate model outputs. The black line is the linear regression fit between observed and multimodel mean simulated precipitation (y = (1.16 ± 0.204)x − 283.4, p < 0.001, R2 = 0.825); parameter bounds indicate the 95 % confidence interval. The gray dashed line is the 1:1 line.

Figure 3. Comparison of observed and simulated annual runoff at the 30 streamflow gage locations (shown in Fig. 1). (a) Historical simulations with observed climate: the positions of the colored dots compare annual values of observed and simulated historical runoff; the dots' colors (see legend) show the Nash-Sutcliffe coefficient of efficiency (NSE) of observed vs. simulated monthly streamflows; and the black horizontal and vertical bars show observed and modeled interannual variability (±1 SD), respectively. The black line indicates the linear regression fit between annual simulated and observed runoff (y = (0.865 ± 0.158)x + 36.63, p < 0.001, R2 = 0.82); parameter bounds indicate the 95 % confidence interval. (b) Simulations in the historical period with GCM-simulated climate: black dots show the multimodel mean; red (blue) vertical bars show the modeled (forced with GCM-simulated historical climate) maximum (minimum) interannual variability (±1 SD) within the 25 simulations; and gray circles show the multiyear means of individual GCM simulations. The gray dashed lines in (a) and (b) are the 1:1 lines. The GCM-simulated outputs are statistically downscaled and bias corrected.

Figure 4. Mean monthly flows at selected tributaries in the CRB. Flows are in m3 s−1 and gage numbers are identified in Fig. 1. Monthly values are based on simulated flows (forced with observed precipitation) for the period 1950-2005.

Figure 5. Seasonal variation in runoff in the (a) northern, (b) equatorial, (c) southwestern and (d) southeastern Congo River basin for the historical period (1950-2005). The seasonal total runoff is calculated for December-January-February (DJF), March-April-May (MAM), June-July-August (JJA) and September-October-November (SON). Black dots and vertical bars show the modeled interannual variability forced with observed climate, red dots show the MM forced with GCM-simulated climate, red vertical bars show the maximum range of interannual variability within the 25 models and the gray open circles show the means of individual models. The y-axis scale is different for each plot.

Figure 7. Monthly variation of precipitation (a-d) and runoff (e-h) in the four regions shown in Fig. 1.
Box-and-whisker plots for each month show the intermodel variability for the historical period (black), near-term RCP4.5 (light green), near-term RCP8.5 (dark green), midterm RCP4.5 (red) and midterm RCP8.5 (brown). The upper and lower ends of the boxes show the 75th and 25th quartiles, the bar inside each box shows the median and the whiskers cover approximately 90 % of the values. The multimodel mean values for the reference period are shown as triangles for clarity. All values are in millimeters per month. NC: northern, EQ: equatorial, SE: southeast and SW: southwest; see Fig. 1 for locations. "Mid" indicates the midterm and "near" indicates the near term.

Figure 8. Seasonal runoff projections (as percent relative to the reference period 1986-2005) for the near-term (2016-2035) and midterm (2046-2065) projection periods for the northern (a, b), equatorial (c, d), southwestern (e, f) and southeastern (g, h) regions. Boxes show the 25th and 75th percentiles, the horizontal line within each box shows the median value and the whiskers mark the 5th and 95th percentiles. The multimodel mean (asterisks) and the select-model mean (green dots) are also shown. The y-axis range is limited to show the smaller boxes. The y-axis values are in percentages. "Mid" indicates the midterm and "near" indicates the near term.

Figure 9. Accessible streamflow hydrographs in the near term at selected locations shown in Fig. 1a. Blue and red bars (RCP4.5 and RCP8.5, respectively) show the intermodel variability. The dotted black line shows the hydrograph in the reference period (1986-2005). Plot numbers 1-8 coincide with the gage numbers in Fig. 1.
Automatic calculation of one-loop amplitudes

An algorithm to automatically compute any one-loop amplitude for all momentum, color, and helicity configurations of the external particles is presented. It has been implemented using the tools HELAC for the evaluation of tree-level off-shell currents, CutTools for OPP reduction and OneLOop for the evaluation of one-loop scalar integrals. It proves to be able to deal with at least all sub-processes included in the 2007 Les Houches wish list for the LHC involving 3 or 4 particles in the final state.

Introduction

In order to deal with the data from the experiments at the LHC for the study of elementary particles, signals and potential backgrounds for new physics have to be under control at sufficient accuracy [1]. In particular, hard processes with high multiplicities, involving many particles or partons, cannot be neglected. On top of that, such processes have to be dealt with at the next-to-leading order (NLO) level to, for example, reduce the scale dependence of observables and obtain a better description of the shapes of their distributions.

At leading order in perturbation theory, many tools are already available that are able to simulate any scattering process involving up to several partons [2]. These tools are highly automated and have been widely used [3]. At next-to-leading order the situation is currently less advanced. MCFM [4] is able to produce results at NLO accuracy for specific scattering processes, based on analytic calculations. Regarding the calculation of one-loop amplitudes, the only automatic tools available for some time were FeynCalc [5] and FormCalc [6]. These tools rely heavily on the use of computer algebra. For processes with two particles in the final state, their performance is very satisfactory. Several important calculations make use of these automatic packages together with FeynArts [7] and QGRAF [8], producing results with up to four particles in the final state [9], but for the moment no publicly available automatic tool exists. Recently a program called GOLEM [10] has been presented that is able to deal with processes with up to six external legs. It will also provide an alternative for the automatic computation of one-loop amplitudes [11].

The aforementioned programs express one-loop amplitudes in terms of Feynman graphs, which in turn are expressed in terms of one-loop tensor integrals. These are then calculated using universal reduction techniques, independent of the amplitude, by expressing them in terms of scalar integrals. In a very different line of thinking, starting from the pioneering work in [12], a new approach has been put forward, known as the unitarity approach, which has proven very powerful in computing multi-parton amplitudes in QCD that seemed impossible with the traditional Feynman-graph approach. The reason is that one-loop amplitudes are calculated using tree-order building blocks that are either known analytically in very compact form or can be evaluated using fast recursive equations. Nevertheless, a systematic framework for the generic computation of any one-loop amplitude was missing, limiting the applicability of the method.
Using the crucial input from [13], this problem was first solved in [14], which introduced a systematic framework to calculate all coefficients of the scalar integrals, as well as part of the so-called rational contribution, related to the occurrence of UV divergences. The remaining rational part can be reproduced by counter terms encoded in tree-like Feynman rules involving up to four fields [15]. Therefore, this method, known as the OPP method, provides a self-contained framework for the evaluation of the full one-loop amplitude. In [16], the OPP method was applied within the so-called generalized unitarity approach [13, 17] in order to also obtain the full rational contribution to the amplitude, at the price of working with tree amplitudes in higher dimensions.

The systematic extraction of all coefficients and of the rational term opened the road for the construction of tools that are able to compute one-loop amplitudes with any number of particles. BlackHat [18] and Rocket [19] were the first tools to realize such a possibility. In the following, we report on the development of a new program.

The program

For any one-loop amplitude, a number n of propagator denominators can be identified such that the amplitude can be represented in the form

\mathcal{A} = \sum_I \int d^d q \, \frac{N_I(q)}{D_{i_1} D_{i_2} \cdots D_{i_{n_I}}} \,, \qquad D_i = (q + p_i)^2 - m_i^2 \,, \qquad (2.1)

where the loop momentum q and the numerator polynomials N_I(q) are considered to be evaluated in d dimensions. The p_i are combinations of the external momenta, and the m_i are the masses of the particles running in the loop. It is well known that for d → 4, each such term can be cast into the form

\int d^d q \, \frac{N_I(q)}{D_{i_1} \cdots D_{i_{n_I}}} = \sum d \,\mathrm{Box} + \sum c \,\mathrm{Tri} + \sum b \,\mathrm{Bub} + \sum a \,\mathrm{Tad} + R \,, \qquad (2.2)

where Box, Tri, Bub and Tad refer to the well-known one-loop scalar integrals and R = R_1 + R_2 is the rational term. Given the momenta p_i, the masses m_i and a function evaluating the 4-dimensional numerator N_I as a function of q, the program CutTools [20] identifies the scalar integrals to be evaluated, calculates the coefficients, and determines R_1 following the OPP method. The scalar integrals can be evaluated with the tools from [21] or with our own code OneLOop.

For the evaluation of the numerator, the program HELAC [22] is used. It efficiently calculates tree-level amplitudes by applying recursive relations for off-shell currents [23]. The convenience of its applicability stems from the freedom in the choice of the initial decomposition represented by the summation in Eq. (2.1). In this application it is a sum over all topologically inequivalent partitions of the external particles. A few examples are depicted on the right; each graph corresponds to an instance of the label I in Eq. (2.1). The external particles are labelled by powers of 2, and the blobs do not contain propagators depending on the integration momentum q. HELAC can easily evaluate such a term by considering one line to be cut, and summing over all possible internal spin, flavor and/or color states of that cut line. Each term in this sum is an amplitude with two more external particles, restricted such that it only contains Feynman graphs including all the propagators eventually forming the loop. This can easily be accomplished by putting together the necessary off-shell currents provided by HELAC.
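To illustrate the kind of bookkeeping involved, the Python sketch below enumerates partitions of the external particles into ordered blobs around the loop, labelling particles by powers of 2 as in HELAC. Treating rotations and reflections of the loop as equivalent is an assumption made for this toy example; it is not necessarily HELAC's internal convention.

from itertools import product

def loop_partitions(n_external):
    """Enumerate assignments of n external particles (labelled 1, 2, 4, ...,
    2**(n-1)) to ordered blobs around the loop, identifying configurations
    that differ only by a cyclic rotation or reflection of the loop.
    Returns tuples of bitmasks, one bitmask per blob."""
    labels = [1 << i for i in range(n_external)]
    seen = set()
    for n_blobs in range(1, n_external + 1):
        # Assign each particle to one of n_blobs positions around the loop.
        for assignment in product(range(n_blobs), repeat=n_external):
            if len(set(assignment)) != n_blobs:  # every blob must be non-empty
                continue
            blobs = [0] * n_blobs
            for label, pos in zip(labels, assignment):
                blobs[pos] |= label
            # Canonical form: minimum over rotations of the blob sequence
            # and of its reversal (loop reflection).
            candidates = []
            for seq in (blobs, blobs[::-1]):
                for r in range(n_blobs):
                    candidates.append(tuple(seq[r:] + seq[:r]))
            seen.add(min(candidates))
    return sorted(seen)

for part in loop_partitions(3):
    print(part)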
Regarding the color treatment, HELAC uses the color-connection representation. External quarks and anti-quarks are represented by a color or anti-color index, gluons are represented by a pair of color/anti-color indices, and HELAC calculates all non-zero tree-level color connections. The same is done for the amplitudes with the two extra particles needed to calculate the one-loop numerators. The contributions from non-planar loops are obtained by taking into account different orderings of these extra particles among the other external particles.

The evaluation of the rational part R_2, finally, follows the same lines as the calculation of the counter terms necessary for renormalization, and is comparatively trivial.
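In the color-connection picture, a connection for n external gluons can be represented as a permutation pairing anti-color indices with color indices. The sketch below enumerates the n! candidate connections; which of them are non-zero for a given process is determined by the amplitude itself, and the representation details here are illustrative rather than HELAC's actual data structures.

from itertools import permutations

def color_connections(n_gluons):
    """Enumerate candidate color connections for n external gluons: each
    connection is a permutation sigma, pairing anti-color index i with
    color index sigma(i)."""
    indices = list(range(1, n_gluons + 1))
    return [dict(zip(indices, perm)) for perm in permutations(indices)]

# For 3 gluons there are 3! = 6 candidate connections.
for sigma in color_connections(3):
    print(sigma)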
Morphology and genome of a snailfish from the Mariana Trench provide insights into deep-sea adaptation

It is largely unknown how living organisms, especially vertebrates, survive and thrive in the coldness, darkness and high pressures of the hadal zone. Here, we describe the unique morphology and genome of Pseudoliparis swirei, a recently described snailfish species living below a depth of 6,000 m in the Mariana Trench. Unlike closely related shallow sea species, P. swirei has transparent, unpigmented skin and scales, thin and incompletely ossified bones, an inflated stomach and a non-closed skull. Phylogenetic analyses show that P. swirei diverged from a close relative living near the sea surface about 20 million years ago and has abundant genetic diversity. Genomic analyses reveal that: (1) the bone Gla protein (bglap) gene has a frameshift mutation that may cause early termination of cartilage calcification; (2) cell membrane fluidity and transport protein activity in P. swirei may have been enhanced by changes in protein sequences and gene expansion; and (3) the stability of its proteins may have been increased by critical mutations in the trimethylamine N-oxide-synthesizing enzyme and the hsp90 chaperone protein. Our results provide insights into the morphological, physiological and molecular evolution of hadal vertebrates.

Analysing the genome of a snailfish from the Mariana Trench, the authors show genetic changes associated with unique morphological and physiological adaptations to life in the hadal zone.

Compared with shallow-water relatives, the Mariana hadal snailfish (MHS) specimens had a larger stomach, liver and eggs, thinner muscles and an incompletely ossified skeleton (Supplementary Note 1). Our specimens were identified as a new species 9, P. swirei, based on morphological observations and DNA barcoding (Supplementary Note 1). The stomach of one MHS specimen was filled with 98 crustacean individuals (Supplementary Fig. 8), most of which were Hirondellea gigas. The dominance of H. gigas is consistent with an earlier report 10.

De novo assembly of the MHS and sea surface snailfish reference genomes. We sequenced one MHS individual using a combination of single-molecule real-time sequencing and paired-end sequencing (Supplementary Tables 2 and 3 and Supplementary Note 2). The assembly consisted of 6,094 scaffolds, with a scaffold N50 of 418 kilobases (kb) (total length = 684 megabases (Mb)) and a contig N50 of 338 kb (total length = 682 Mb) (Supplementary Table 4 and Supplementary Fig. 12). A BUSCO assessment of single-copy orthologous genes indicated that the genome's completeness was 91.7%, which is comparable to that achieved for other teleosts (Supplementary Table 5). To further assess the quality of the assembly, 40,154 transcripts were generated by sequencing messenger RNA from 28 samples of 15 tissues (Supplementary Table 6). Over 89% of the transcripts aligned to the genome along at least 90% of their length, confirming the assembly's completeness (Supplementary Fig. 13). Additionally, 80% of the transcripts in which over 90% of the sequence aligned with the genome were located on a single scaffold, demonstrating the contiguity of the assembly (Supplementary Fig. 13). We annotated 25,262 protein-coding genes (Supplementary Table 7), of which 23,043 (91.2%) were supported by transcriptome data. For comparative analyses, we also performed a de novo assembly of the Tanaka's snailfish genome (Supplementary Fig. 12 and Supplementary Tables 3-5 and 7-9). The genome of the MHS is about 21.9% (150 Mb) larger than that of Tanaka's snailfish.
This may be primarily due to expansions of repetitive sequences in the MHS (Supplementary Table 8). Other properties of the MHS genome, including its GC content, codon usage, gene length and exon number (Supplementary Fig. 14 and Supplementary Table 7), are similar to those of the ocean surface snailfish, suggesting that they probably do not contribute greatly to hadal adaptation.

Demographic history. We constructed a high-confidence species tree (Fig. 2a and Supplementary Fig. 15) for nine teleosts, including the MHS, Tanaka's snailfish, stickleback, flatfish, Pacific bluefin tuna, fugu, platyfish, cod and zebrafish, using the coalescent method. The divergence time between the MHS and Tanaka's snailfish was estimated to be about 20.22 million years ago (Ma) (Fig. 2a and Supplementary Fig. 16), over 10 Myr before the formation of the Mariana Trench (estimated to have occurred 8-10 Ma 11,12). A more extensive sampling effort, including populations living at intermediate depths, will be required to clarify how snailfish lived and adapted during the formation of the trench.

Liparids are known to be the dominant fish in the hadal zone 6, and they are the top predators there 8. Therefore, as a liparid species, the MHS is likely to have a relatively large population size. Accordingly, its heterozygosity was ~0.36-0.51%, which is greater than that of Tanaka's snailfish (0.26%) and comparable to other teleosts (Supplementary Fig. 17). Estimates of the dynamic effective population size (Ne) for both species indicated that the MHS had a larger population than the surface snailfish and underwent a significant population expansion around 50,000 years ago (Fig. 2b and Supplementary Fig. 18). This expansion was confirmed by multiple sequentially Markovian coalescent (MSMC) 13 analyses (Supplementary Fig. 19), and might be related to some unknown geographic or environmental event. The divergence times among the three (sub)populations represented by the three individuals were estimated to be ~1.4 and ~2.9 Ma (Supplementary Fig. 16). These results suggest that the MHS population is quite large and has rich genetic diversity.

Fig. 2. a, The phylogeny topology of the nine teleosts, reconstructed with coalescent methods based on both orthologue and syntenic block datasets. The branch lengths represent divergence times, while the grey rectangle at each node indicates the 95% confidence interval. b, Demographic history estimated by PSMC. The three blue lines represent the three collected MHS individuals, while the green line represents Tanaka's snailfish. c, Comparison of mutation rates in the nine sequenced fish species based on 4D sites. d, Mutation rates of three species, estimated by syntenic alignment along the stickleback genome. The numbers around the outside represent the chromosome IDs of the stickleback genome. The blue, green and orange dots indicate the mutation rates for each window in the MHS, Tanaka's snailfish and sticklebacks, respectively. The green and orange dots almost overlap, while the blue dots are appreciably closer to the centre of the figure (corresponding to a lower mutation rate across the genome). μ indicates the mutation rate (×10−9 site−1 yr−1) of each window. e-g, Two-dimensional kernel density distributions of Ka (e), Ks (f) and Ka/Ks (that is, ω; g). The MHS has much lower Ks values but similar Ka values, and so has a much greater Ka/Ks ratio.

The MHS has a low rate of mutation across the genome, but a high rate of protein evolution.
The branch length of the MHS was about one-third of that of Tanaka's snailfish in the maximum-likelihood tree (Supplementary Fig. 15). Among the nine species included in the tree, the MHS has the lowest mutation rate (Fig. 2c). This was not only true for the fourfold degenerate (4D) sites; the mutation rate of the MHS across the whole genome was also lower than for Tanaka's snailfish and the stickleback (Fig. 2d). Previous studies have suggested that mutation rates are sensitive to many factors, including environmental energy 14, metabolic rate 15, life-history traits 16 and, in particular, generation time 17. Hadal species reportedly have comparatively low metabolic rates 18, so the MHS may have a 'slow life'. Coincidentally, we observed that female MHS produced fewer but larger eggs than females of other snailfish species, suggesting that they may have a specialized reproduction strategy (for example, epimeletic behaviour and/or eggs that hatch as juveniles rather than larvae), which could further increase the generation time. It is thus plausible that the MHS has an extended generation time that contributes to its low mutation rate.

Despite the low nucleotide-level mutation rate of the MHS, its protein sequences appear to have evolved at a rate similar to other species. While the Ks value (the number of mutations per synonymous site) for the MHS was significantly lower than that for Tanaka's snailfish, the two species had very similar Ka values (the number of mutations per non-synonymous site), so the MHS had a significantly greater Ka/Ks ratio (that is, ω) (Fig. 2e-g). The high rate of protein evolution in the MHS was verified by comparing the ω distribution along the chromosomes of the stickleback genome (Supplementary Fig. 20). Overall, the MHS exhibited the largest ω value of the nine teleosts considered in this study (Supplementary Fig. 21). Its high proportion of mutations at non-synonymous sites could be due to factors such as positive selection or relaxation of selection 19,20, since we have excluded the possibility of a small population size 21. Additionally, the ratio of the heterozygosity of zerofold and fourfold degenerate sites in the MHS is lower than that in Tanaka's snailfish, indicating a stronger positive selection effect in the MHS (Supplementary Fig. 22).

Molecular mechanisms underpinning the special phenotypes of the MHS. Vertebrates living on the surface of the Earth have closed skull spaces surrounded by hard bone, to protect the brain and maintain an appropriate intracranial pressure. However, closed skulls cannot maintain their structural integrity under the very high pressures of the hadal environment, necessitating an open system. Consequently, most multicellular hadal species are boneless creatures, such as Decapoda and Crustacea; only a few vertebrates, such as the MHS, that exhibit adaptive structural features can inhabit this zone 2. Using micro-computed tomography, we found that the skull of the MHS is not completely closed (Fig. 3a,b and Supplementary Data 1 and 2), allowing internal and external pressure equalization. Moreover, most of its bones consist of cartilage rather than being ossified. Notably, we found that the osteocalcin gene, also known as the bone Gla protein (bglap) gene, which regulates tissue mineralization and skeletal development 22-24, has a frameshift mutation that may cause premature termination of cartilage calcification in the MHS (Fig. 3c and Supplementary Fig. 23).
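For readers unfamiliar with the ω statistic used above, the toy sketch below counts synonymous and non-synonymous differences between two aligned coding sequences. It is a deliberate simplification: the analyses reported here use codeml's maximum-likelihood models (see Methods), the example sequences are hypothetical, and multi-hit codons are simply skipped.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {a + b + c: AMINO[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}

def syn_nonsyn_sites(codon):
    """Count synonymous/non-synonymous site fractions of one codon."""
    syn = 0.0
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            if CODON[mutant] == CODON[codon]:
                syn += 1 / 3
    return syn, 3 - syn

def toy_ka_ks(seq1, seq2):
    n_syn = n_non = sites_syn = sites_non = 0.0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s, n = syn_nonsyn_sites(c1)
        sites_syn += s
        sites_non += n
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) == 1:  # ignore multi-hit codons in this toy version
            if CODON[c1] == CODON[c2]:
                n_syn += 1
            else:
                n_non += 1
    ka = n_non / sites_non
    ks = n_syn / sites_syn
    return ka, ks, (ka / ks if ks else float("inf"))

# Hypothetical 4-codon alignment: one synonymous and one non-synonymous change.
print(toy_ka_ks("ATGGCTAAAGGT", "ATGGCCAAAGAT"))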
This frameshift might cause pseudogenization or severe modification of the gene. To evaluate the effects of disrupting bglap functionality in fish, the expression of bglap in the zebrafish (Danio rerio) was knocked down using two types of specific antisense morpholino (MO) oligonucleotides: one to prevent the proper splicing of exon 1 (bglap-e1i1-MO) and another to block the translation of bglap (bglap-ATG-MO) (Supplementary Fig. 24 and Supplementary Note 3). The amount of stained mineralized tissue in treated embryos at five days post-fertilization was markedly reduced compared with control-MO-injected fish (Fig. 3d-g, Supplementary Table 9 and Supplementary Fig. 24), suggesting that disrupting bglap expression indeed hinders skeletal development in fish, as has been observed in mammals 22-24. Therefore, the premature termination of bglap in the MHS may be associated with this species' unusual skull structure and reduced bone hardness.

The environment 7,000 m under the sea is almost completely devoid of light. The MHS did not respond to the lights of our deep-sea lander, which is consistent with previous observations 25. We therefore performed a comparative genomic analysis of changes in the crystallin and opsin genes of the hadal fish, revealing that it has lost several important photoreceptor genes (Supplementary Tables 10 and 11). Rhodopsin, which is encoded by rho and regenerated by rgr 26, is an extremely light-sensitive receptor protein found in rod cells that is responsible for low-light vision 27. We hypothesize that the MHS may retain some photon-sensing ability or has gradually lost its visual ability, first losing colour perception, followed by the ability to perceive light in any form. Like other fish that live in darkness, the MHS has lost its skin pigmentation and has become transparent 28. We found that the most well-known pigmentation gene, mc1r, has been completely lost in this species (Supplementary Figs. 25 and 26).

Changes in cell membranes. The cell membrane is a lipid bilayer containing various proteins. High hydrostatic pressures reduce the fluidity of lipid bilayers and the reversibility of their phase transitions, ultimately leading to the denaturation and functional disorder of membrane-associated proteins 29,30. Pressure also rigidifies membranes, impairing their transport functions 31. Gene family analysis of the nine teleosts included in our study revealed 310 significantly expanded gene families in the MHS (Supplementary Figs. 27 and 28 and Supplementary Table 12). The gene families exhibiting the most significant expansion were those associated with fatty acid metabolism (Fig. 4a and Supplementary Table 13). Phospholipids are major constituents of cellular membranes, and their fatty acid composition is regulated to maintain membrane order and fluidity. Biochemical studies have suggested that the membranes of deep-sea-adapted organisms contain a higher weight percentage of unsaturated fatty acids than the equivalent membranes of shallow-sea species 32,33. It has been shown that docosahexaenoic acid (DHA) significantly alters many basic properties of membranes at high pressure, including acyl chain order and 'fluidity', elastic compressibility, permeability and protein activity 34. The last step of DHA biosynthesis is peroxisomal β-oxidation, and the acetyl-CoA acyltransferase encoded by acaa1 is the rate-limiting enzyme in this process.
We found that the MHS genome has 15 copies of the acaa1 gene, while all other fully sequenced teleosts have only 5 copies (Fig. 4b and Supplementary Fig. 29). Another gene involved in DHA biosynthesis, fasn, also exhibited copy number increases in the MHS genome (Supplementary Fig. 30). These changes may increase the abundance of fluid membrane lipids, enabling survival in the world's deepest ocean trench. Other significantly expanded categories include gene families with ion and solute transport-related functions, such as tfa and slc29a3 (Supplementary Fig. 30). This is consistent with a need to resist high-pressure-induced inhibition of fluid transport in hadal organisms 35. The list of expanded gene families provides clues for future functional tests to reveal their correlation with the adaptation of the MHS to extreme hydrostatic pressure.

The extensive deep-sea adaptations of the MHS are probably due to intense selective pressure acting on different gene families. Gene Ontology categories associated with significantly greater rates of protein evolution in the MHS compared with Tanaka's snailfish include 'ion transport', 'transmembrane transport' and 'calcium ion transport' (Fig. 4c and Supplementary Table 14). The 86 MHS genes identified as positively selected genes (PSGs) (Supplementary Table 15) also exhibited functional enrichment with respect to 'transmembrane transport', 'ATP binding' and 'ion transport' (Supplementary Table 16). Among the PSGs, 79 have well-known functions, of which 18 are related to membrane transport systems, including 3 ATP-dependent transporters, 4 ion channel genes and 11 secondary transporter genes (Supplementary Table 15). Earlier studies showed that high pressure suppresses the activity of membrane transport genes, and that proteins such as Na+/K+-ATPases from deep-sea species are less pressure sensitive than those of sea-surface species 30. The lineage-specific adaptive evolution of these genes in the MHS may thus indicate a role in maintaining transport activity and cell homeostasis 36, helping the fish to thrive at high pressures. Analysis of the amino acid variations in these genes may yield insights into how transmembrane transport proteins adapt to high pressure.

Maintenance of protein activity. Hydrostatic pressure strongly inhibits protein function, affecting both folding and enzyme activity. Consequently, species living at great depths must maintain an intracellular milieu that preserves the intrinsic properties of proteins and confers pressure resistance 2. Mechanisms based on physiological and structural adaptations have been proposed to explain the preservation of protein functionality in deep-sea organisms 35,37. The physiological adaptation mechanism involves accumulating small organic solutes such as trimethylamine N-oxide (TMAO) to preserve protein function at elevated hydrostatic pressures 38. TMAO is a physiologically important protein stabilizer that can restore denatured proteins to their native structure 39. Its abundance in teleosts increases with depth; deep-caught species have significantly higher TMAO levels in all tissues than shallow species 40. Most teleost genomes contain five copies of the gene for the TMAO-generating enzyme flavin monooxygenase 3 (fmo3), four of which are tandem repeats (Fig. 5a and Supplementary Fig. 31). The first gene (fmo3a) of these four tandem-repeated copies was strongly expressed in the liver of the MHS (Supplementary Table 17).
The most strongly expressed copy of fmo3 differs from species to species (Supplementary Table 17), although it should be noted that this observation could be affected by transcriptome degradation. Because these copies diverged long ago and the corresponding proteins' structures differ appreciably, it is likely that different copies of fmo3 have different catalytic efficiencies. Interestingly, fmo3a was positively selected in the MHS. In addition, we predicted more putative promoters (five copies) upstream of this gene in the MHS than in Tanaka's snailfish (one copy) or sticklebacks (two copies) (Supplementary Fig. 32). These changes in the gene's protein-coding and regulatory sequences may help the MHS increase intracellular TMAO levels to enhance protein stability.

Structural adaptations of proteins to deep-sea conditions may include changes in amino acid substitution patterns and protein structure that counteract the effects of pressure on protein function 41,42. To characterize these adaptations, we compared the MHS with other species with respect to the amino acid composition and substitutions of all coding genes together (Supplementary Fig. 14 and Supplementary Tables 18 and 19) and of each gene separately (Supplementary Fig. 33). No clear signal was identified in this analysis, suggesting that there is no global composition or substitution change present in all proteins. However, it has previously been reported that the evolutionary patterns of some proteins respond to hydrostatic pressure 43,44. We therefore investigated whether any gene family of the MHS has convergent amino acid substitutions that differ from the ancestral genotypes at homologous positions (see Methods). The only gene family found to exhibit convergent amino acid changes in most of its members with high confidence was hsp90; the same alanine-to-serine substitution occurred independently in four of the five copies of the hsp90 protein of the MHS, at a site that is highly conserved in the corresponding proteins of humans, mice, chickens, chameleons and yeast (Fig. 5b and Supplementary Fig. 34). This convergent substitution was also found to be very rare under random conditions (Supplementary Fig. 35). Therefore, the recurrence and fixation of the substitution at such a conserved site suggest that it is very likely to be beneficial for the adaptation of the MHS. Hsp90 is an evolutionarily conserved and highly abundant molecular chaperone that promotes the correct folding and activation of over 200 proteins, many of which are involved in essential cellular processes such as signal transduction, cell survival and responses to cellular stress 45,46. We performed homology modelling using four MHS hsp90 isoforms, examining both the complete sequences and the amino (N)-terminal regions (representing the ATP-binding domains) separately 46,47. The MHS hsp90 proteins feature an alanine-to-serine mutation in the relatively conserved motif FYSSX, which is predicted to exist as a short α-helix (Fig. 5c and Supplementary Fig. 36). In all cases, the mutated serine lies in close proximity to the ATP-binding pocket, and may contribute significantly to a local structural interaction affecting hsp90 activity (Fig. 5c). Further structural and chaperone function studies will shed light on this unique mutation's structural and functional effects on the N-terminal regions of hsp90 proteins.
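The 'convergence within paralogues' scan described in the Methods can be pictured with the minimal sketch below, which flags alignment columns where the same derived residue appears in at least a chosen number of gene copies. The hsp90-style sequences are short hypothetical placeholders; the published analysis additionally used Monte-Carlo simulations of sequence evolution to assess how often such convergence arises by chance.

from collections import Counter

def convergent_sites(ancestor, paralogs, min_copies=3):
    hits = []
    for col, anc in enumerate(ancestor):
        derived = Counter(seq[col] for seq in paralogs.values()
                          if seq[col] not in ("-", anc))
        for residue, count in derived.items():
            if count >= min_copies:  # same substitution in >= min_copies copies
                hits.append((col, anc, residue, count))
    return hits

ancestor = "MFYAAQK"
paralogs = {                       # alanine-to-serine at column 3 in 4 of 5 copies
    "hsp90-1": "MFYSAQK",
    "hsp90-2": "MFYSAQK",
    "hsp90-3": "MFYSAQK",
    "hsp90-4": "MFYSAQK",
    "hsp90-5": "MFYAAQK",
}
print(convergent_sites(ancestor, paralogs))  # [(3, 'A', 'S', 4)]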
Conclusions

Advances in deep-diving and genome-sequencing technologies have allowed us to complete this study of the genetic basis of vertebrate adaptation to the extreme environment of deep-sea trenches. A Liparidae species discovered more than 6,000 m below the ocean surface was found to have adapted to life in the hadal zone over a period of only several million years. Although its mutation rate has declined, its rate of amino acid substitution was found to be high, allowing plasticity and adaptation. The species has undergone extensive internal and external adaptations to tolerate the immense pressures and other challenges of the deep-sea environment. Genomic analyses revealed molecular adaptations consistent with pressure-tolerant cartilage, loss of visual function and skin colour, enhanced cell membrane fluidity and transport protein activity, and increased protein stability. The numerous genetic changes identified in this study shed light on how vertebrate species can survive and thrive in the deep oceans.

Transcriptome sequencing and assembly. A total of 28 transcriptomes were generated from 15 tissues (abdominal skin, blood, bone, brain, brain fluid, cholecyst, gill, head, heart, liver, muscle, oesophagus, reproductive organ, spinal cord and stomach) from two MHS individuals collected from the second site. Total RNA was extracted from these individuals using TRIzol (Invitrogen) and subsequently purified using an RNeasy Mini Kit (Qiagen). Paired-end reads with insert sizes of 500 bp were generated using an Illumina HiSeq 2000 sequencing platform. The sequenced reads were filtered and trimmed by fastp 52. A de novo transposable element library was constructed and used to predict repeats with RepeatMasker. We also predicted tandem repeats using TRF version 4.0.4 (ref. 56).

We annotated the coding gene structure of the two genome sequences by integrating ab initio predictions, homology-based gene predictions and direct gene models produced by transcriptome assembly (only for the MHS). First, Augustus version 3.2.1 (ref. 57), GeneID version 1.4 (ref. 58), GlimmerHMM version 3.0.4 (ref. 59) and SNAP version 2013-11-29 (ref. 60) were used to generate ab initio predictions with internal gene models. Next, the protein sequences of seven species (cod, fugu, medaka, puffer, stickleback, zebrafish and human; ENSEMBL 89) were aligned to the genome sequences with Exonerate. The MHS transcripts were assembled using both Binpacker version 1.0 (ref. 53) (de novo) and Hisat2 version 2.1.0 (ref. 61)/StringTie version 1.3.3b 62 (reference-guided) with default parameters. We then integrated the two assemblies using Evidence Modeler (EVM) version 1.1.1 (ref. 63) with different weights for each. The integrated gene set was translated into amino acid sequences, which were used to search the InterPro database with InterProScan version 5.15 (ref. 64) to obtain Gene Ontology and PANTHER information for each gene, and the genes were further annotated using the KEGG databases 65.

Phylogeny reconstruction. Protein sequences from nine species (the MHS and Tanaka's snailfish (assembled in this study); stickleback, fugu, platyfish, cod and zebrafish (V89; downloaded from ENSEMBL); flatfish (GCF_001970005.1; downloaded from the National Centre for Biotechnology Information); and Pacific bluefin tuna (Ver. 1; downloaded from http://nrifs.fra.affrc.go.jp)) were clustered with OrthoMCL version 2.0.9 (ref. 66) using default parameters, and 3,915 one-to-one orthologues were identified.
The five species from ENSEMBL were chosen with the aim of covering more teleost groups (one species per order). We chose flatfish and Pacific bluefin tuna because of their closer relationship to the MHS. The protein sequences of each orthologue were aligned with MAFFT version 7.310 (ref. 67) using default parameters, and alignments of the coding sequences were generated with pal2nal version 14 (ref. 68) using default parameters. We then generated five datasets using the first, second and third base in each codon, the 4D sites and the whole coding sequence alignments. The five datasets were used to construct maximum-likelihood trees, separately, with RAxML version 8.2.10 (ref. 69) using the parameters '-f a -m GTRGAMMAI -x 271828 -N 100 -p 31415', under the GTR + I model, which was suggested by jmodeltest2 (ref. 70). The maximum-likelihood tree for each gene was also constructed (as above) and plotted using Densitree 71, to reveal phylogeny heterogeneity at the gene level. Then, a species tree was built from these gene trees using MP-EST version 2.0 (ref. 72). We also performed whole-genome synteny alignment for these nine teleosts using Last version 894 (ref. 73) and Multiz version 11.2 (ref. 74) with default parameters to generate another dataset. The 12-Mb one-to-one synteny alignment was used to construct a maximum-likelihood tree, and the 13,051 synteny blocks with a length larger than 200 bp were used to construct a species tree. The divergence time was estimated using MCMCtree version 4.5 (ref. 75).

Demographic history and genetic diversity. We inferred the demographic histories of the MHS and Tanaka's snailfish by applying the pairwise sequentially Markovian coalescence model (PSMC version 0.6.5-r67) 77 to the complete diploid genome sequences. Consensus sequences were obtained using SAMtools version 1.3.1 (ref. 78) with the parameters 'mpileup -q 20 -Q 20', and divided into non-overlapping 100-bp bins. Bases of low sequencing depth (less than one-third of the average depth) or high depth (more than twice the average depth) were masked. The analysis was performed using the following parameters: -N25 -t15 -r5 -p "4 + 25*2 + 4 + 6". The mutation rate per site per year was set at 1.93 × 10−9 for the MHS and 6.77 × 10−9 for Tanaka's snailfish; these values were estimated by r8s version 1.81 (ref. 79) with the penalized likelihood method. As no information about the snailfish generation time is available, we tested generation times of six months, one year, two years and three years for both species. We also performed an analysis with MSMC version 2.0.0 (ref. 13; an extension of the pairwise sequentially Markovian coalescent analysis) with default parameters, to infer a more recent demographic history for the MHS. All segregating sites were phased and imputed using fastPHASE version 1.1 (ref. 80) with default parameters, and the above-mentioned combinations of generation times and mutation rates were evaluated.

The Illumina-sequenced reads from the three MHS individuals and one Tanaka's snailfish individual were aligned to the genome sequences of the MHS with BWA version 0.7.15-r1140 (ref. 81) using the parameters 'mem -t 16'. Duplicated reads were filtered with SAMtools 'rmdup'. Reads around indels were realigned by GATK version 3.6 (ref. 82) using default parameters, and the genotype of each site in every individual was called by SAMtools using the parameters '-t DP -A -q 20 -Q 20'.
We then used the mappability module in GEM version 20130406-045632 (ref. 83) with the parameters '-l 150' to extract 407 Mb of regions that could be uniquely mapped. Conservatively, we excluded polymorphic sites that were not bi-allelic or for which QUAL < 30. Finally, we masked any site that lacked 2- to 100-fold depth of aligned read coverage. The 4D sites were extracted and the divergence time of the four MHS individuals was estimated by MCMCtree with the same calibration as above.

Whole-genome alignment and mutation rate across the genome. We chose five species (the MHS, Tanaka's snailfish, stickleback, flatfish and Pacific bluefin tuna) for whole-genome synteny alignment. We did not include more species because their divergence times were too long ago. Using the stickleback genome sequence as a reference, we performed synteny alignment for these five species with Last version 894 (ref. 73) using the parameters '-m100 -P 4 -E0.05', generating a total of 121 Mb (of which 66 Mb was informative for all species) of one-to-one alignment sequences with Multiz version 11.2 (ref. 74) using the default parameters. We applied a sliding window (100 kb) along the synteny alignment to estimate the mutation rate across the genome. For each window, only neutral regions were retained (repetitive sequences and regions located within genes, or within 3 kb upstream/downstream of them, were removed) to estimate branch lengths with RAxML and a given topology. The branch lengths were then used to estimate mutation rates for each branch with r8s and the previously estimated divergence time at the root node of these five species.

Strength of natural selection. A total of 18,620 genes were extracted from the synteny alignments, together with gene annotations based on the corresponding stickleback genes. Any gene not annotated in all five species at a given position in the synteny alignment was excluded from further analysis. We then filtered the alignments with Gblocks version 0.91b 84 using default parameters, and excluded those with fewer than 150 bp of informative sites in all species, ultimately retaining 12,370 genes. The ratio of non-synonymous to synonymous substitutions (Ka/Ks) in each branch was estimated using the free-ratio model of codeml in the PAML version 4.9e 75 software package using default parameters. To enable comparisons with more species, we also calculated the Ka/Ks ratios of the 3,915 one-to-one orthologues with codeml. For this part, genes with Ks values above 2 in any branch were filtered out, owing to the possibility of false alignment or pseudogenes. To assess the ratio of diversity at neutral and functional sites, which should theoretically reflect the strength of natural selection, we first calculated the ratio of heterozygosity at zerofold relative to fourfold sites in the three MHS individuals and one Tanaka's snailfish individual. We identified a total of 24.2 Mb of zerofold and 6.1 Mb of fourfold sites with gene annotations in the MHS, and estimated the heterozygosity of each individual at these sites. We then calculated the Ka/Ks substitution ratio (based on heterozygous single nucleotide polymorphisms) within the four individuals. The non-synonymous and synonymous mutations were identified using SnpEff version 4.1 (ref. 85).
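A minimal sketch of the zerofold-to-fourfold heterozygosity ratio described above follows. Heterozygous SNPs at zerofold (always amino-acid-changing) sites are normalized by the number of such sites and divided by the equivalent quantity at fourfold (silent) sites; lower values indicate stronger selection acting on protein-coding sites. The SNP counts are hypothetical placeholders; only the site totals are taken from the text.

def het_ratio(het_zerofold, n_zerofold_sites, het_fourfold, n_fourfold_sites):
    pi_zero = het_zerofold / n_zerofold_sites   # diversity at functional sites
    pi_four = het_fourfold / n_fourfold_sites   # diversity at (near-)neutral sites
    return pi_zero / pi_four

# e.g. 24.2 Mb of zerofold and 6.1 Mb of fourfold sites, with toy SNP counts
print(f"{het_ratio(30_000, 24_200_000, 40_000, 6_100_000):.3f}")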
Putative gene loss. We identified genes putatively lost in the MHS using a four-step method. (1) The opsin- and pigment-related protein sequences (Supplementary Table 10) were downloaded from UniProt and searched against the MHS, Tanaka's snailfish and stickleback protein sets with blastp 86. (2) Genes absent in the MHS but present in the other two species were searched against the genome sequences and assembled transcripts with tblastn 86. (3) The synteny alignment between the MHS and Tanaka's snailfish was plotted to determine whether such genes had been partially or fully lost, or simply mis-annotated. Only fully lost genes were retained for further analysis. (4) The reads from the three MHS individuals and Tanaka's snailfish were further mapped to the stickleback genome sequence (ENSEMBL V89) using BWA. For each putative gene, we plotted the read depth of the four individuals along the corresponding coding sequences in the stickleback genome. Genes were identified as lost in the MHS only if the reads from all three MHS individuals could not be mapped to the stickleback genome but the corresponding reads from Tanaka's snailfish could be mapped.

Bglap gene knockdown experiment and calcein staining. Antisense morpholino oligomers (Gene Tools) were microinjected into fertilized one-cell-stage embryos according to standard protocols 87.

Estimation of gene expression in the MHS and other species. The sequenced transcriptome reads were aligned to the coding sequences using Bowtie 2 version 2.3.2 (ref. 88) with default parameters. After alignment, the count of mapped reads from each sample was derived and normalized to transcripts per million using custom scripts. Transcriptome data for zebrafish and sticklebacks were downloaded from the Sequence Read Archive database (Supplementary Table 17) and aligned to the corresponding non-redundant gene catalogue, keeping the longest open reading frame. It should be noted, however, that decompression as the samples were brought to the surface may have reduced the accuracy of the gene expression measurements.

Gene family expansion/contraction. To evaluate gene family expansion and contraction in the MHS, we first used CAFE version 3.1 (ref. 89) with default parameters, which applies a maximum-likelihood framework, with results from the OrthoMCL pipeline 90 (run with default parameters) and the estimated divergence times between species as input. A conditional P value was calculated for each gene family, and families with conditional P values lower than 0.05 were considered to have a significantly accelerated rate of expansion or contraction. Genes with >200 copies in one of the species were filtered out. We also annotated the protein sequences with Pfam 91 using default parameters, and domains with a z score above 1.96 and >5 members in the MHS were identified as expanded domains.
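The z-score cutoff for expanded Pfam domains can be illustrated as below; the copy-number table is hypothetical, and whether the published analysis computed the z score across species in exactly this way is an assumption made for the sketch.

import statistics

# A domain is flagged as expanded when the MHS copy number lies more than
# 1.96 standard deviations above the cross-species mean and the MHS has
# more than 5 members, mirroring the stated cutoffs.
copy_numbers = {            # species -> copies of one Pfam domain (invented)
    "MHS": 15, "tanaka": 5, "stickleback": 5, "fugu": 5, "platyfish": 5,
    "cod": 5, "zebrafish": 6, "flatfish": 5, "tuna": 5,
}
values = list(copy_numbers.values())
z = (copy_numbers["MHS"] - statistics.mean(values)) / statistics.stdev(values)
expanded = z > 1.96 and copy_numbers["MHS"] > 5
print(f"z = {z:.2f} -> {'expanded' if expanded else 'not flagged'}")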
Identification of rapidly evolving Gene Ontology terms and PSGs. To identify rapidly evolving Gene Ontology terms in the MHS, that is, terms with a significantly higher Ka/Ks ratio than expected, we designed a new statistic that accounts for differences in Ka/Ks between two species (here, the MHS and Tanaka's snailfish) for a given Gene Ontology term, as well as differences in Ka/Ks between that term and the genome background (for details, see Supplementary Note 4). The 12,370 genes extracted from the synteny alignment described above were used to identify genes that have evolved under positive selection (PSGs) by applying the likelihood ratio test with the branch model implemented in the PAML package 75. We first excluded genes with a Ks value above 2 in any branch, owing to the possibility of false alignment or pseudogenes. We then performed a likelihood ratio test comparing the two-ratio model (which calculates the Ka/Ks ratio for the lineage of interest and the background lineages separately) with the one-ratio model (which assumes a uniform Ka/Ks ratio across all branches) to determine whether the focal lineage is evolving significantly faster (false discovery rate-adjusted P < 0.05). We also required PSGs to have Ka/Ks > 1 in the focal lineage.
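A minimal sketch of the likelihood ratio test and false discovery rate adjustment just described follows, assuming codeml has already produced the log-likelihoods of the one-ratio and two-ratio models; the values shown are hypothetical placeholders, and SciPy is assumed to be available.

from scipy.stats import chi2

def lrt_pvalue(lnl_two_ratio, lnl_one_ratio):
    """Twice the log-likelihood difference between the nested models,
    compared against a chi-square distribution with one degree of freedom."""
    stat = 2.0 * (lnl_two_ratio - lnl_one_ratio)
    return chi2.sf(stat, df=1)

def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg step-up adjustment for multiple testing."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvalues[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical codeml log-likelihood pairs (two-ratio, one-ratio) per gene.
pvals = [lrt_pvalue(-1000.2, -1005.9), lrt_pvalue(-812.4, -812.9),
         lrt_pvalue(-950.0, -957.8)]
print([round(p, 4) for p in benjamini_hochberg(pvals)])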
Homology modelling of protein structures. To identify the presumable functional region formed by the amino acid sequence containing a serine mutation in four MHS hsp90 isoforms, we aligned their complete sequences with the high-resolution structures of yeast 93 and human 94 hsp90 isoforms using Clustal Omega version 1.2.4 (ref. 95 ). The results clearly indicate that each serine-substituted fragment folds into an ATP-binding domain. We then probed the relative positions of the serine mutations in the three-dimensional structures by submitting the full-length and N-terminal region sequences of each isoform to the web-based server Phyre2 for homology modelling 96 . The ending residue in the N-terminal region was determined by sequence alignment to the yeast heat-shock protein 93 . We chose the normal mode on the submission page for all of the isoforms and downloaded the first-ranked model, which had the highest reported confidence and sequence coverage relative to the template. When we superimposed the generated pseudo-atomic models from the full-length and N-terminal sequences for each isoform using UCSF Chimera 97 , despite differences in several loop regions, the two models exhibited high similarity in the relatively rigid core structure, which consists of α-helices packed against an antiparallel β-sheet (comparisons for each isoform are shown in Supplementary Fig. 36); the serine is located in the fifth short α-helix. Further analysis of the N-terminal models using the protein cavity detection algorithm fpocket2 (ref. 98 ) indicated that the serine is in close proximity to a putative nucleotide-binding cavity in each isoform.

Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability. The sequence data have been deposited in the NCBI BioProject database with accession numbers PRJNA472845, PRJNA472846 (genome data) and PRJNA472245 (transcriptome data). The assemblies and annotation files have been deposited in GitHub (http://github.com/wk8910/hadal_snailfish).
The response of saline-sodic soils to reclamation using biological and organic amendments under arid regions of Egypt

The study focused on investigating the contribution of reclamation strategies for saline-sodic soils and their impacts on soil fertility characteristics. In this study, the soil treatments were denoted as: SG1 and SG2 (23.8 and 47.6 ton/ha of spent grain); TC1 and TC2 (23.8 and 47.6 ton/ha of compost); Azospirillum inoculation of seed and soil (Az); Az + SG1 (Az+SG1); Az + TC1 (Az+TC1); mineral fertilizers (NPK); and control (CK). All treatments were mixed into pots containing 30 kg of soil. The results showed that reclamation with the Az and SG2 treatments significantly affected soil pH, EC, and macronutrients. In contrast, no significant (P > 0.05) effects were found with the two compost levels and the NPK treatment. The salt contents were maximal in the control treatment and decreased with the Az, SG2, and Az+SG treatments. In particular, SG2 application decreased the soluble Na+ concentrations in the soil solution. The effect of organic and biological reclamations on chemical properties was in the following order: Az+SG > SG2 > Az > TC2 > Az+M > SG1 > TC1 > NPK > CK. Moreover, reclamation positively affected the salt contents, improving the chemical properties of the saline-sodic soil after three months of seed sowing in the greenhouse.

Introduction
The amelioration and management of salt-affected soils will go a long way toward meeting the desired 57% increase in global food production by 2050 [1]. Soil salinity can be defined as the accumulation of dissolved mineral salts in the soil solution and rhizosphere [2]. Atmospheric deposition of sea salts and the intrusion of seawater into the groundwater of coastal areas are other sources of soil salts. Overuse of water can significantly lower the water table [2]. Salts may rise due to soil evaporation and soil-plant evapotranspiration under high water-table conditions. Secondary salinity may be caused by irrigation methods and poor water quality, such as the use of brackish water. Excess salts in the soil profile may adversely influence the biological, physical, and chemical properties of soil. In these soils, exchangeable Na+ is bound to the negatively charged clays, causing the deflocculation of clay particles. As [3,4] found, a high exchangeable Na+ percentage can lead to clay swelling and dispersion, as well as the breakdown of soil aggregates. As a consequence, both the water-holding capacity and the water infiltration rate could be reduced by this process. As a new way of recycling agricultural wastes to the field, spent grain (SG) directly or indirectly improves soil fertility and amelioration. SG is the primary organic waste product of the beer industry, representing 85% of the total by-products generated [5]. Plant growth-promoting rhizobacteria (PGPR) are bacteria that colonize plant rhizospheres and stimulate plant growth through various mechanisms, including nitrogen fixation, enhanced phosphate availability, and quorum sensing. PGPR provide multiple ways of substituting for chemical fertilizers, pesticides, and similar inputs, thereby increasing the demand for bio-fertilizers. Understanding the fundamentals and context of PGPR is essential before considering their current and state-of-the-art applications to crop plants. In general, about 2-5% of rhizosphere bacteria are PGPR [6].
The objective of the present study was to evaluate the effects of different amendments on the reclamation of saline-sodic soil, and the effects of the organic and biological ameliorants on soil properties, under greenhouse conditions in an arid climate.

The experimental details and preparation
Nine treatments were established, as shown in Table 2, including two levels of spent grain, SG1 (23.8 ton/ha) and SG2 (47.6 ton/ha); two levels of compost, TC1 (23.8 ton/ha) and TC2 (47.6 ton/ha); inoculation of corn seeds and soil with Azospirillum (Az); the combination of Azospirillum and spent grain (Az+SG1); the combination of Azospirillum and compost (Az+TC1); and mineral fertilizers (NPK) containing 75:15:25 units of urea, calcium phosphate, and potassium sulfate, respectively. All treatments were mixed into pots containing 30 kg of soil for 4 months after seed sowing and compared to a control (CK, without fertilizers). The Azospirillum was cultured and applied at a dose of 1.0 ml per seed.

Soil characteristics
The prevailing climate of the Egyptian study region features dry, hot summers and semi-cool, wet winters. The average temperature and precipitation are 25.6 °C and a low 130 mm, respectively. The experimental period lasted from June 2017 to September 2017. The study site soil was a calcareous saline-sodic soil with a clay loam texture (clay 453 ± 0.18 g kg−1, silt 235 ± 0.21 g kg−1, and sand 310.2 ± 0.11 g kg−1), as determined by the hydrometer method described by [7]. Soil samples were collected from the study site after removing visible roots and fresh litter material; the composite samples were sieved (< 2 mm). The soil chemical properties were determined and are shown in Table 1.

Soil analyses
The soil-plant analysis was performed in cooperation between the soil chemistry and environment laboratory of ALCRI, SRTA-City, Egypt, and the laboratory of SPbSU University, Russia. The pH was measured in a soil:water suspension (1:2.5 w/v). The electrical conductivity (EC) was measured in saturated paste extracts using an EC meter. Total N content was determined by the Kjeldahl digestion method. Available P concentration was extracted with (0.5 N) NaHCO3 and measured using a spectrophotometer at a wavelength of 880 nm. Available K content was extracted with (1 N) ammonium acetate solution and measured by flame photometer. The N, P, and K were measured as explained by [7]. Available Fe2+, Zn2+, Mn2+, Cu2+, and B+ concentrations were extracted with DTPA solution and measured with atomic emission spectroscopy as explained by [8]. Soil organic carbon content was determined by oxidization with K2Cr2O7. The exchangeable sodium percentage (ESP) of the soil was determined using the relationship ESP = (exchangeable Na ÷ CEC) × 100. Exchangeable Na+ was extracted with (1 M) ammonium acetate solution. Soil CEC was estimated following the Bower saturation method as outlined by [9].

Statistical analyses
Analysis of variance (ANOVA) and repeated-measures/within-subject variance analysis were used to test the effects of depletion (soil carbon and chemical parameters) and soil additives on soil reclamation, germination, plant growth, and yield productivity. Differences were considered statistically significant at P < 0.05.
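A minimal sketch of the ESP relationship and the one-way ANOVA described above; all numeric values are hypothetical, not measurements from this study:

```python
from scipy.stats import f_oneway

def esp(exchangeable_na, cec):
    """Exchangeable sodium percentage: ESP = (exchangeable Na / CEC) * 100."""
    return exchangeable_na / cec * 100.0

print(esp(6.5, 32.0))  # hypothetical cmol(+)/kg values -> ~20.3% (sodic above 15%)

# one-way ANOVA across treatment replicates (hypothetical soil pH readings, n = 3)
ck, sg2, az = [8.9, 8.8, 8.9], [8.3, 8.2, 8.3], [7.9, 8.0, 7.9]
f_stat, p_value = f_oneway(ck, sg2, az)
```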
Effects on macronutrients Ca2+, K+, and Mg2+
The data demonstrated that the organic and biological treatments influenced the concentrations of macronutrients in the saline-sodic soil during the plant growth period (Fig. 1). The results revealed that the application of compost, SG, and their combination with Azospirillum increased the concentrations of the macronutrients calcium and magnesium after four months of application, in the following order: Az+SG1 ≥ TC2 > TC1 > Az+TC1 > Az > SG2 > SG1 > CK > NPK. The differences in the concentrations of Ca2+ and Mg2+ were significant (P < 0.05) among all treatment applications. The soluble Ca2+ of the saline-sodic soil increased by 147.55, 147.21, 136.25, 133.38, 129.17, 114.16, 89.88, and 88.78% for the Az+SG1, TC2, TC1, Az+TC1, Az, SG2, SG1, and NPK treatments, respectively, in comparison to the CK treatment. The trend of Mg2+ concentrations was similar to that of Ca2+ in the saline-sodic soil after the corn plant harvest. The contributions of the Az+SG1 and TC2 treatments to Ca2+ and Mg2+ were significantly greater than those of the SG1 and Az treatments at the combined application rates. This result might be because the initial TC2 material had a high electrical conductivity before soil treatment, or because Azospirillum bacteria dissolved some calcium carbonate in the soil. The soluble Ca2+, K+, and Mg2+ concentrations are shown in Fig. 1. The results showed that the increases in Mg2+ concentrations behaved similarly to the increases in Ca2+ content among all treatments. On the other hand, the potassium (K+) concentration in all treatments was lower than that of Ca2+ and Mg2+. These lower K+ values may be due to the low inherent K content of the saline-sodic soil or to the slow decomposition rate of the organic and bio-organic amendments in the soil. Generally, the soluble K+ of the saline-sodic soil increased by 828.57, 778.57, 596.42, 539.28, 471.42, 467.85, 403.57, and 342.85% for the Az+SG1, SG2, SG1, Az, Az+TC1, TC2, NPK, and TC1 treatments, respectively, in comparison to the control. The Az+SG1 and SG treatments significantly (P < 0.05) increased the K+ content of the saline-sodic soil. Similarly, [10,11] reported increases in Ca2+, K+, and Mg2+ concentrations in the soil solution of soils treated with organic amendments. In a laboratory experiment on saline-sodic soil reclamation examining the effects of different organic amendments on the same soluble cations, [12] reported that spent grain and Azospirillum additives significantly (P < 0.05) provided supplemental soluble macronutrients such as Ca2+, Mg2+, and K+ in the leachates of saline-sodic soil.

Fig. 1. Influence of organic and biological additives on soil soluble Ca2+, Mg2+, and K+ after three months of corn seed sowing in saline-sodic soil. Data points and error bars represent means and standard errors (n = 3); data were analyzed using a one-way ANOVA. Values sharing the same letter are not significantly different according to the least significant difference (LSD) test (p ≤ 0.05).
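The percentage increases over the control reported above follow the simple relation sketched below; the example value is chosen to reproduce the Az+SG1 figure for soluble Ca2+:

```python
def pct_increase(treatment_mean, control_mean):
    """Percent increase of a treatment mean relative to the control mean."""
    return (treatment_mean - control_mean) / control_mean * 100.0

# a treatment mean ~2.4755x the control corresponds to the ~147.55% reported above
print(pct_increase(2.4755, 1.0))  # 147.55
```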
The pH values after soil application
Soil pH is an essential factor that regulates the solubility and availability of plant nutrients. Reducing soil pH in saline-sodic soil increases the availability of plant nutrients. The soil analysis showed that soil pH was significantly reduced in all treatments relative to the control (Fig. 2). The pH values of the saline-sodic soil treated with organic and bio-organic ameliorants decreased by 2.86, 3.22, 5.93, 6.1, 7.54, 8.68, 8.79, and 11.53% for the TC1, TC2, Az+TC1, NPK, SG2, Az+SG1, SG1, and Az treatments, respectively, compared to the CK treatment (Table 3). Soil pH decreased under the Az treatment alone, but this effect was reversed in the TC1 and TC2 treatments; no significant (P < 0.05) effect was observed compared with Az and SG1. Among the Az and spent grain treatments, the decrease in soil pH was enhanced by combining the Az and SG1 treatments. The reduction in pH values was influenced by the type of amendment and the application rate: increasing the application rate of Az and SG1 enhanced the pH reduction. However, an increase in soil pH may suppress calcium solubility, making the soil more alkaline and therefore more sodic; this was proposed by [12]. In arid and semi-arid climates, soluble Ca2+ and Mg2+ become low, and Na+ and K+ ions accumulate in the soil solution when CO3 2− and HCO3 − increase [13]. [14] found that applications of organic amendments such as spent grain and Azospirillum might reduce the soil pH. However, the magnitude of the decrease brought about by 10 t/ha of spent grain did not reduce the soil pH sufficiently to influence lime solubility. A long-term soil incubation experiment examining the effects of different organic amendments on the chemical properties of highly saline soil found that they positively impacted soil pH [15]. On the other hand, [16] found that organic, bio, and biochar additives did not increase the soil pH, and [11] found that they reduced the soil pH even without gypsum.

Fig. 2. Influence of organic and biological additives on soil pH after three months of corn seed sowing in saline-sodic soil. Data points and error bars represent means and standard errors (n = 3); data were analyzed using a one-way ANOVA. Values sharing the same letter are not significantly different according to the least significant difference (LSD) test (p ≤ 0.05).

EC concentration after soil treatments
The high electrical conductivity (EC) of soil affects crops by reducing the water supply (osmotic effects). The EC concentrations in the saline-sodic soil followed the order TC2 > NPK > TC1 > CK > Az+TC1 > Az+SG1 > Az > SG1 > SG2 (Fig. 3). The SG2 treatment showed the lowest EC concentrations in the saline-sodic soil after four months of soil treatment. The soil EC of the SG2, SG1, Az, Az+SG1, and Az+TC1 treatments differed significantly (P < 0.05) from the control, while the EC concentrations of the TC2, NPK, and TC1 treatments did not differ significantly from the control. These reactions help water infiltration, flocculation, and structural stability. The EC concentrations were increased by the addition of the NPK, TC1, and TC2 treatments because the initial EC of these materials was high. When Az+TC1 was applied to the soil, the EC decreased more than with the TC1 treatment alone. Our finding agrees with [11], who found that applications of biochar or organic amendments decreased Na+ soil salinity much more than the conventional gypsum amendment. This indicates that the compost application rates (23.8 and/or 47.6 ton/ha) may be too high to beneficially affect salt solubility in saline-sodic soils. The low EC concentrations in the soil treated with SG2 and Az are due to the reduction in salt concentration by the spent grain and Azospirillum bacteria. The low osmotic effects in the soil treated with SG2 and Az could result from sodium complexing with organic matter in the form of sodium humate. These results agree with [12], and with [11,13], who found that Ca2+ exchanging with Na+ on the cation exchange complex to form Na2SO4, MgSO4, and other high-solubility salts explains this decrease in EC concentrations.
Fig. 3. Influence of organic and biological additives on soil EC after three months of corn seed sowing in saline-sodic soil. Data points and error bars represent means and standard errors (n = 3); data were analyzed using a one-way ANOVA. Values sharing the same letter are not significantly different according to the least significant difference (LSD) test (p ≤ 0.05).

Organic matter content
Increases in soil organic matter (OM) and soil organic carbon (SOC) content were observed in all treatments except Az, CK, and NPK, due to the low input of organic material in these treatments (Fig. 4). OM content increased significantly (P < 0.05) in the SG2, TC2, and Az+SG1 treatments, reaching 1.94, 0.946, and 0.754%, corresponding to increases of 1388.37, 1100, and 876.7%, respectively, after three months of seed sowing. The lowest OM contents, 0.160, 0.096, and 0.086%, were observed in the Az, NPK, and CK treatments, respectively. The trend of SOC content was the same as that of OM content in the saline-sodic soil, following the order: SG2 > TC2 > TC1 > Az+SG1 > SG1 ≥ Az+TC1 > Az > NPK ≥ CK. No significant differences were observed between the SG1 and Az+TC1 treatments or between the NPK and CK treatments. These enhancements of the OM and SOC contents in the saline-sodic soil with the SG2, TC2, and Az+SG1 treatments are due to the high organic matter content of the spent grain and compost raw materials added to the soil. SOC content increased after the addition of organic amendments to the soil and their decomposition by microorganisms. The findings in Fig. 4 further show that the combination of Azospirillum with both sources of organic additives significantly improved soil OM and SOC contents over the control, NPK, and Az-only treatments. These treatments also had a strong effect on corn grown in the saline-sodic soil. The results agree with other researchers who found that organic amendments such as spent grain and compost significantly increased soil organic matter and organic carbon contents [17]; that study also reported that combining organic sources with 50% of the NPK fertilizer rate enhanced OM content after crop harvest.

Fig. 4. Influence of organic and biological additives on soil OM and OC after three months of corn seed sowing in saline-sodic soil. Data points and error bars represent means and standard errors (n = 3); data were analyzed using a one-way ANOVA. Values sharing the same letter are not significantly different according to the least significant difference (LSD) test (p ≤ 0.05).

Conclusion
The organic soil amendments produced higher Ca2+, Mg2+, and K+ concentrations; however, this depended on the decomposition rate of the organic additives. Inoculation of corn seed and saline-sodic soil with Azospirillum bacteria improved pH, EC, and plant growth. Therefore, the SG2 and Az applications are found to be ideal for saline-sodic soil amelioration and are an effective way to increase nutrient availability and corn plant productivity under greenhouse conditions in arid regions.
Technical Efficiency of Traditional Village Chicken Production in Africa: Entry Points for Sustainable Transformation and Improved Livelihood

Increasing poultry product consumption trends have attracted researchers and development practitioners to look for interventions that transform low-input, low-output village chicken production into a high-yielding production system. However, due to the intricate nature of the production system, there is a dearth of evidence that helps design comprehensive interventions at the smallholder level. Using nationally representative data collected from 3555 village chicken producers in Ethiopia, Nigeria, and Tanzania, this study examines the technical efficiency of village chicken production and investigates the main factors that explain the level of inefficiency. We applied a stochastic frontier analysis to simultaneously quantify the level of technical efficiency and identify factors associated with heterogeneity in inefficiency. We found that the level of technical efficiency is extremely low in the three countries, suggesting enormous opportunities to enhance productivity using available resources. The heterogeneity in technical efficiency is strongly associated with producers' experience in breed improvement and flock management, limited technical knowledge and skills, limited access to institutions and markets, smaller flock sizes, gender disparities, and household livelihood orientation. We argue for the need to adopt an integrated approach to enhance village producers' productivity and transform the traditional subsistence-based production system into a commercially oriented semi-intensive production system.

Introduction
In most developing countries, village chicken production is an essential contributor of animal-source protein and income to the resource-poor and marginal groups of society [1,2]. This sub-sector produces the largest proportion of chicken products consumed by rural and urban households, and these products have important cultural and religious values. Additionally, village chicken production is an integral part of the farming system, converting low-quality feeds and household wastes into high-quality proteins and supplying manure for crop production. Despite the momentous contribution of the sub-sector to household livelihood and overall wellbeing, the level of production and productivity remains low. The rest of the paper is organized as follows: Section 2 presents the research methods and empirical approaches adopted; Section 3 explores the results and discussion; and Section 4 highlights the conclusions and policy implications.

Sampling Methods
The data for this study were generated from a baseline survey of the African Chicken Genetic Gains (ACGG) project, which was conducted in Ethiopia, Nigeria, and Tanzania in 2015. This baseline survey adopted multi-stage sampling techniques covering subnational areas/regions, zones, districts, villages, and households. Through desk reviews and key informant interviews at each administrative level in each country, we identified the following purposive selection criteria: chicken production potential, agroecology, number of holders, and the contribution of chicken production to household income and food security. Based on these criteria, 203 sample villages were randomly selected from a comprehensive list of villages. Finally, from the sampled villages, a total of 3555 (Ethiopia = 1258; Nigeria = 1146; Tanzania = 1151) households were randomly selected for a structured interview.
All the respondents consented to the survey, and data were collected by well-trained enumerators through face-to-face interviews using tablet computers loaded with Open Data Kit (ODK). We conducted the data management and analysis using the Stata Version 16 statistical package. Based on the major objective of the study, sample village producers who keep both local and improved chicken breeds were dropped, and 3029 (Ethiopia = 932; Nigeria = 1000; Tanzania = 1097) village producers who keep only local breeds were used for the analysis.

Dependent and Independent Variables
Unlike for commercial agricultural farms, efficiency analysis for the village chicken production system is not straightforward, for two primary reasons. First, as Taylor and Adelman [21] indicated, rural households face dual production and consumption decisions that may force them to use outputs for home consumption, income generation, and other purposes, making the estimation of real output values difficult. Second, village chicken production generates multiple outputs, live birds and eggs, which violates the single-output production principle of the most common advanced econometric models used for technical efficiency analysis. Researchers have recommended various approaches to conducting efficiency analysis [22,23]. According to Coelli and Perelman [23], aggregation of the total output into a single index such as revenue, a dual cost function, input-requirement functions, and 'n' output- or input-orientated distance functions are some of the possible options. The first option requires the observability of output prices, and the second requires cost-minimizing behavior by the producer. Based on the nature of our data, we chose the aggregation approach and generated the total value of eggs and live birds used for different purposes. These include the income generated from the sale of live birds and eggs, the estimated value of live birds and eggs consumed at home, the estimated value of the change in flock size, and the estimated value of output transferred to others as gifts or used for other social or cultural purposes. We used this aggregated value as the output of the frontier function. The inputs used in the frontier function include the value of purchased and owned feed used, the total number of local hens, the cost of vaccination and disease treatment, the time allocated to chicken management, total land holding, and average household annual income (a proxy for capital). To explore the factors associated with the level of technical inefficiency, which is the main purpose of this analysis, we included various farm, institutional, and environment-related indicators in the inefficiency model. Furthermore, considering the reported production and productivity indicators of the local breeds in the three countries, and assuming a similar level of access to technologies in the study areas, we used both pooled and country-level data to estimate efficiency. In the pooled data, the frontier is based on the best practices in the three countries. This not only enables us to undertake cross-country comparisons but also helps generate policy-relevant evidence to be used at both the country and regional levels. In addition, to capture unmeasured and unobservable sources of inefficiency at the country level, such as macroeconomic policy, institutional, and infrastructural issues, we included a country dummy indicator in the pooled frontier model.
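A minimal sketch of the output-aggregation step described above; the component names are illustrative, and all values are assumed to be already expressed in USD:

```python
def aggregate_output_value(sales, home_consumption, flock_value_change, gifts_social):
    """Single output index: total value of all uses of eggs and live birds."""
    return sales + home_consumption + flock_value_change + gifts_social

# hypothetical 3-month values for one producer
y = aggregate_output_value(sales=25.0, home_consumption=8.5,
                           flock_value_change=4.0, gifts_social=2.0)
```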
The value of inputs and outputs in each country was converted to its USD equivalent using the average annual official exchange rate of 2015 [24].

Theoretical and Empirical Approaches
Empirical researchers use different econometric models to assess the productivity and efficiency of agricultural commodities in developed and developing countries. Mostly, researchers have adopted parametric and non-parametric approaches [25]. The most commonly used non-parametric approaches are the Data Envelopment Analysis (DEA) and Free Disposal Hull models [16,26,27]. Although the DEA technique does not require an a priori functional form for the production frontier, researchers primarily use this technique to handle multiple outputs and multiple inputs without aggregation [25]. While the absence of any prior assumption, or lack of parameterization, is an important strength of DEA, its sensitivity to extreme observations and its attribution of all deviations from the frontier to inefficiency are its drawbacks. These drawbacks lead to biased parameter estimates in the frontier and inefficiency models. Furthermore, if one is interested in estimating both the level of efficiency and the factors associated with inefficiency, this must be done using a two-stage analysis. The main parametric approaches used in empirical studies, including frontier production functions and distance functions, can be categorized as stochastic or deterministic functions [9]. In Stochastic Frontier Analysis (SFA), unlike DEA, the deviation from the frontier function is attributed to firm characteristics and other external factors that affect the level of efficiency [28]. In other words, while DEA considers random noise as inefficiency, SFA separates statistical noise associated with factors outside the control of the firm from technical inefficiency. The ability to determine efficiency and the factors associated with it simultaneously is considered an important feature of SFA [29]. As a result, SFA has been used by many researchers to assess the efficiency of agricultural production in different countries [30][31][32][33]. Sometimes researchers use both parametric and non-parametric approaches to check the consistency of estimates and compare their robustness [32,34]. Empirical research shows that the choice of model depends on the type of analysis, the nature of the data, and the researchers' underlying assumptions. Considering the aim of the analysis, we applied SFA to estimate the level of efficiency and identify factors associated with inefficiency. SFA estimates the parameters in two steps: the model parameters are estimated by maximum likelihood, and the inefficiency point estimate is then obtained from the conditional mean distribution following [14]. The level of inefficiency is estimated using the conditional mean E(u|ε). After obtaining the point estimate û, the technical efficiency is obtained as Eff = exp(−û).
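For reference, under the common normal/half-normal simplification (the truncated-normal case used in this study adds a location parameter μ but follows the same logic), the conditional-mean estimator of Jondrow et al. takes the form:

$$E(u_i \mid \varepsilon_i) = \mu_i^{*} + \sigma^{*}\,\frac{\phi(\mu_i^{*}/\sigma^{*})}{\Phi(\mu_i^{*}/\sigma^{*})}, \qquad \mu_i^{*} = -\frac{\sigma_u^{2}\,\varepsilon_i}{\sigma^{2}}, \quad \sigma^{*2} = \frac{\sigma_u^{2}\,\sigma_v^{2}}{\sigma^{2}}, \quad \sigma^{2} = \sigma_u^{2} + \sigma_v^{2},$$

where φ and Φ are the standard normal density and distribution functions; technical efficiency then follows as exp(−E(u_i | ε_i)).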
The SFA model is represented by the following equation:

y_i = x_i'β + ε_i, with ε_i = v_i − u_i,

where y_i is the logarithm of the output and x_i is the vector of inputs, such as feed, labor, vaccination, capital, land size, and number of chickens. The composite error term ε_i combines the measurement and specification error term v_i and the inefficiency component u_i. Both error terms, v_i and u_i, are assumed to be statistically Independent and Identically Distributed (IID) across observations, and f(v_i) has a symmetric distribution. One prominent issue in efficiency analysis is determining the functional form of the equations that designate the relationship between inputs and outputs. According to Greene [14] and Kumbhakar and Wang [9], the Cobb-Douglas and translog functions are the most common functional forms used in empirical stochastic frontier analysis. Considering the nature of our data, we applied the Cobb-Douglas production function as indicated below [9]:

ln y_i = ln y*_i − u_i, with ln y*_i = β_0 + Σ_k β_k ln x_ki + v_i,

where y_i represents the observed output of village producer 'i' and y*_i is the potential output, with v_i a zero-mean random error. The error term u_i is the effect of technical inefficiency on the outputs. The vectors x_i and β are the distinct types of inputs used and their corresponding coefficients, respectively. The stochastic frontier is defined by this equation through the u_i term, and the model without u_i represents the classical production function. We assumed that v_i has a normal distribution with mean zero and variance σ²_v, and that u_i has a truncated normal distribution with mean µ and variance σ²_u. The notation N+(µ, σ²_u) denotes the normal distribution truncated at zero, so that u_i is non-negative. Some of the explanatory variables in the frontier model have zero values. This is common in agricultural data, as some producers may not use all the inputs for different reasons [35]. However, this leads to a significant loss of sample size, as the logarithm of zero gives missing data and results in biased coefficient estimates. Empirical research suggests the following approaches to handle the problem of zeros in Cobb-Douglas production functions. The first approach is adding one, or any arbitrary number, and including all the observations in the analysis. Although this approach helps keep a higher sample size, researchers criticize it for its significant effect on the estimated parameters when the number of zero observations is large. The second approach uses the Inverse Hyperbolic Sine (IHS) transformation, an innovative approach in econometric applications. However, as Bellemare and Wichman [36] explained, using this transformation would change the expected level of input elasticity. The third approach uses dummy variables associated with each of the independent variables that indicate the incidence of zeros. According to Battese [37], this approach gives unbiased parameter estimates and has been applied to estimate the technical efficiency of smallholder farmers in developing countries [38]. This paper adopted this approach to include the maximum number of observations and generate unbiased parameter estimates. The equation for the Cobb-Douglas production function of village chicken producers can be specified as follows:

ln Y_i = β_0 + β_1 ln FD_i + β_2 ln LH_i + β_3 ln VT_i + β_4 ln FL_i + β_5 ln LD_i + β_6 ln K_i + Σ_j δ_j D_ji + v_i − u_i,

where Y_i is the total value of eggs and live birds produced, and FD_i, LH_i, VT_i, FL_i, LD_i, and K_i represent the feed, number of local hens, vaccination/treatment, family labor, land, and capital, respectively. The indicators represented by D_i signify dummy indicators for the inputs that have zero values. The individual efficiency level (TE_i) is estimated through the following formula:

TE_i = Y_i / Y* = exp(−u_i),

where Y* is the frontier output and Y_i is the output of the i-th village producer. In the case of a production frontier, TE_i takes a value between zero and one. The overall variance is given by:

σ² = σ²_u + σ²_v, with γ = σ²_u / σ²,

where γ stands for the proportion of the variance accounted for by technical inefficiency. The higher this proportion, the larger the share of the variability among village producers that is attributable to technical inefficiency.
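A minimal, self-contained sketch of how such a frontier can be estimated by maximum likelihood; it uses the simpler normal/half-normal specification (the truncated-normal inefficiency term and the inefficiency covariates used in this study add parameters but follow the same pattern), and all function and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    """Negative log-likelihood of a normal/half-normal production frontier:
    y = X @ beta + v - u, with v ~ N(0, sigma_v^2) and u ~ |N(0, sigma_u^2)|."""
    k = X.shape[1]
    beta = params[:k]
    sigma_v, sigma_u = np.exp(params[k]), np.exp(params[k + 1])  # keep positive
    sigma = np.hypot(sigma_v, sigma_u)
    lam = sigma_u / sigma_v
    eps = y - X @ beta                                  # composed error v - u
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_frontier(y, X):
    """Return beta, gamma, and JLMS technical efficiencies TE_i = exp(-E[u_i|eps_i])."""
    k = X.shape[1]
    start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], [0.0, 0.0])
    res = minimize(neg_loglik, start, args=(y, X), method="BFGS")
    beta = res.x[:k]
    sigma_v, sigma_u = np.exp(res.x[k]), np.exp(res.x[k + 1])
    sigma2 = sigma_v**2 + sigma_u**2
    eps = y - X @ beta
    mu_star = -eps * sigma_u**2 / sigma2                # location of u | eps
    sigma_star = sigma_u * sigma_v / np.sqrt(sigma2)
    z = mu_star / sigma_star
    u_hat = mu_star + sigma_star * norm.pdf(z) / norm.cdf(z)
    gamma = sigma_u**2 / sigma2                         # inefficiency share of variance
    return beta, gamma, np.exp(-u_hat)
```

Here y would hold the logged aggregate output value and X a constant plus the logged inputs (with the zero-value dummies described above).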
Exogenous Determinants of Inefficiency
As stated above, one of the important advantages of stochastic frontier analysis is the inclusion, in the same model, of exogenous variables that affect the distribution of the inefficiency term. The exogenous variables should be neither inputs nor outputs; they affect the performance of producers by shifting or scaling the frontier and distribution functions, or both. The inefficiency determinants model can be specified as follows:

U_i = δ_0 + Σ_j δ_j x_ji + ω_i,

where U_i is the inefficiency component as defined above, x_i represents the different household and other socio-economic factors that affect the level of inefficiency, and ω_i is a random error term. Although the distributional assumption is considered the most important aspect of technical efficiency analysis, empirical findings suggest that models of technical efficiency analysis are robust to distributional assumptions [14]. Table 1 presents a summary of the input and output variables used in the technical efficiency analysis. The average 3-month income generated from poultry products is about USD 18.1, 44.9, and 52.8 in Ethiopia, Nigeria, and Tanzania, respectively. The overall average 3-month income in the three countries is about USD 39.5. The average number of local hens owned by village producers is five, with a minimum of three in Ethiopia and six in Tanzania and Nigeria. The average feed cost is USD 16.2 in the three countries; the average feed cost in Tanzania is greater than in Nigeria and Ethiopia. Despite the high reported disease outbreaks and incidence, household expenditure on vaccination and disease treatment is low, and the expenditure in Ethiopia is far lower than in Nigeria and Tanzania. On average, producers spend about 57.3 h on management, including purchasing inputs, feeding, watering, cleaning, and other related activities. Producers in Ethiopia spend more time on management than producers in Nigeria and Tanzania. The average landholding in the three countries is 1.7 ha, and households in Tanzania have a larger average holding size than those in Ethiopia and Nigeria. The household capital indicator, average yearly income, is highest in Ethiopia and lowest in Nigeria. This shows that village chicken producers in Nigeria engage in limited agricultural and non-agricultural income-generating activities compared with those in Ethiopia and Tanzania.

Summary of Exogenous Variables Used in the Inefficiency Model
We included household and farm characteristics, access to institutions and markets, and other socio-economic indicators in the inefficiency determinants model. Table 2 details a summary of these indicators. The length of time chickens have been kept indicates producers' experience in chicken production. On average, producers in the three countries have 13 years of experience in village chicken production; producers in Nigeria and Tanzania have the highest and the lowest average experience, respectively. The average number of schooling years of household heads is about 6; producers in Ethiopia have fewer schooling years than producers in Nigeria and Tanzania. Training represents the average number of poultry-related extension and training activities the producers participated in during the previous 12 months. This indicator shows that producers in the three countries have limited access to poultry production- and marketing-related education.
On average, there are five persons (adult equivalent) within the producers' households who could participate in poultry production and marketing activities; the average number in Nigeria is higher than in Ethiopia and Tanzania. The housing index indicates the quality of chicken housing used by producers in dry and wet seasons and during the day- and night-time. This index is generated by weighting the diverse types of housing used by the producers; it shows the level and quality of housing used for chicken production in the wet and dry seasons, and a higher value represents better housing quality. Producers who use separate poultry houses during the dry and wet seasons have a better index than producers who do not use any type of housing or keep the chickens in the home. On average, producers in Nigeria and Tanzania have a better housing system than producers in Ethiopia. The average distance of producers from all-weather roads is about 2.1 km, with a minimum of 1.30 km in Tanzania and a maximum of 2.75 km in Ethiopia. The distance to all-weather roads is a good indicator of access to input and output markets and institutions. The women empowerment index (Women Emp. Index) captures women's engagement in income-use decisions, asset ownership, and employment opportunities. We generated this index using normalization and aggregation approaches, and it shows the relative access of women to resources; the higher the value, the better the access to and use of resources. This index shows a statistically significant difference among the three countries. The overall average flock size is about 20, with a minimum average of 8 in Ethiopia and a maximum average of 26 in Nigeria. The average flock size in Nigeria and Tanzania is more than three times the average flock size in Ethiopia. The last column shows the non-parametric Kruskal-Wallis test used to assess differences in the median values of the indicators among the three countries. This test revealed statistically significant differences between the average values of all the variables in the three countries.
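As an aside on construction, the women empowerment index described above can be built by min-max normalization and aggregation; a minimal sketch, in which the choice of sub-indicators and equal weights is an assumption for illustration:

```python
import numpy as np

def minmax(x):
    """Min-max normalization of an indicator to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def empowerment_index(income_decisions, asset_ownership, employment):
    """Equal-weight average of normalized sub-indicators (weights assumed)."""
    return (minmax(income_decisions) + minmax(asset_ownership)
            + minmax(employment)) / 3.0
```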
Table 3 presents a summary of the categorical variables included in the inefficiency determinant model. On average, 52.2 and 60.4% of the producers reported experience in culling and breed selection activities, respectively. Compared with Nigeria, a higher percentage of producers in Tanzania and Ethiopia have experience in culling activities. A higher percentage (82.3%) of producers in Ethiopia and a lower percentage (40.7%) of producers in Nigeria have breed selection experience. On average, 58.6 and 14.1% of the producers have access to health and credit services, respectively, in the three countries. The percentage of producers who have access to health and credit services is lower in Nigeria than in Ethiopia and Tanzania. Contribution to livelihood shows whether poultry production is considered among the three most important livelihood activities in the household. Of the total respondents, 51.4% reported poultry production as being among the three major livelihood activities. Compared to Ethiopia and Nigeria, a higher percentage of producers in Tanzania consider poultry production a major livelihood contributor. Producers' feeding and watering practices are included as a categorical variable: producers who do not provide water or feed in a container, producers who provide either water or feed in a container, and producers who provide both feed and water in a container. Only 5.2% of the respondents do not supply water in a container and throw feed on the ground. About 59.1% of the producers practice either of the management options, and the remaining 35.7% supply both water and feed in containers. Compared to Ethiopia and Nigeria, a higher percentage of producers in Tanzania follow hygienic practices and use containers to supply feed and water. Of the total respondents, about 22.2% are female-headed households. The gender of the head enables us to assess the association between the gender of the head and the level of inefficiency.

Results of the Stochastic Frontier Model
Table 4 presents results from the stochastic frontier production function model for pooled and country-specific data. We regressed the aggregated value of total live birds and eggs produced on the different inputs used for production. In the pooled model, the value of output is positively associated with the number of local hens, the cost of feed, the cost of vaccination, the amount of labor, and the total land holding. The country-specific frontier models show variability in the elasticities of the different inputs used in each country. The number of hens has a positive and significant effect in all three country models. The effect of vaccination and disease treatment is positive and significant in Ethiopia and Tanzania. The effect of feed, family labor, and household capital is positive and significant only in Tanzania. The elasticity of inputs on outputs can be explained using the pooled model results. The value of total poultry production is most elastic with respect to the number of hens, followed by vaccination and feed. A 1% increase in the number of hens increases the overall value of production by 0.264%, revealing that holding a small number of productive hens reduces productivity. The small number of productive hens at the village level is attributed to limited financial capacity, high mortality, and the long time needed to bring hens to reproductive maturity. Feed is another production- and productivity-limiting factor under village management conditions. A 1% increase in feed expenditure increases the value of output by 0.041%. Similarly, the effect of vaccination and disease treatment on the value of outputs is positive and significant. A 1% increase in vaccination expenditure increases the production value by 0.153%, which could be associated with reduced mortality and enhanced productivity. The effect of time spent on management is also positive and significant. A 1% increase in hours spent on management increases the value of production by 0.048%. Correspondingly, the effect of landholding size on the value of production is elastic and positive. A 1% increase in total landholding size increases the overall value of production by 0.026%. This is expected, as a larger holding size enables producers to keep larger flocks, produce more feed, and construct better chicken housing. The proxy for household capital, average household income, has a positive but insignificant effect on the production value. The country dummy indicators for differences in macroeconomic policy and institutional and environment-related factors underline significant variations between countries. Considering Ethiopia as a reference, the indicators for Tanzania and Nigeria are positive and statistically significant at 1%. This implies that village chicken producers in Tanzania and Nigeria could produce more output with the given inputs than producers in Ethiopia.
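A worked illustration of reading these Cobb-Douglas elasticities, using the pooled estimates reported above:

```python
def pct_output_change(elasticity, pct_input_change):
    """Cobb-Douglas approximation: %change in output ~= elasticity * %change in input."""
    return elasticity * pct_input_change

# pooled elasticities reported above: hens 0.264, vaccination 0.153, feed 0.041
print(pct_output_change(0.264, 10.0))  # 10% more hens -> ~2.64% higher output value
```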
Lambda (λ), the ratio of σu to σv, is remarkably high and statistically significant at 1% in the pooled and country-specific models. This shows that the variation in the technical inefficiency of chicken producers can be attributed to farm- or household-specific characteristics rather than to random variation. The estimated γ for the pooled model is 0.93, which indicates that about 93% of the total variation is due to variation in village producers' production efficiency and not to random variability. The country-specific values for Ethiopia and Tanzania are also 93%, while the value for Nigeria is 96%. The sum of the partial elasticities of all the inputs, or the return to scale, for the pooled data is about 0.55. This highlights a decreasing return to scale and suggests that increasing all the inputs by 100% leads to an increase in output of 55%. Similarly, the country-specific returns to scale for Nigeria and Tanzania are 0.39 and 0.69, respectively. Unlike Nigeria and Tanzania, the return to scale in Ethiopia is 1.0, which indicates a constant return to scale. The decreasing return to scale underscores that the current production technologies used by village producers are not input elastic, which could be associated with the production system and the type of poultry breeds kept by producers. This illuminates the need to make available more productive technologies and improved production practices with better input elasticity. When the production function exhibits a decreasing return to scale, using more and more inputs would decrease productivity; alternatively, the use of better technologies is an appropriate option [9]. Table 5 presents the estimated overall and country-specific technical efficiency of village chicken producers in the three countries. The estimated technical efficiency for the pooled model is about 38%, with a minimum of 28.5% in Ethiopia and a maximum of 43.6% in Nigeria. This overall technical efficiency indicates that the average village chicken producer produces 38% of the value of output produced by the most efficient village chicken producer using the same technology and inputs. The average efficiency score in the pooled model is less than the country-specific efficiency scores in Nigeria and Tanzania and greater than the country-specific efficiency score in Ethiopia. The pooled and country-specific models suggest that producers in Nigeria and Tanzania use existing resources more efficiently than producers in Ethiopia. The estimated average technical efficiency scores reveal significant productivity variation among village chicken producers in the three countries. This shows the presence of a tremendous opportunity to improve village chicken production and productivity with available inputs, without a momentous change in production practices. The low technical efficiency also highlights that improving farmers' access to locally adapted technologies could be another important policy option to enhance productivity [39]. The few available empirical studies have also documented low technical efficiency levels in the region [6,20,40]. The low level of technical efficiency underlines the need to examine the potential causes of efficiency differentials; this helps explore entry points for research and development activities. Furthermore, including exogenous determinants of inefficiency in the production function equation helps improve the estimated parameters [9,29].
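A small sketch of how the two summary quantities discussed above follow from the estimated parameters; the numeric comments use the pooled figures reported in the text:

```python
def variance_share(sigma_u, sigma_v):
    """gamma = sigma_u^2 / (sigma_u^2 + sigma_v^2): inefficiency share of variance."""
    return sigma_u**2 / (sigma_u**2 + sigma_v**2)

def returns_to_scale(elasticities):
    """Sum of Cobb-Douglas input elasticities: <1 decreasing, 1 constant, >1 increasing."""
    return sum(elasticities)

# for the pooled model, gamma ~ 0.93 and the input elasticities sum to ~0.55,
# i.e. decreasing returns to scale
```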
As stated above, the dependent variable in the exogenous determinant model is the technical inefficiency generated from the frontier model, and the exogenous determinants are household socioeconomic and other institutional variables. We present a summary of the model results in Table 6. The estimated parameters from the regression model revealed that the level of technical inefficiency is strongly affected by household- and farm-level characteristics. In the estimated coefficients, a positive sign shows an increasing effect on technical inefficiency (a decreasing effect on technical efficiency), and a negative sign shows a decreasing effect on technical inefficiency (an increasing effect on technical efficiency). We categorized these characteristics into lack of experience in breed selection and management, technical skills and experience, management practices, access to institutions and markets, household livelihood strategies, farm/flock size, and inter- and intra-household gender disparities.

Experience in Breed Selection and Management
Improved production practices play a significant role in enhancing the level of production and productivity of smallholder farmers. In smallholder chicken production, the productivity of chickens depends on breed performance, including age at sexual maturity, the number of eggs per clutch, the number of chicks produced per single hatch, adaptability, and growth performance. The estimated coefficients from the pooled and country-specific models highlight a negative and statistically significant association of breed selection and culling with the level of technical inefficiency. The marginal effect in the pooled sample shows that producers who practice breed selection are 18% less inefficient than those who do not. The effect of breed selection in Ethiopia and Nigeria is higher than in Tanzania. In Ethiopia, producers who practiced breed selection are 36% less inefficient than those who do not. Breed selection improves productivity by building a flock with the desired characteristics. According to the sampled respondents, producers use body size/weight, egg size/weight, egg productivity, feed requirement, and other criteria to select breeds. Empirical studies in developing countries have also documented the critical role of breed selection in improving the production and productivity of chickens [5,41,42]. Despite the long duration it takes, breed improvement through selection is more sustainable in rural household settings than other breed improvement techniques [5]. The negative and significant association between breed selection and technical inefficiency also suggests enhancing smallholders' access to locally adapted improved breeds that produce more eggs and meat with modest changes in management practices. The pooled model shows that producers who practice culling are 15% less inefficient than others. The effect of culling in Ethiopia is higher than in Tanzania and Nigeria. In Ethiopia, producers who practiced culling are 28% less inefficient than others. Village producers cull chickens based on criteria related to farmers' production objectives, mainly income generation and consumption of products [43]. Culling helps remove low-productivity and undesirable chickens. According to the sample respondents, most of them use egg productivity, age, disease concern, body weight, and broodiness as the most important criteria for culling.
The strong relationship between culling and technical efficiency underlines the important role of building producers' capacity in this aspect through proper guidelines.

Technical Skills and Experiences
Village chicken producers' skills and abilities in production and marketing decisions have a significant role in improving the production and productivity of the sector. Experience in poultry production and participation in poultry production and management training are the skill-related factors included in the inefficiency determinant model. There is a negative and statistically significant association between production experience and technical inefficiency in the pooled and country-specific models. The marginal estimate shows that a unit increase in chicken production experience decreases technical inefficiency by 1%. There is a negative and significant association between producer participation in training and technical inefficiency in the pooled and Tanzania models. The pooled model shows that producers' participation in poultry production and marketing training decreases technical inefficiency by 15%. Producers who participated in poultry management (feeding, watering, housing, breeding, health) and marketing training are more technically efficient than others. Moreover, to strengthen smallholders' competitiveness with the emerging commercial sector, their entrepreneurial skills, such as record keeping (e.g., breeding, financial), networking, and exploiting marketing opportunities, need to be strengthened. The above findings highlight the critical role of building smallholders' abilities to enhance production and productivity. Empirical studies have also documented the positive roles of skills and experience in smallholders' technical efficiency [44][45][46].

Management Practices
Improved flock management practices, such as better housing, feeding, and watering, enhance smallholders' production and productivity. The association between the housing index and technical inefficiency is negative and significant in the pooled and Ethiopia models; the effect of this variable is more significant in Ethiopia than in the pooled model. The marginal estimate from the pooled model shows that a unit increase in the housing index reduces technical inefficiency by 18.3%. In Ethiopia, a unit increase in the housing index reduces technical inefficiency by 96%. The negative relationship between these two variables is anticipated, as poultry housing protects birds from extreme weather conditions, disease contamination, predator attacks, accidents, and theft. The multidimensional role of poultry housing in improving smallholder chicken production and productivity has been documented by other researchers [4,47]. Although it is not significant, the estimated parameter highlights a negative association between producers' feeding and watering practices and their inefficiency level. Taking producers who do not provide water and feed in containers as a reference, producers who provide water and feed in containers appear less inefficient. The availability of feeders and drinkers enhances bird health through increased access to water and food, improved administration of drugs and vitamins, and reduced contamination of feed and water. It also prevents mice, rats, and other birds from eating the feed and transmitting diseases.
Other researchers have also explored the positive role of improved husbandry practices and input use in enhancing the productivity of the poultry sector in developing countries [12].

Access to Institutions and Markets
We included producers' access to credit, access to health services, and access to all-weather roads as institution- and market-related exogenous determinants of the inefficiency term. There is a negative and significant association between producers' access to credit and technical inefficiency in the pooled and Ethiopia models. The pooled model shows that producers who have access to credit services are 15% less inefficient than producers who do not. In Ethiopia, producers who have access to credit are 36% less inefficient than others. Access to credit helps producers use productivity-enhancing inputs such as feeds and vaccines, as well as fixed assets such as feeders, drinkers, and better housing systems. For instance, due to limited financial capacity, smallholder producers usually do not provide adequate vaccines and balanced feeds, leading to substantial losses of chicks and reduced productivity. However, smallholders' limited credit use could be attributed to both supply- and demand-related challenges [48], and adopting integrated approaches is a feasible solution. Other researchers have also examined the significant role of access to credit in the technical efficiency of smallholder farmers in developing countries [49,50]. Vaccination and disease treatment practices are expected to have a positive effect on technical efficiency. Contrary to expectation, there is a positive and significant association between access to health services and producers' inefficiency in the pooled model, and an insignificant positive association in the Nigeria and Tanzania models. Although insignificant, access to health services seems to have a negative effect in Ethiopia. There are several explanations for the positive association between vaccination and disease treatment practices and technical inefficiency, such as failures of vaccines/medication and limited extension and support services. Vaccine/medication failures might have resulted from a lack of proper handling, poor vaccine quality, the use of local antigens, the immunogenic response inside the bird's body, and the inability to follow manufacturers' instructions [51]. For example, of the total respondents who used vaccines or other medications within 12 months, more than half reported that the treatment was either poor or fair. Most of the producers administered disease treatment themselves, and only a small proportion (4.3%) received expert services. The limited skills of village producers and inadequate extension advice would contribute to the reported vaccine or drug failures. Furthermore, the availability of health services does not necessarily lead to their use, owing to producers' limited financial capacity or inadequate information on the available services. For instance, of the total respondents who had access to paid health services, a sizeable proportion (51.6%) did not carry out any routine vaccination or treatment within 12 months. This could be associated with fatigue from repeated drug failures, poor services, the limited capacity of veterinary workers, an inadequate supply of vaccines or drugs, or the workload involved when obtaining services is time-consuming.
Alternatively, of the total respondents who conducted routine vaccination or disease treatment, about 29.4% said they had no access to paid veterinary services, indicating that producers likely have alternative means of access to vaccines and disease treatment. The extensive use of traditional treatment options in the three countries could be another potential cause of the unexpected relationship. Of the total respondents who experienced disease outbreaks, 26.2 and 32.3% of the producers used traditional and modern medicine, respectively. The extensive use of traditional treatment options may lead to significant loss of chicks due to the unstandardized administration and limited efficacy of the drugs. Taken together, the above pieces of evidence demonstrate that the availability of paid health services may not lead to intensified use and success in disease prevention and treatment at the smallholder level. This suggests a need to build producers' capacity in health management options and to establish innovative health-service delivery systems. Such changes could be achieved through integrated approaches that include building producers' ability in disease monitoring and identification and improved treatment and prophylaxis of birds. Different stakeholders (e.g., the private sector, NGOs, cooperatives) that provide livestock health and extension services need to be engaged for sustained outcomes. Producers' distance from all-weather roads is an important indicator of access to different institutions and markets. There is a positive and significant association between distance to all-weather roads and inefficiency in the pooled, Ethiopia, and Nigeria models. In the pooled model, a unit increase in distance from an all-weather road increases technical inefficiency by 4%. In Ethiopia and Nigeria, a unit increase in distance to roads increases inefficiency by 7 and 2%, respectively. Access to all-weather roads enhances access to input and output markets and other productivity-improving services [18]. In developing countries, where input suppliers are limited and product consumption is seasonal, access to central markets has an irreplaceable role in improving the production and productivity of the sector. Limited rural road connectivity affects production by increasing the cost of moving inputs and outputs. Other empirical studies have also documented the positive role of access to roads in smallholders' technical efficiency [52,53].

Household Livelihood Strategies

Smallholders' livelihood strategies involve a range of farm and non-farm activities that significantly affect agricultural production and productivity. We included the contribution of chicken production and an income diversification index in the inefficiency models. The results highlight that ranking poultry production among the top three livelihood contributors has a negative and significant association with inefficiency only in the pooled model. The marginal estimate shows that, compared to others, these producers are 9.0% less inefficient. Farmers' production goals and attitudes can explain the observed significant association. Smallholder poultry production is mainly driven by economic and non-economic goals, which dictate the management practices of producers. Smallholders' production goals affect the choice of inputs and decisions on the size of farms. Producers who consider poultry production a significant livelihood contributor may invest their time and resources to maximize economic and other non-economic gains.
This illustrates the role of producers' orientation in the level of production and productivity. The effect of the household income diversification index on the technical inefficiency of households is negative and significant in the pooled and Ethiopia models. This result implies that village producers who have diverse income sources are less inefficient than others. In the pooled model, a unit increase in the income diversification of households would decrease the level of inefficiency by 8.0%, while in Ethiopia, it reduces the level of inefficiency by 13%. This could be associated with better access to liquid capital to purchase inputs and improved social capital resulting from diverse activities [54]. In developing countries, where accessing liquid capital is a significant challenge, diversified income sources enhance producers' ability to use the necessary inputs. The role of production objectives and income diversification in smallholders' technical efficiency has also been explored by other researchers [55][56][57].

Inter- and Intra-Household Gender Disparity

Poultry is the major type of livestock owned and managed by women in developing countries [4,58,59]. This underscores the need to consider the role of gender in the production and productivity of the sector. We included a dummy indicator for the gender of the household head and a simple women's empowerment index as exogenous determinants of inefficiency. With male as the reference, the relationship between the gender of the head and technical inefficiency is positive and statistically significant in the pooled and Nigeria models, suggesting that female-headed households are more technically inefficient than male-headed households. Compared with male-headed households, female-headed households are 10.0% more technically inefficient. Higher technical inefficiency for female-headed households could be associated with limited access to resources, information, and other institutional services that help enhance production and productivity. Conversely, there is a negative and significant association between the women's empowerment index and technical inefficiency in the pooled model. This demonstrates the role of improving women's access to resources and their decision-making ability in production and productivity. Given women's crucial role in producing and consuming poultry products, this highlights the need to identify gender-specific constraints and design interventions that would enhance the sector's efficiency [60]. Altogether, mainstreaming gender in research and development efforts and overcoming existing gender inequality could be among the essential strategies in developing countries. A few empirical studies have also documented the significant impact of gender empowerment on agricultural technical efficiency [61].
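The construction of the income diversification index is not detailed in this excerpt; one common choice, assumed in the short R sketch below, is one minus the Herfindahl concentration of household income shares, so that 0 means a single income source and values approaching 1 mean highly diversified income. The income categories are hypothetical.

```r
# Income diversification as 1 minus the Herfindahl index of income shares
# (an assumed construction; 0 = single source, near 1 = highly diversified)
income_diversification <- function(income_by_source) {
  shares <- income_by_source / sum(income_by_source)
  1 - sum(shares^2)
}

# Hypothetical household earning from poultry, crops, and wage labour
income_diversification(c(poultry = 300, crops = 500, wages = 200))  # 0.62
```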
Farm/Flock Size

Understanding the relationship between farm size and technical efficiency is a critical issue, given the relevance of the topic and the uncertainties in the available research [62]. We included the total number of chickens kept by households, measured by quantile, in the inefficiency model to assess the effect of flock size on the level of inefficiency. Taking the first quantile as a reference, there is a negative and significant association between flock size and technical inefficiency in the pooled and all country-specific models. However, unlike in Nigeria and Tanzania, the effect in Ethiopia is lower, which could be associated with the smaller flock size variability among the sampled respondents. Compared with the first quantile, the pooled model shows that producers in the second, third, and fourth quantiles are 24.3, 71.6, and 130.1% less inefficient, respectively. This result demonstrates that an increase in flock size decreases the technical inefficiency of producers. Apart from economies-of-scale benefits, holding a larger flock could be an incentive to use different inputs (e.g., feed, vaccines, and housing) and improve overall production and productivity. Empirical studies have also examined the effect of farm size on technical efficiency [63][64][65]. The farm size indicator strongly supports the significant role of increasing flock size in enhancing the sector's productivity. A smaller flock size in smallholder production could be associated with a high mortality rate and slower replacement of productive stock. The mortality rate is highest during the chick stage due to inadequate vaccination, poor management, and predator attacks. Slower stock replacement is the result of low hatchability, longer brooding periods, and the small number of eggs set in a single hatch. The multiplicity of factors behind flock-size reduction suggests the need for innovative interventions that would simultaneously address the most critical constraints. This may demand introducing new chick delivery and rearing systems into the value chain, which may include integrating smallholder producers into the commercial production system to benefit both actors concurrently. However, as Alvarez and Arias [66] showed, increasing farm size without building the technical skills and knowledge of producers may lead to diseconomies of scale, which also underlines the critical role of integrated interventions in enhancing agricultural production in developing countries.

Implications for Sustainable Transformation

While the frontier model shows the number of productive hens, vaccination and disease treatment, feed, and labor as essential inputs to enhance efficiency, the inefficiency model demonstrates various socio-economic factors as sources of heterogeneous inefficiency. Remarkably, results from the frontier and inefficiency models underline the need to tackle multidimensional challenges to transform the inefficient, traditional production sector into a more productive and efficient one. This requires integrated interventions that address the identified production and marketing constraints through innovative approaches. Such approaches require multi-stakeholder engagement, including village producers, government and non-government organizations, the private/commercial sector, marketing actors such as collectors, and downstream actors such as processors and consumers. We present a simple graphic summary of the possible integrated interventions in Figure 1. The left side of the figure depicts the major inputs currently used by most village producers in the traditional production system. These are available resources that village producers do not use efficiently due to the above-stated constraints. The center of the figure portrays the integrated interventions identified from the frontier and inefficiency models. While some of these interventions focus on addressing specific challenges at the farm, institutional, or production-environment level, others need to be integrated as cross-cutting issues to address various challenges across the whole value chain. Capacity building and women's empowerment are the two most critical cross-cutting issues that need to be mainstreamed within all interventions.
Aligning these issues across different interventions would contribute significantly to sustaining the sector's transformation. Smallholder producers need capacity building in a range of areas, including feed formulation, disease prevention and treatment, flock management, marketing of products, collective action, and networking with other actors. Similarly, due to their vital role in the production, consumption, and marketing of poultry products, women need to be at the center of any strategic intervention to transform the sector. Although implementing selected interventions may yield specific outcomes, the sustainable transformation of the sector requires strategies that incorporate the development options listed under the integrated interventions. Moreover, strategies and development options that focus on only some of the suggested interventions may lead to other potential problems that would result in major economic and social losses. For instance, improved production and productivity without access to a better market can lead to significant economic and financial losses. Therefore, any intervention that aims to transform the sector should consider devising integrated interventions that could be implemented through partnership and collaboration. The role of integrated interventions in sustainable agricultural development is documented in various studies [67,68]. The right side of the figure shows the bidirectional interplay between enhanced efficiency and household livelihood outcomes. As stated above, household livelihood strategies have a significant effect on the level of technical efficiency. On the other hand, enhanced efficiency results in better livelihood outcomes through higher productivity gains and reduced production costs. Figure 2 presents the average number of eggs and chickens consumed in three months by efficiency quantile. On average, village producers in the lower quantiles have lower egg and chicken consumption levels than producers in higher quantiles. For instance, producers in the fifth quantile consumed 13 eggs in three months, while producers in the first and second quantiles consumed about two and seven eggs, respectively. Higher egg and live bird productivity helps improve the food and nutritional security of household members (particularly women and children), increase household income, and even build social capital. The above findings show that the association between enhanced efficiency and livelihood outcomes appears cyclical. We hypothesize that such cycles of improved productivity and livelihoods would positively affect the beliefs of village chicken producers, encouraging them to adopt better management and production practices. Therefore, enhanced efficiency not only has a short-term effect on production and productivity but also a longer-term impact on the sector's overall performance due to positive, reinforcing changes in perceptions and livelihood outcomes.

Conclusions and Policy Implications

Village chicken production supports the livelihoods of many smallholder farmers in Africa, but the sector is widely known for low production and productivity. This could be associated with a lack of technical efficiency and inadequate technological progress in the production system. Results from the empirical analysis presented above reveal high technical inefficiency in the region, suggesting the presence of tremendous opportunities to enhance the production and productivity of the sector without additional inputs.
The heterogeneity in technical efficiency is associated with multiple factors at the household, farm, and institutional levels. Therefore, policy options that aim to transform the sector should address various challenges simultaneously at different levels. This may require implementing integrated interventions tailored to the specific socio-economic and environmental contexts of village producers. Research and development efforts need to give considerable attention to building the capacity of smallholder farmers in various production and marketing activities, with particular attention to the empowerment of women along the value chain. A higher level of production efficiency could be achieved through sizable and strategic investment in capacity building through extension and training services and by creating enhanced input delivery and output marketing systems. Moreover, the decreasing returns to scale of the production frontier in our empirical model may suggest the limited potential of existing technologies and production practices to generate higher outputs with increasing use of inputs. Shifting the production frontier by introducing improved technologies that respond to better input use could be another policy option. A potential and immediate solution would be improving village producers' access to locally adapted, farmer-preferred improved breeds and production practices. The above policy options can be realized through innovative public-private partnerships and stakeholder engagement approaches.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The datasets used for this study can be found in the International Livestock Research Institute (ILRI) dataset portal at https://data.ilri.org/portal/ (accessed on 26 July 2021).
2021-09-09T20:45:57.713Z
2021-07-30T00:00:00.000
{ "year": 2021, "sha1": "cbfb10b24a394678fdf7c1cf3e0ee08b010eb3e5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/15/8539/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "824415286479bc3cf5e10dfca7cb529034600790", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics", "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
259327548
pes2o/s2orc
v3-fos-license
Temperature change exerts sex-specific effects on behavioural variation

Temperature is a key factor mediating organismal fitness and has important consequences for species' ecology. While the mean effects of temperature on behaviour have been well-documented in ectotherms, how temperature alters behavioural variation among and within individuals, and whether this differs between the sexes, remains unclear. Such effects likely have ecological and evolutionary consequences, given that selection acts at the individual level. We investigated the effect of temperature on individual-level behavioural variation and metabolism in adult male and female Drosophila melanogaster (n = 129), by taking repeated measures of locomotor activity and metabolic rate at both a standard temperature (25°C) and a high temperature (28°C). Males were moderately more responsive in their mean activity levels to temperature change when compared to females. However, this was not true for either standard or active metabolic rate, where no sex differences in thermal metabolic plasticity were found. Furthermore, higher temperatures increased both among- and within-individual variation in male, but not female, locomotor activity. Given that behavioural variation can be critical to population persistence, we suggest that future studies test whether sex differences in the amount of behavioural variation expressed in response to temperature change may result in sex-specific vulnerabilities to a warming climate.

JAB, 0000-0003-3312-941X; GP, 0000-0001-9737-7995; BBMW, 0000-0001-9352-6500; DKD, 0000-0003-2209-3458

Such mean-level behavioural responses are commonly attributed to the effect of temperature on underlying physiology, whereby higher temperatures (when experienced below thermal optima) increase metabolic rate and energy production [2,10]. As behaviour can determine organismal survival and reproductive success [12][13][14], the effect of temperature on behavioural and physiological traits may have important consequences for the ecological and evolutionary dynamics of populations.
However, recent research has revealed that within populations, not all individuals are similarly responsive to temperature change [15][16][17][18][19]. Importantly, individual differences in thermal behavioural plasticity may have broader consequences for animal populations by altering the amount of individual-level behavioural variation expressed across differing temperatures [18]. While relatively less is known about the role of temperature in mediating individual-level behavioural variance, recent work in a variety of ectotherms has found that rising temperatures can drive increases in behavioural variation both among and within individuals [17,18,20,21]. Here, it is thought that the positive relationship between temperature and metabolic rate in ectotherms results in a greater amount of energy available to express behavioural variation at higher temperatures [10,17]. These changes in individual-level variance may be key to the adaptive capacity of animal populations. Indeed, previous research has shown that increased among-individual behavioural variation increases colony fitness in ants (Temnothorax longispinosus; [22,23]). Similarly, while the ecological and evolutionary consequences of within-individual (i.e. residual) behavioural variation are not yet clear, prior studies have suggested that individuals may exhibit increased within-individual variation as an adaptive strategy for dealing with heightened predation risk [24,25]. Therefore, changes in individual behavioural variation in response to thermal fluctuations are expected to have implications for organismal fitness and population persistence in the face of environmental change. Despite the importance of individual-level behavioural variance to population persistence, little is currently known about the intrinsic factors that mediate the effects of temperature on among- and within-individual behavioural variation. In particular, it is likely that such variation may differ across the sexes given the clear differences in life history between males and females, and the reported differences between males and females in their thermal responsiveness [26][27][28]. Indeed, a recent meta-analysis across 44 ectothermic species found a negative correlation between body mass and thermal acclimation capacity [27]. Interestingly, the mass-dependence of thermal acclimation capacity was associated with modest sex differences in thermal plasticity within species that exhibited sexual size dimorphism, potentially due to heavier organisms having greater thermal inertia in response to temperature change [27]. Moreover, males and females have also been shown to differ in the relationship between their physiology and behaviour [29][30][31], suggesting that temperature-induced effects on physiological traits may exert sex-specific effects on behaviour. However, whether the effects of temperature on individual-level variation are sex-specific remains largely uninvestigated. Given that individual-level behavioural variation may promote population persistence in response to environmental change [32,33], sex differences in levels of thermally mediated among- and within-individual variation may have key ecological consequences. Here, we investigated the potential for ecologically relevant temperature change to mediate sex-specific behavioural variation.
Specifically, we tested the effects of temperature on individual-level variation in locomotor activity in male and female vinegar flies (Drosophila melanogaster), and explored whether any such effects are associated with underlying metabolic rates. The vinegar fly is a small ectothermic species that exhibits sex differences in both body size and energy management strategies [29,30], making it an ideal study subject. A recent meta-analysis found a negative association between body size and thermal acclimation capacity in ectotherms, whereby smaller size is associated with a higher capacity for thermal acclimation [27]. Based on this reported association, we predicted that male flies would demonstrate increased population-level (i.e. mean-level) thermal plasticity in both their locomotor activity and metabolic rate due to their smaller body size in comparison to females, resulting in greater sex differences in activity and metabolic rate at 28°C compared to 25°C. We also aimed to identify whether temperature change altered both among- and within-individual variation in locomotor activity, and whether these effects differed between males and females. However, we had no clear predictions about how the effect of temperature on behavioural variance would differ between male and female D. melanogaster, and thus, we present these results as a first step in assessing sex differences in thermally mediated among- and within-individual variation in locomotor activity.

Methods

(a) Fly collection and maintenance

The experimental population of D. melanogaster used in this study was originally collected from wild populations in Coffs Harbour, NSW, Australia. Details on the origin and maintenance of this population have been reported previously [34]. Briefly, 60 wild-caught non-virgin females were collected and transported to the animal facilities at Monash University. Offspring from each female (10 daughters and 10 sons) were mixed to create a single, mass-bred population. Flies were housed in standard vials (40 ml) on a potato dextrose-yeast-agar food medium (37.32% yeast, 31.91% dextrose, 23.40% potato medium and 7.45% agar combined with 98.48% water, 0.97% ethanol, 0.45% propionic acid and 0.11% nipagen) for approximately 260 discrete generations, corresponding to ten years of breeding, under standardized conditions (16 pairs per vial, across 10 vials, all adults admixed each generation prior to redistribution into individual vials; egg density limited to 100-120 per vial). Stocks were held within a controlled-temperature room (12 : 12 h light : dark cycle) maintained at 25°C (mean ± s.e. during experimental period = 24.57 ± 0.002°C), with the exception of unavoidable rare power outages or malfunctions of infrastructure (approx. three occasions over a decade) that led to short periods of thermal stress.

(b) Experimental animals

Focal individuals (64 females and 65 males) were produced by parents and grandparents that were each 5 days of adult age at the time of egg-laying. Virgin male (body mass; mean ± s.e. = 0.66 ± 0.01 mg) and female (body mass; mean ± s.e. = 1.10 ± 0.01 mg) focal flies were collected under light CO2 anaesthesia within 6 h of eclosion. We chose to use only virgin flies as previous research has demonstrated significant effects of mating status on D. melanogaster locomotor activity and metabolic rate [30]. All flies were sorted into individual vials and left to recover for 3 days prior to the start of experiments.
This 3-day recovery period ensured that any lasting effects of CO2 anaesthesia on fly physiology were eliminated by the time of the experimental assays [35].

(c) Behavioural experiments

Adult focal flies were 3 days post-eclosion at the start of behavioural trials. All flies were individually tested for locomotor activity across six separate behavioural trials, each 15 min in duration, over the course of 3 days. Behavioural assays were always performed within an approximately 2 h time window (13:49-15:35 h) to control for the previously reported variation in fly activity over the course of the day [30]. The locomotor activity of flies was tested at both their standard housing temperature (25°C) and at a high temperature (28°C). Data collected by the Australian Bureau of Meteorology from Coffs Harbour (where the original population was sourced) in 2020 indicate that average monthly minimum and maximum summertime temperatures range between approximately 19.7 and 29.2°C, with an approximately 7.3°C (± 0.07°C) average range in temperatures within a single day. Thus, 25°C and 28°C were chosen as they are within the thermal range that Australian D. melanogaster experience in the wild, and have been previously used to investigate thermal plasticity in this species [36]. At the beginning of behavioural trials, all flies were individually sorted into clean polycarbonate chambers (65 × 3 mm; length × width; volume = 0.46 ml) capped with 5 mm of foam at each end. Half of the flies underwent the first behavioural trial at their standard housing temperature (25°C), while the remaining individuals were first tested at the high temperature (28°C). Prior to the beginning of the trial, we measured the actual temperature of each individual polycarbonate chamber in both the 25°C (mean ± s.e. = 25.27 ± 0.02°C) and 28°C (mean ± s.e. = 27.53 ± 0.02°C) treatments using an infrared thermometer (Smart Sensor, Dongguan, China). All animals were given 60 min to acclimate to the testing temperature prior to the start of the assay. Behavioural trials were conducted in one of two separate assay chambers where fly activity was automatically tracked using ZebraLab software (ZebraBox, ViewPoint Behaviour Technology, Lyon, France). The temperature treatment of each assay chamber was randomized over the experiment to avoid any confounding effects of assay chamber on treatment temperature. Similar to previously established methods [37], we recorded the total distance that each fly moved (in mm) as a measure of locomotor activity over the 15 min trial. After the completion of the first behavioural trial, flies were removed from the assay chamber and allowed to acclimate for 60 min to the test temperature of the second trial. This was set up so that those flies that were first tested at 25°C were subsequently tested at 28°C, and vice versa. Following the conclusion of the second behavioural trial, flies were returned to their individual housing vials. This process was repeated each day for three days and allowed us to repeatedly measure the locomotor activity of each fly at both their standard housing temperature and at the high temperature. The order of the temperature treatments was alternated daily to control for any order effects. The experiment was run across four one-week sampling blocks that were each separated by one week; the focal flies used in each block were generated by independent sets of parental flies (n = 32 flies per block).
The sex of flies and temperature treatment order were balanced within each block across the experiment to control for any differences between blocks.

(d) Metabolic rate

After the completion of behavioural trials, all flies were tested for their standard (SMR) and active (AMR) metabolic rates at both their housing temperature (25°C) and a high temperature (28°C), following previously established protocols [30]. Trials took place within one of two Panasonic MIR 352H-PE climate control cabinets (Panasonic Healthcare, Sakata, Japan) set at either 25°C (mean ± s.e. = 24.8 ± 0.01°C) or 28°C (mean ± s.e. = 28.2 ± 0.02°C). Metabolic trials were conducted for 9.5 h overnight (21:00-06:30) across two separate nights (see electronic supplementary material for detailed metabolic rate methods). Flies that underwent the first metabolic rate trial at 25°C were subsequently tested at 28°C during the second trial in a distinct metabolic rate chamber, and vice versa for flies initially tested at 28°C. Similar to behavioural trials, temperature treatment order was fully balanced across sexes. We measured the rate of CO2 production (VCO2 µl h−1) of each fly as a proxy for metabolic rate using eight Sable Systems International (SSI, Las Vegas, Nevada, USA) multiple animal versatile energetics systems (MAVEn), each attached to a Li-Cor 7000 CO2/H2O infrared gas analyser (Li-Cor, Lincoln, Nebraska, USA). Flies were individually sorted without anaesthesia into clean polycarbonate chambers (65 × 3 mm; volume = 0.46 ml) capped with 5 mm of foam at each end. Four randomly chosen individuals (2 males and 2 females) were then gently loaded into one of the eight MAVEn systems, where they remained until the end of the trial. Individual chambers were sequentially measured for a period of 10 min each, with a baseline recording (5 min) taken between each measurement to account for drift in the Li-Cor 7000 throughout the experiment. This was repeated six times over the course of each trial, resulting in six VCO2 measurements for each individual fly at both 25°C and 28°C. Flow rate was set by the MAVEn system and held constant at 15 ml min−1 throughout all experiments. We also simultaneously recorded the routine movement of each fly during each measurement using infrared light detectors in the MAVEn activity board. Movement was detected through changes in the infrared light field above each detector and is presented as a unitless measurement corrected to an absolute difference sum (ADS-movement). Specifically, ADS-movement is calculated by sequentially adding the absolute differences between adjacent data points from deflections in the infrared light detectors above each metabolic rate chamber. While ADS-movement is not an absolute measure of locomotion, it can be likened to the 'intensity' of movement exhibited by the animal and is widely used in the literature to account for variance in metabolic rate due to variation in organismal activity during the recording [38,39]. For each fly, we extracted the mean VCO2 from each 10 min recording, as well as the corresponding range in ADS-movement. Similar to previous research [40], we took the recording periods with the lowest and highest VCO2 readings at both 25°C and 28°C as measures of SMR and AMR, respectively. Following the completion of the metabolic rate assay, we measured the body mass of each fly using a fine micro-balance (±0.0001 mg; Cubis series MSA2.7s-000-DM microbalance, Sartorius AG, Goettingen, Germany).
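As a concrete sketch of this SMR/AMR extraction step, the R code below takes the lowest and highest of the six mean VCO2 recordings per fly and temperature; the data frame and column names are hypothetical stand-ins for the MAVEn output.

```r
set.seed(1)
# Hypothetical long-format data: six 10-min recordings per fly per temperature
vco2 <- data.frame(
  fly_id      = rep(1:2, each = 12),
  temperature = rep(rep(c(25, 28), each = 6), times = 2),
  vco2_ul_h   = runif(24, min = 1, max = 4)  # mean VCO2 of each recording
)

# SMR = lowest and AMR = highest mean VCO2 recording per fly x temperature
smr_amr <- aggregate(vco2_ul_h ~ fly_id + temperature, data = vco2,
                     FUN = function(x) c(SMR = min(x), AMR = max(x)))
smr_amr
```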
(e) Statistical analysis

Data were analysed using R v. 4.0.3 [41]. A total of 768 behavioural trials (i.e. 192 h of behavioural recording) from 129 individuals (64 females and 65 males) were included in the analysis. One male escaped after the first day of behavioural recordings and was replaced by a male conspecific of the same age from the same block. All continuous covariates were mean-centred and scaled (mean = 0; s.d. = 1), while the chamber in which behavioural trials were conducted (1 or 2) was centred (i.e. chamber 1 = -0.5; chamber 2 = 0.5) prior to analysis to aid in model fitting and interpretation (see [42]). In all analyses, we used the brms package [43] to fit Bayesian linear mixed-effects models to investigate sex differences in locomotor activity and metabolic rate. All models were run for 5000 iterations (1000 warmup), with a thinning interval of two, and on four chains using relatively uninformative, default priors. Model convergence was visually checked via trace plots, with all chains converging (R̂ = 1). Inference was based on posterior means and their associated 95% credible intervals (CI). We first used a Bayesian, double-hierarchical generalized linear mixed-effects model to investigate sex differences in behavioural variation across the two different temperatures (table 1). Briefly, this approach allows the explicit modelling of both mean (i.e. mean-model) and residual (i.e. residual-model) level behavioural variation within a single overarching framework [44]. Preliminary analysis found no substantial effect of either experimental block (F3,122 = 0.05, p = 0.99) or temperature treatment order (F1,123 = 1.47, p = 0.23) on locomotor activity, and therefore, these variables were excluded from the final model to reduce model complexity. For the final double-hierarchical model, we included total distance moved (mm) as the response variable; the mean model included body mass (mg), trial number (1-6), assay chamber (1 or 2) and time of day (13:49-15:35 h; coded as minutes since 13:00 h) as covariates, while sex (male or female) and temperature (25°C versus 28°C) were included as fixed-effect factors. The final model also included a sex by temperature interaction. To test for sex differences in behavioural variation, we fitted individual ID as a random intercept separately for each sex and allowed this to differ between the two temperature treatments. In the residual model, we allowed variance estimates to differ between males and females at each temperature to investigate how the sexes differed in their residual, within-individual behavioural variation across temperature treatments. A recent meta-analysis found that individuals often differ from one another in their within-individual behavioural variance [45]. While not the focus of the current study, we nevertheless included individual ID as a random intercept in the residual model to control for any among-individual differences in within-individual variance. Following model fitting, we extracted all variance estimates and calculated the magnitude of the difference in among-individual (i.e. ΔV_I) and residual within-individual (i.e. ΔV_W) variance between treatment groups to statistically compare how males and females differed in the effect of temperature on behavioural variation (e.g. [46][47][48]). Similarly, we calculated the coefficient of both among-individual (CV_I) and within-individual (CV_W) variation in locomotor activity for males and females at both temperature treatments.
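One plausible way to specify such a double-hierarchical model in brms is sketched below on simulated data. The variable names are hypothetical, and the random-effects structure (temperature-specific individual effects fitted separately for each sex via gr(), plus an individual intercept in the sigma model) is our reading of the verbal description above rather than the authors' exact code; on real data, the posterior draw names used at the end should be verified with variables(fit).

```r
library(brms)

set.seed(1)
# Hypothetical repeated-measures data; names are illustrative only
dat <- data.frame(
  id          = factor(rep(1:20, each = 4)),
  sex         = factor(rep(c("female", "male"), each = 40)),
  temperature = factor(rep(c(25, 28), times = 40)),
  mass        = rep(rnorm(20), each = 4),   # mean-centred, constant per fly
  trial       = rep(1:4, times = 20),
  chamber     = rep(c(-0.5, 0.5), times = 40),
  time        = rnorm(80),
  distance    = rnorm(80, 1000, 300)
)

# Mean model and residual (sigma) model fitted jointly; (0 + temperature |
# gr(id, by = sex)) gives sex-specific among-individual variances at each
# temperature plus the cross-temperature correlation of individual effects
fit <- brm(
  bf(distance ~ mass + trial + chamber + time + sex * temperature +
       (0 + temperature | gr(id, by = sex)),
     sigma ~ sex * temperature + (1 | id)),
  data = dat, family = gaussian(),
  chains = 4, iter = 5000, warmup = 1000, thin = 2
)

# Contrast among-individual variances across temperatures from the posterior
# (these draw column names are hypothetical; check with variables(fit))
draws <- as_draws_df(fit)
v_m25 <- draws[["sd_id__temperature25:sexmale"]]^2
v_m28 <- draws[["sd_id__temperature28:sexmale"]]^2
quantile(v_m28 - v_m25, c(0.025, 0.5, 0.975))  # delta V_I with 95% CI
```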
The coefficient of variation is a mean-standardized variance estimate that disentangles the effect of temperature change on behavioural variation from mean-level changes in locomotor activity [49,50]. As above, we took the magnitude of the difference in coefficients of among- (ΔCV_I) and within-individual (ΔCV_W) variation to statistically compare treatment groups. We also report adjusted repeatability estimates for both sexes at both temperature treatments for completeness (table 2). We ran two univariate generalized linear mixed-effects models to investigate sex differences in the population-level thermal plasticity of metabolic rate (see electronic supplementary material, tables S2 and S3 for full model output). Some individuals were lost due to early mortality prior to the completion of metabolic rate trials, resulting in a total of 64 females and 57 males that completed all metabolic rate trials and were included in the model. Routine movement data of flies during the metabolic rate trials (ADS-movement) were log10(x + 1) transformed prior to analysis. SMR and AMR were each included as the respective response variables in two separate models, while body mass, relative humidity (95.39-97.67%), trial day (day 1 versus day 2), and the ADS-movement of each individual fly during the trial were included as covariates. In addition, we included sex and temperature (25°C versus 28°C) as fixed-effects factors, as well as interactions between sex and mass, sex and temperature, and sex and ADS-movement in the model. Individual ID was included as a random intercept separately for each sex. In these models, a significant interaction between sex and temperature would indicate that males and females differed in how they altered their metabolic rate across the temperatures (i.e. sex differences in population-level thermal metabolic plasticity). We found that males and females marginally differed from each other in their population-level thermal plasticity in locomotor activity (see §3). Given that sex differences in body size may contribute to variation between males and females in their thermal plasticity [27], we also ran a post-hoc analysis to test whether body mass differences between males and females may be driving sex differences in thermal plasticity. The model structure was identical to the locomotor activity analysis described above, with the addition of a body mass by temperature interaction in the mean model. As males and females did not differ in their population-level metabolic plasticity (see §3), we did not include a post-hoc analysis for the SMR or AMR models.

Table 1. Model estimates (± 95% CI) extracted from the Bayesian linear mixed-effects model investigating sex differences in locomotor activity. Estimates are given for both the mean model (i.e. average behaviour) and the residual model (i.e. within-individual behavioural variation). Fixed-effects estimates from the mean model displayed in bold are those whose CI do not overlap zero (note: variance estimates cannot overlap with zero as they are positively bound). Females at 25°C are set as the reference group. Variance estimates from the mean and residual model were converted back to the original scale for each treatment group from brms model output.
Finally, as previous research has found genetic correlations between locomotor activity and metabolic rate in male, but not in female, D. melanogaster [29], we also ran two bivariate generalized linear mixed-effects models to investigate potential sex differences in the relationship between locomotor activity and metabolic rate (both SMR and AMR). Activity and either SMR or AMR (depending on the model) were included as the response variables. The activity, SMR, and AMR models contained the same fixed effects as described directly above. Individual ID was included as a random intercept separately for each sex in all models. We estimated sex-specific among-individual correlations between locomotor activity and either SMR or AMR, respectively (note that correlations were not temperature-treatment specific as we did not have repeated measures of SMR or AMR at either 25°C or 28°C). ADS-movement was retained as a covariate in both the SMR and AMR models to control for the effect of routine movement on metabolism, following previously established methods [29,30]. However, for completeness, we also ran a supplementary analysis where metabolic rate was not corrected for ADS-movement during the trial. The correlation estimates between locomotor activity and either SMR or AMR uncorrected for ADS-movement were qualitatively similar to those reported in the main text (see electronic supplementary material).

Results

(a) Mean-level effects: locomotor activity

Males and females marginally differed from each other in their population-level thermal behavioural plasticity (i.e. sex × temperature interaction; figure 1a; table 1). When including a body mass by temperature interaction in the post-hoc model, there was no longer any interaction between sex and temperature on locomotor activity (sex × temperature interaction in the post-hoc model [95% CI] = -5.08 [-283.18, 284.64]), suggesting that the marginal sex difference in population-level behavioural thermal plasticity was likely driven by differences in body mass between males and females. Males were also more active than females at both temperature treatments, after controlling for sex differences in body mass (figure 1a; table 1). Furthermore, trial number, time of day and assay chamber all had an effect on locomotor activity. However, while there was a marginally negative effect of body mass on locomotor activity, CIs for this effect were wide and included zero (table 1). Among-individual correlations in locomotor activity across the two temperature treatments were close to one in both sexes (table 1), suggesting that the rank-order of among-individual differences in locomotor activity was maintained across the temperature treatments in both sexes [51]. However, coefficients of among-individual variation differed little between the sexes across the temperature treatments, suggesting that the sex differences in the effect of temperature on among-individual variation were largely driven by changes in mean-level locomotor activity.

Table 2. Coefficients of among- (CV_I) and within-individual (CV_W) variation and adjusted repeatability estimates (±95% CI) for the locomotor activity of females and males at both 25°C and 28°C.

Figure 1. Population-level thermal plasticity of (a) locomotor activity, (b) standard metabolic rate (SMR), and (c) active metabolic rate (AMR) in both males and females. Plots represent conditional effects (± 95% CI) extracted from Bayesian linear mixed-effects models for both females (activity: n = 64; metabolic rate: n = 64) and males (activity: n = 65; metabolic rate: n = 57), with estimates displayed in blue and red for assays conducted at 25°C and 28°C, respectively.
Discussion

We predicted that sexual dimorphism in the body size of D. melanogaster would result in males displaying, on average, increased population-level thermal plasticity in their locomotor activity and metabolic rate when compared to females. In line with our predictions, we found increased population-level plasticity in locomotor activity in response to temperature change in males relative to females. However, this was not the case for either standard (SMR) or active (AMR) metabolic rate, with both males and females similarly increasing their metabolism in response to higher temperatures. In addition, we found evidence that temperature change exerts sex-specific effects on both among- and within-individual variation in locomotor activity. These results may have possible implications for population persistence in the face of environmental change, which we discuss below. We found that males increased their locomotor activity to a greater extent than females in response to rising temperatures, and that this effect was likely attributable to sex differences in body size. However, we should note that there was uncertainty around this effect, with credible intervals partially overlapping zero. Nevertheless, previous work has also demonstrated sex differences in the population-level thermal behavioural plasticity of D. melanogaster [36,52]. In particular, while the average activity of males has been shown to increase with higher temperatures, female activity rates plateaued at temperature increases above 24°C [52]. Similarly, previous research found that male D. serrata maintained higher activity rates across a broader range of temperatures than females, resulting in wider thermal performance curves in males than in females [26]. These sex differences in the thermal plasticity of locomotor activity may have substantial fitness consequences. Indeed, prior research has suggested that locomotor activity may be under sexually antagonistic selection in D. melanogaster, whereby increased activity rates result in high reproductive fitness in males, but decreased fitness in females [53]. Therefore, the greater increase in activity levels observed in males, relative to females, in response to high temperatures may be an adaptive response to maximize fitness under contrasting thermal environments. Future research testing locomotor activity and fitness across a broader range of temperatures will be needed to investigate whether the sex differences in population-level thermal plasticity are adaptive. Previous work in D. melanogaster has demonstrated that mating status, starvation, and the social context of flies during the assay may influence locomotor activity, and that such effects may differ between the sexes [30,37,54]. We used satiated virgin flies that were tested in an asocial context; whether the marginal sex difference in the thermal plasticity of locomotor activity is maintained in mated flies tested across different levels of food deprivation and varying social conditions is not clear and will require further research. Despite our results showing sex differences in thermal behavioural responsiveness, we found no substantial sex differences in the population-level thermal plasticity of either SMR or AMR. More specifically, contrary to our predictions that the smaller body size of males would result in greater population-level thermal metabolic plasticity, we found that both male and female flies were similarly responsive in their SMR and AMR to rising temperatures.
Previous research in D. melanogaster also found no evidence for greater population-level thermal metabolic plasticity in males compared to females [36]. Indeed, male flies were actually shown to be less responsive in their metabolic rate to temperature change than females [36]. Taken together, this suggests that the reduced body size of male D. melanogaster, relative to females, does not result in males being more metabolically plastic, on average, in response to changes in the thermal environment. Why the sexes differed in their thermal behavioural, but not metabolic, plasticity at the population level remains unclear. We surmise that this may have been due to sex differences in energy management strategies. For example, we found that the relationship between activity and either SMR or AMR was in opposite directions for males and females, albeit with substantial uncertainty around these estimates (CIs overlapping with zero). Indeed, previous research has reported a positive genetic correlation between locomotor activity and SMR in D. melanogaster males, but not females [29]. Thus, potential sex differences in the relationship between locomotor performance and metabolism may explain the current findings, whereby equal increases in SMR at higher temperatures are associated with increased activity rates in males, but not females. However, whether there are sex differences in genetic covariance between locomotor activity and SMR in our study population, and whether these genetic covariances change across different temperature treatments, is not known and would therefore benefit from future quantitative genetic and metabolomic studies (e.g. [55,56]). We also found that temperature altered among-individual variation in locomotor activity, and that this effect was sex-dependent. Further, male and female flies demonstrated positive among-individual correlations in their locomotor activity across the temperature treatments that were close to 1, suggesting that the rank order of among-individual differences was maintained across the temperature treatments. Previous work in ectotherms has found that individuals differ from each other in their behavioural response to temperature change, resulting in changes to among-individual variance [17,18]. However, whether patterns of temperature-dependent among-individual behavioural variance may differ between the sexes has previously been overlooked. While among-individual variation in locomotor activity increased at higher temperatures in males, this was not the case for females, which did not differ in their among-individual variation across the temperature treatments. This resulted in males showing greater among-individual variation compared to females at 28°C, but not at 25°C. Previous research has reported greater additive genetic variance for locomotor activity in male D. melanogaster when compared to females [29]. In our study, all flies were raised under tightly controlled conditions, suggesting that at least some of the variation we detected among individuals may have had an additive genetic basis. If this is indeed the case, higher temperatures may release cryptic genetic variation in male locomotor activity that may help buffer them against the potentially negative effects of thermal variation.
However, coefficients of among-individual variation (a mean-standardized variance estimate) differed little between the temperature treatments in either sex, suggesting that the sex-dependent effects of increased temperature on among-individual variation were largely driven by sex differences in average locomotor activity at 28°C. Whether temperature change alters the expression of additive genetic variance differently in males and females, and how this is influenced by changes in average locomotor activity, will be a key topic for further research. Future experiments that have the power to partition genetic variance should repeatedly test the behaviour of individual flies across a broader range of temperatures to investigate whether individual differences in locomotor activity across changing temperatures are heritable. Measuring the survival and reproductive success of these individuals to home in on associations between temperature-dependent behavioural variation and organismal fitness will be key to understanding and predicting potential sex-specific vulnerabilities to rising temperatures. Sex differences in within-individual variance in locomotor activity were also linked to temperature; males demonstrated greater within-individual variance at increased temperatures, while females showed the opposite pattern. The findings in males are in line with previous research in aquatic ectotherms, which has similarly found increased within-individual behavioural variance at higher temperatures [17,18,21]. It has been suggested that increased within-individual variation at higher temperatures may be due to the positive effect of temperature on ectothermic metabolism and behavioural activity, where increased temperatures result in a greater amount of energy available to express behavioural variation [10,17]. Indeed, coefficients of within-individual variance in males did not increase substantially at higher temperatures, suggesting that the greater within-individual variance in males at higher temperatures was largely driven by their increased average locomotor activity at 28°C. Yet we did not observe such a pattern in our females. On the contrary, we found that within-individual variance in activity rates actually decreased at higher temperatures in females, and this effect was independent of average changes in locomotor activity. This is despite finding that female metabolic rates increased with warmer temperatures, highlighting that increased energy production in response to rising temperatures in ectotherms may not necessarily drive concurrent increases in within-individual behavioural variance. It is unclear why the effect of temperature on within-individual variation in locomotor activity differed between males and females in our study. We suggest that this effect may again be due to sex differences in energy management strategies, whereby strong positive relationships exist between activity and metabolic rate in males, but not in females [29,30]. Here, increased metabolic rates at higher temperatures may provide males with more energy available to express greater behavioural activity and subsequent within-individual behavioural variation. Conversely, previous research has actually found a negative correlation between evening activity and metabolic rate in female D. melanogaster, suggesting a potential energetic trade-off between locomotor performance and metabolism [30].
While locomotor activity was not measured during the evening in the current study, this trade-off between activity and metabolism may partly explain why increased metabolic rates at higher temperatures in females resulted in lower within-individual behavioural variance. While the ecological implications of these sex differences in within-individual behavioural variability are unclear, we note that previous studies have identified putative associations between within-individual variance and predation in invertebrates [24,57,58]. This suggests that the sex-specific effects of temperature on within-individual variation in activity rates found in the current study could lead to temperature-dependent differences between males and females in their vulnerability to predation. Further studies are required to test these links in D. melanogaster. It is also important to highlight that sex differences in residual within-individual variance may have been caused by differences between males and females in measurement error, or sex differences in plasticity in response to unmeasured microenvironmental changes. While we cannot rule this out, we find these explanations for the current results unlikely, given that locomotor activity was automatically tracked using the same equipment for both sexes, and that behavioural trials were conducted at a consistent time of the day in assay chambers with standardized temperature, humidity and lighting. Furthermore, we also note that, while not the focus of the current study, we found preliminary evidence for greater among-individual differences in within-individual variance in males when compared to females (table 1). Future quantitative genetic studies will be needed to better understand whether the greater differences between individual males in their within-individual variation have an additive genetic basis and can respond to selection. In summary, our study revealed key sex differences in thermal behavioural, but not metabolic, plasticity in the vinegar fly. We also found that higher temperatures triggered larger among- and within-individual variation in activity rates in males, but not in females, and that these effects were partly attributable to the influence of higher temperatures on average locomotor activity. Given that increased behavioural variation and a diversity of behavioural strategies have been suggested to enhance population persistence in the face of changing environmental conditions [32,33,59], sex differences in the amount of behavioural variation expressed in response to temperature change may result in sex-specific vulnerability to a warming climate. While our research represents a first step in assessing these implications, future studies investigating whether behavioural differences in thermal responsiveness are heritable and mediate organismal fitness are needed to better understand the adaptive capacity of populations to persist in the face of future climate change.

Data accessibility. Data and statistical code to reproduce the results reported in this manuscript are publicly available from the Open Science Framework online repository (https://osf.io/geczs/).
Characteristics of patients with hip fractures and comorbid fall-related injuries in the emergency department

Aim: Hip fracture is one of the most common fall-related injuries in the elderly population. Although falls may cause multiple types of injuries, no study has investigated the details of fall-related injuries accompanying hip fractures. This study aimed to characterize the features of such injuries. Methods: This is a cross-sectional study using data from four tertiary emergency departments in Japan. We identified patients diagnosed with hip fracture, including femoral neck fracture, trochanter fracture, or subtrochanteric fracture, from May 12, 2014 to July 12, 2021. Among patients with hip fracture, we included those with fall-related hip fracture. We excluded patients aged <40 years and those whose fall was high-energy onset, defined as a fall from more than three steps or 1 m. Results: Among 326 emergency department patients diagnosed with fall-related hip fracture, 288 patients were eligible for the analysis. Seventeen patients (6%) had injuries in addition to hip fractures. The most frequent injury was upper limb injury (e.g., distal radial fracture; n = 5, 30%), followed by head injury (e.g., subdural hematoma; n = 4, 24%), chest injury (e.g., pneumothorax; n = 2, 12%), and trunk injury (vertebral compression fracture; n = 2, 12%). There were no significantly different clinical characteristics between patients with comorbid injuries and those without. Conclusion: A total of 6% of patients diagnosed with hip fracture had other fall-related injuries. The most frequent were upper limb injury and head injury. Our findings underscore the importance of whole-body assessment in patients with fall-related hip fracture in the emergency department.

INTRODUCTION

Hip fracture is one of the most common fall-related injuries, with an estimated annual incidence of 1.31 million and a worldwide prevalence of consequent disability of 4.48 million. 1 Hip fracture is common in the elderly population, which is expected to increase to 1.4 billion in 2030 and to 2.1 billion by 2050. 2 As a result, the annual incidence of hip fracture is estimated to increase to 4.5 million by 2050. 3 Therefore, fall-related hip fractures are an important issue worldwide. Hip fracture is frequent not only in the elderly, but also in female, osteoporotic, sarcopenic, cognitively impaired, and institutionalized patients. 4,5 These conditions are also risk factors for other injuries and fractures, 6 suggesting that patients who visit the emergency department (ED) for fall-related hip fractures may also have other fall-related injuries. 7 It has been reported that comorbid head injury with hip fracture was associated with a significantly higher mortality rate than hip fracture alone. 8 Therefore, it is important to know the other injuries accompanying a hip fracture. To find such comorbid injuries, a whole-body survey is important. It is often difficult to obtain medical histories because elderly patients with impaired cognitive status cannot clearly describe their symptoms and injury mechanisms. 9 Furthermore, an apparent hip fracture may bias the clinical assessment and lead to insufficient physical examination. 10 However, a whole-body survey is not always conducted in trauma cases, including hip fracture; even in trauma centers, it was conducted in only 65% of patients. 11 To our knowledge, no previous studies have investigated the frequency of fall-related injuries accompanying hip fractures in the ED.
It is important to know the epidemiological data because neglecting comorbid fall-related injuries with hip fracture results in delayed treatment and worse outcomes. We aimed to provide epidemiological data on fall-related injuries accompanying hip fractures in the ED.

Study design

This is a cross-sectional study using data from the ED of Hitachi General Hospital (Ibaraki, Japan) between May 12, 2014 and October 11, 2020, Saiseikai Utsunomiya Hospital (Tochigi, Japan) between April 1, 2020 and May 18, 2021, Japanese Red Cross Society Kyoto Daiichi Hospital (Kyoto, Japan) between April 1, 2014 and June 11, 2021, and Kakogawa Central City Hospital (Kakogawa, Japan) between April 1, 2020 and July 12, 2021. Because we extracted data from the Next Stage ER system (an emergency department information system by TXP Medical, Tokyo, Japan), the period of data collection varied with the dates of the system's implementation. The institutional review board of each participating hospital approved this study, and the requirement for informed consent was waived because the data were anonymized.

Data collection

We extracted the following clinical data: age, sex, route of presentation to the hospital (walk-in, ambulance, or physician-staffed ambulance), chief complaint, medical history, physical examination, and physician's diagnosis made on admission or discharge, using the Next Stage ER system. 12 The Next Stage ER system can extract clinical data from electronic medical records and map them to existing categories through natural language processing algorithms. The details of the system have been described previously. 13 This system can accurately extract clinical data and has been validated for use in clinical research. [13][14][15] Anonymized patient information was extracted from the electronic medical records of each hospital. Among the collected data, the following were used for this research: age, sex, route of presentation to the hospital (walk-in or ambulance), comorbidities, medications, type of hip fracture, and accompanying injuries based on the International Classification of Diseases, Tenth Revision (ICD-10) codes. Comorbid injuries were counted in duplicate when patients had multiple injuries; therefore, the total number of comorbid injuries is presented regardless of the number of patients.

Patient selection criteria and definition

We identified patients diagnosed with hip fracture, including femoral neck fracture, trochanter fracture, and subtrochanteric fracture. These diagnoses were based on ICD-10 codes S72.0X (femoral neck fracture), S72.1X (trochanter fracture), and S72.2X (subtrochanteric fracture). We further identified fall-related hip fractures from the medical records. To exclude atypical fall-related hip fractures (i.e., endogenous or secondary to high-energy trauma), the exclusion criteria were patients aged <40 years and those whose fall was high-energy onset, defined as a fall from more than three steps or 1 m, based on the definition of fragility fracture. 16,17 Comorbid injuries were also diagnosed using ICD-10 codes.

Statistical analysis

Continuous data are expressed as medians (interquartile range) and were compared using the Mann-Whitney U test. Categorical data are expressed as frequencies (%) and were compared using a χ² test or Fisher's exact test. We used R statistical software (version 4.1.0; R Foundation for Statistical Computing, Vienna, Austria) for all analyses.
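For illustration, the group comparisons described above can be sketched as follows; the study used R, and this Python version uses hypothetical ages and cell counts (only the 17 vs 271 split and the roughly 82% female share among comorbid cases echo the reported figures) simply to show the shape of the two tests.

```python
from scipy import stats

# Hypothetical ages of patients with vs without comorbid injuries.
age_comorbid = [84, 88, 79, 91, 86, 82]
age_isolated = [81, 77, 85, 80, 83, 78, 88, 76]

# Continuous data: compare distributions with the Mann-Whitney U test.
u_stat, p_age = stats.mannwhitneyu(age_comorbid, age_isolated)
print(f"age: U = {u_stat:.1f}, p = {p_age:.3f}")

# Categorical data: 2x2 table of sex (rows) by comorbid injury (columns).
# Fisher's exact test is preferred when expected cell counts are small.
table = [[14, 120],   # women: with / without comorbid injury
         [3, 151]]    # men:   with / without comorbid injury
odds_ratio, p_sex = stats.fisher_exact(table)
print(f"sex: OR = {odds_ratio:.2f}, p = {p_sex:.3f}")
```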
RESULTS

A total of 106,477 patients visited the EDs of the participating hospitals during the study period (Fig. 1). Among these patients, 456 were diagnosed with hip fracture, and 326 were diagnosed with fall-related hip fracture. We excluded 38 patients who were younger than 40 years old (four patients) or had high-energy onset (34 patients), and the remaining 288 patients were eligible for the analysis (Fig. 1).

DISCUSSION

In this study using data from four EDs of tertiary care hospitals, 6% of patients with fall-related hip fractures had comorbid injuries. Among these, upper limb and head injuries were the two major comorbid injuries. Because there were no specific clinical characteristics in patients with comorbid injury, it is important to look thoroughly for other injuries in patients with hip fracture. The most frequent site of comorbid injury was the upper limb, which plays an important role in preventing critical injuries secondary to falls. It is reasonable to assume that patients reflexively use their upper limbs to protect themselves when falling, which may explain this finding. This study suggests that careful, systematic examination of patients with fall-related hip fractures may be needed to properly assess comorbid injuries in the ED. The systematic examination is a head-to-toe examination based on the anatomical physical examination, as in the tertiary survey, and it is important to examine all parts of the body, even those that may not appear related to the injury. Because this study investigated the characteristics of comorbid injury and not overlooked injury, the study data did not include missed injuries. Therefore, this study may have underestimated the possible risk of missed injuries. According to a previous epidemiological study, up to 38% of injuries were missed among trauma patients in intensive care units. 18 Therefore, our findings should encourage a whole-body trauma survey in fall-related hip fracture. Previous evidence suggests that whole-body examinations are highly effective in reducing missed injuries. For instance, a meta-analysis showed that the addition of tertiary surveys to secondary surveys significantly reduced missed injuries in trauma patients (odds ratio, 0.63; 95% confidence interval [CI], 0.44-0.90). 19 In our study, some patients required early medical intervention, such as those with subdural hematoma and pneumothorax. Treatment delay may lead to severe complications. Therefore, the identification of comorbid injury in patients with hip fractures in the ED is essential. In our study, comorbid injuries were more likely to be found in women (among patients with accompanying injury, women accounted for 82%), suggesting that female sex may be correlated with comorbid fall-related injuries in patients with hip fractures. This may be because frailty is common in the elderly female population, which may lead to more fall-related injuries. 20 Harmsen et al. 21 also reported that older age and female sex are risk factors for fall-related injuries. Because this is a small exploratory study, larger studies are needed to further explore the identified candidate risk factors of comorbid fall-related injuries with hip fractures. There are several limitations to this study. First, although this study used data from four tertiary care hospitals, the sample size was limited, especially the number of patients who had comorbid injuries.
Second, we only identified comorbid injuries among hip fractures; because the data set of this study did not allow for sufficient analysis of diagnostic delays, we could not identify missed comorbid injuries, and the true frequency of missed comorbid injuries is therefore unknown.

CONCLUSION

Seventeen patients (6%) with hip fracture had other fall-related injuries. The frequent sites of comorbid injuries were the upper limb and head. A thorough assessment of the whole body may be beneficial in preventing missed diagnoses of comorbid injuries in patients with hip fracture.
Effect of Tubular Chiralities and Diameters of Single Carbon Nanotubes on Gas Sensing Behavior: A DFT Analysis

Using density functional theory, the adsorption of CO, CO2, NO and NO2 gas molecules on single carbon nanotubes of different chiralities and diameters is investigated in terms of energetics, electronic properties and surface reactivity. We found that the adsorption of CO and CO2 gas molecules depends on the chiralities and diameters of the CNTs, whereas the adsorption of NO and NO2 gas molecules does not. Also, the electronic character of the CNTs is not affected by the adsorption of CO and CO2 gas molecules, while it is strongly affected by NO and NO2 gas molecules. In addition, it is found that the dipole moments of zig-zag CNTs are always higher than those of the arm-chair CNTs. Therefore, we conclude that zig-zag carbon nanotubes are preferable to arm-chair carbon nanotubes as gas sensors, especially for detecting NO and NO2 gas molecules.

Introduction

Monitoring of combustible gases, gas leaks, and environmental pollution is of great concern for public safety. Advances in nanotechnology give great promise for achieving new sensing materials. Since the discovery of carbon nanotubes in 1991, single-walled carbon nanotubes (SWCNTs) have been intensively investigated as nanoscale gas sensors because of their large surface-to-bulk ratio and their ability to modulate electrical properties upon adsorption of various kinds of gas molecules [1]-[17]. The emission of carbon and nitrogen oxides (CO, CO2, NO and NO2) results from the combustion of fossil fuels, contributing to both smog and acid precipitation, and affecting both terrestrial and aquatic ecosystems [18]. Although many efforts have been made to use catalysts to reduce the amount of carbon or nitrogen oxides in the air [19]-[25], an efficient method of sensing and removing carbon and nitrogen oxides is still required. Because carbon and nitrogen oxides are among the most dangerous air pollutants, toxic and global-warming gases, our work concentrates on investigating the effect of tubular chiralities and diameters of single carbon nanotubes on the gas sensing behavior for CO, CO2, NO and NO2 gas molecules, applying first-principles calculations.

Computational Methods

All calculations were performed with density functional theory as implemented in the G03W package [26]-[29], using the B3LYP exchange-correlation functional and the 6-31g(d,p) basis set. Pure (5,0), (9,0), (5,5) and (6,6) carbon nanotubes were considered. The diameters [30] and the adsorption energies of gas molecules on the CNTs (E_ads) [31] are calculated from the following relations:

d = (√3·a_C-C/π)·√(n² + nm + m²)

E_ads = E(CNT+gas) − E(CNT) − E(gas)

where n and m are integral numbers, the components of the chiral vector, and a_C-C is the carbon-carbon bond length.
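As a quick numerical check of the two relations above, here is a minimal Python sketch; the graphene C-C bond length of 1.421 Å is an assumed standard value rather than one quoted in the paper.

```python
import math

A_CC = 1.421  # assumed C-C bond length in angstroms (graphene value)

def cnt_diameter(n: int, m: int) -> float:
    """Diameter (angstroms) of an (n, m) nanotube from its chiral indices."""
    a = math.sqrt(3) * A_CC          # graphene lattice constant, ~2.46 A
    return a * math.sqrt(n * n + n * m + m * m) / math.pi

def adsorption_energy(e_complex: float, e_cnt: float, e_gas: float) -> float:
    """E_ads = E(CNT+gas) - E(CNT) - E(gas); negative means favourable binding."""
    return e_complex - e_cnt - e_gas

for n, m in [(5, 0), (9, 0), (5, 5), (6, 6)]:
    print(f"({n},{m}) CNT: d = {cnt_diameter(n, m):.2f} A")
```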
Adsorption of CO, CO2, NO and NO2 Gas Molecules on CNTs

We have adsorbed CO and CO2 gas molecules vertically on three different positions of the (5,0), (9,0), (5,5) and (6,6) CNTs: above a carbon atom (carbon site), above a bond between two carbon atoms (bond site) and above the center of a hexagon ring (vacant site). The calculated adsorption energies of CO and CO2 gas molecules are listed in Table 2; the best position and adsorption energy for the CO gas molecule is found above the bond site. Also, we have adsorbed NO and NO2 gas molecules vertically on the same three positions of the (5,0), (9,0), (5,5) and (6,6) CNTs: above a carbon site, above a bond site and above a vacant site. The calculated adsorption energies of NO and NO2 gas molecules are listed in Table 3. It is found that the best adsorption energies of the NO gas molecule are on the (9,0) CNT above a bond site, then above a carbon site and after that above a vacant site, with adsorption energies of −1.65 eV, −1.55 eV and −1.34 eV, respectively. For the NO2 gas molecule, the best adsorption is found above the bond site on the (9,0) CNT, with an adsorption energy of −1.75 eV. Also, it is noticed that the vacant site is always preferred for NO2 gas adsorption on all the studied CNTs except the (9,0) CNT. Therefore, one can conclude that all of these CNTs can be used as gas sensors for NO and NO2 gas molecules.

From Table 2 and Table 3, one can investigate the effect of the chiralities and the diameters on the gas-sensing behavior of the CNTs. It is clear that the adsorption of CO and CO2 gas molecules depends on the chiralities and the diameters of the CNTs: it is enhanced with increasing diameter of the zig-zag CNTs. However, the adsorption of NO and NO2 gas molecules is independent of the chiralities and the diameters of the CNTs.

Energy Gaps of Adsorbed CO, CO2, NO and NO2 Gas Molecules on CNTs

From Table 4, it is clear that the adsorption of CO and CO2 gas molecules on the CNTs does not affect the electronic character of the CNTs. Also, the band gaps of the pristine CNTs and of the CNTs with adsorbed CO and CO2 gas molecules are very close.
From Table 5, the adsorption of NO and NO2 gas molecules on the CNTs strongly affects the electronic character of the (9,0) and (5,0) CNTs. However, there is no change in the electronic character of the (5,5) and (6,6) CNTs. The band gap of the pristine (5,0) CNT is increased from 0.70 eV to 1.61 eV and to 1.37 eV when NO and NO2 gas molecules are adsorbed on it, respectively. Also, the band gap of the pristine (9,0) CNT is increased from 0.25 eV to 1.34 eV and to 1.25 eV when NO and NO2 gas molecules are adsorbed on it, respectively. One can conclude that the electronic character of the (5,0), (9,0), (5,5) and (6,6) CNTs is not affected by the adsorption of CO and CO2 gas molecules, while the adsorption of NO and NO2 gas molecules strongly affects only the electronic character of the (9,0) and (5,0) CNTs; the (5,5) and (6,6) CNTs are not affected.

HOMO-LUMO Orbitals of Adsorbed CO, CO2, NO and NO2 Gas Molecules on CNTs

Our calculated band gaps show that the adsorption of CO and CO2 gas molecules on the CNTs does not affect the band gaps of the pristine CNTs, whereas the adsorption of NO and NO2 gas molecules strongly affects the band gaps. To explain this, the molecular orbitals of the adsorbed CO, CO2, NO and NO2 gas molecules on the (5,0), (9,0), (5,5) and (6,6) CNTs are investigated (see Figure 2 and Figure 3). The band gaps of the pristine CNTs are calculated and listed in Table 4. Comparing the HOMO-LUMO energies of the pristine CNTs with those after the adsorption of CO and CO2 gas molecules, it is clear that the energy values are very close. Also, it is noticed that there is no contribution from the gas molecules to the molecular orbitals, and the electron density of the HOMO and LUMO is distributed over all the carbon atoms of the CNTs, except for the (9,0) CNT, where it is located at the terminals of the tube (see Figure 2). Comparing the HOMO-LUMO energies of the pristine CNTs with those after the adsorption of NO and NO2 gas molecules, it is clear that the energy values are very close in the case of the (5,5) and (6,6) CNTs and quite far apart in the case of the (5,0) and (9,0) CNTs. The HOMO energy levels of the (5,0) and (9,0) CNTs after adsorbing NO and NO2 gas molecules become deeper (lower) in energy, while the LUMO energy levels become higher in energy, resulting in an increase of the band gap from 0.70 eV to 1.81 eV in the case of the (5,0) CNT and from 0.25 eV to 1.34 eV in the case of the (9,0) CNT. Also, it is noticed that there is a contribution from the NO gas molecule to the LUMO of the (9,0) and (6,6) CNTs (see Figure 3).

Table 4. The calculated energy gaps (Eg) of CO and CO2 above a carbon site, a bond site and a vacant site of the pristine (5,0), (9,0), (5,5) and (6,6) CNTs.
The Reactivity of CNT Surfaces before and after Adsorbing Gas Molecules

Our calculated band gaps and molecular orbitals show that the adsorption of CO and CO2 gas molecules on the CNTs affects neither the band gaps nor the molecular orbitals of the pristine CNTs, but the adsorption of NO and NO2 gas molecules strongly affects both the band gaps and the molecular orbitals of the (5,0) and (9,0) CNTs. To clarify this, the reactivity of the CNT surfaces before and after adsorbing CO, CO2, NO and NO2 gas molecules on the (5,0), (9,0), (5,5) and (6,6) CNTs is studied (see Table 6 and Table 7). The surface reactivity of the pristine CNTs is calculated and listed in Table 6. The dipole moments of the pristine (5,0), (9,0), (5,5) and (6,6) CNTs are found to be 0.54 Debye, 0.20 Debye, 0.00 Debye and 0.00 Debye, respectively. Comparing the dipole moments of the pristine CNTs with those after adsorption of the CO and CO2 gas molecules, it is clear that the dipole moment values are very close in the case of CO adsorption but higher in the case of CO2 adsorption (see Table 6). Also, it is noticed that the highest dipole moments after the adsorption of the CO2 gas molecule are 0.74 Debye (when CO2 is adsorbed above the bond site of the (9,0) CNT) and 0.77 Debye (when CO2 is adsorbed above the vacant site of the (6,6) CNT), respectively. Comparing the dipole moments of the pristine CNTs with those after adsorption of the NO and NO2 gas molecules, it is found that the dipole moments become higher. When the NO and NO2 gas molecules are adsorbed on the vacant sites of the CNTs, the dipole moments are either quite close to or lower than the dipole moments of the pristine CNTs, except in the case of adsorbing NO2 on the (5,0) CNT, where the dipole moment is increased. Also, all the calculated dipole moments after adsorbing NO and NO2 gas molecules on the carbon sites of the CNTs are increased, except in the case of adsorbing NO2 on the (5,0) CNT, where the dipole moment is decreased. In the case of adsorbing NO and NO2 gas molecules on the bond sites of the CNTs, the dipole moments are also increased, except in the case of adsorbing NO2 on the (9,0) CNT, where it is decreased (see Table 7).

Figure 2/3 caption (partial): Energies of the HOMO and LUMO are listed above the molecular orbitals and are given in eV.

Table 6. The calculated dipole moments of the pristine CNTs and after adsorbing CO and CO2 gas molecules above a carbon site, a bond site and a vacant site of the (5,0), (9,0), (5,5) and (6,6) CNTs. All dipole moments are given in Debye.
From Table 6 and Table 7, it is clear that the dipole moments of the zig-zag (5,0) and (9,0) CNTs are always higher than those of the arm-chair (5,5) and (6,6) CNTs.

Conclusion

The gas-sensing behavior of CNTs, considering a range of different nanotube diameters and chiralities as well as different adsorption sites, is reported. The adsorption of CO, CO2, NO, and NO2 gas molecules on the (5,0), (9,0), (5,5) and (6,6) CNTs was studied using B3LYP/6-31g(d,p). Three different adsorption sites (above a carbon site, a bond site and a vacant site) were considered on the CNTs. It is found that the adsorption of CO and CO2 gas molecules depends on the chiralities and the diameters of the CNTs and is enhanced with increasing diameter of the zig-zag CNTs. However, the adsorption of NO and NO2 gas molecules is independent of the chiralities and the diameters of the CNTs. Also, the electronic character of the (5,0), (9,0), (5,5) and (6,6) CNTs is not affected by the adsorption of CO and CO2 gas molecules, while the adsorption of NO and NO2 gas molecules strongly affects only the electronic character of the (9,0) and (5,0) CNTs; the (5,5) and (6,6) CNTs are not affected at all. It is found that the dipole moments of the zig-zag (5,0) and (9,0) CNTs are always higher than those of the arm-chair (5,5) and (6,6) CNTs. Also, it is noticed that the dipole moment after adsorbing the NO gas molecule on the bond site of the (5,0) CNT is increased by ten times compared with the dipole moment of the pristine (5,0) CNT. Therefore, these findings indicate that zig-zag carbon nanotubes are better than arm-chair carbon nanotubes as gas sensors, especially for NO and NO2 gas molecules.

Table 1. The configuration structures and diameters of the studied CNTs.

Table 2. The calculated adsorption energies (E_ads) of CO and CO2 above a carbon site, a bond site and a vacant site of the pristine CNTs.

Table 3. The calculated adsorption energies (E_ads) of NO and NO2 above a carbon site, a bond site and a vacant site of the pristine CNTs.

Table 5. The calculated energy gaps (Eg) of NO and NO2 above a carbon site, a bond site and a vacant site of the pristine CNTs.

Table 7. The calculated dipole moments of the pristine CNTs and after adsorbing NO and NO2 gas molecules above a carbon site, a bond site and a vacant site of the (5,0), (9,0), (5,5) and (6,6) CNTs.
Still and Moving Image Evidences for Mating of Echinococcus granulosus Reared in Culture Media.

BACKGROUND: Echinococcus granulosus cultivation is very important for the improvement of different aspects of medical and veterinary research. Despite many advances in this area, there is a missing link in the in vitro life cycle of adult worms: fertilization. According to researchers' observations, self-fertilization can occur in worms living in the dog intestine, but despite all sorts of experimental techniques, this phenomenon has never been observed in worms reared in culture media. Furthermore, cross-fertilization has not been observed in vitro, or even in parasites of dog intestinal origin, although it is theoretically possible. During a follow-up of cultivated adult worms, evidence of behaviors similar to self-mating (type 2) and cross-mating was observed in our lab, which is presented here. METHODS: Protoscoleces were aseptically removed from sheep hydatid cysts, washed twice with PBS and then cultivated in S.10E.H culture medium. The stages of parasite growth were observed using an inverted microscope for two months, and all stages and behaviors were photographed microscopically. Several movies were also made of these behavioral features. RESULTS: Around 55 days post-cultivation, some evidence of behaviors similar to self-mating (type 2) and cross-mating was observed in some of the mature adult worms. However, fertile eggs were never observed in these parasites. CONCLUSION: Based on the above observations, these parasites show a tendency toward unsuccessful self-mating/fertilization (type 2), the failure of which could be due to anatomical position and physiological maturation. Also, the lack of suitable conditions for self-fertilization causes the worms to attempt unsuccessful cross-mating/fertilization in culture media.

Introduction

Echinococcus granulosus, the causative agent of cystic echinococcosis, is an important zoonotic worm with a global distribution and public-health and economic importance. The disease is prevalent in the Middle East and many parts of our country, Iran (1-4). Cultivation of this parasite is very valuable for the improvement of different branches of medical and veterinary research. Although many advances have been made in cultivation methods, fertilization remains a missing link in the in vitro life cycle of E. granulosus (5-7). According to previous observations, self-fertilization is considered to be a normal mode of sperm transfer in natural intestinal worms, but despite several experimental techniques, it has never been seen in cultivated worms (6,8). In addition, cross-fertilization has never been seen even in intestinal parasites, although it is theoretically possible (8,9) and has rarely been suggested by molecular studies (10,11). During daily follow-up of cultivated worms (12), we succeeded in detecting some evidence of behavior similar to self-mating (type 2) and cross-mating, which can be very important for further understanding of the biology and speciation of the parasite and for subsequent research. Here we discuss our observations.

Materials and Methods

Hydatid cysts with more than 80% fertile protoscoleces (PSC) were collected from infected sheep at Shiraz abattoirs, cut open under aseptic conditions, passed through two layers of sterile gauze, washed three times with sterile PBS and then cultured in S10E.H culture media using the same procedures described before, with some modifications (6,12,13).
All stages of parasite growth were observed during a long-term assay (about two months), from the first day of growth to the adult forms, using an inverted microscope. Photos as well as movies were made during this follow-up.

Results

Around 55 days post-cultivation, 50-60% of the initial PSCs had developed into adult worms with at least three proglottids in the culture flasks. In addition, most of them had at least one mature segment. Active proglottids with a protruded cirrus were frequently observed in mature adult worms. The genital pore also opened and closed rhythmically, especially in the last segment of some of them. During daily checks of the culture flasks, we succeeded in detecting some behavior similar to mating in reared E. granulosus. In one observation, the second segment of an individual worm connected to the third segment of another worm (Fig 1: A & B; Movie 1). It appeared that the connection had occurred between the cirrus of one worm and the genital pore of the other. Furthermore, in another observation, an individual worm attempted to bring its third segment close to the first segment of the same worm (Fig 1: C & D; Movie 2). Although these observations were not accompanied by mature eggs in the culture flasks, this evidence may confirm a positive potential of E. granulosus for cross-mating/fertilization.

Discussion

Initial efforts to grow PSCs in cyst fluid were carried out in 1926 and 1928 by Dévé (14,15). Smyth succeeded in introducing in vitro cultivation of different larval stages of this parasite (16). He also cultivated sexually mature strobilae of this parasite from PSCs (5). Although many advances have been made in cultivation methods over the years, fertilization remains the missing link in the in vitro life cycle of E. granulosus (5-7; 12-19; 21). E. granulosus is a hermaphroditic parasite. The presence of male and female reproductive systems and a common genital pore shows that the parasite has the potential for both cross- and self-fertilization (9). Self-fertilization is probably an advantage for this small worm, for which it might be difficult to find another worm, particularly in light infections (22). This phenomenon has been observed in E. granulosus collected from naturally and experimentally infected definitive hosts (8,9). In sections derived from dog worms, the cirrus was inserted into the vagina, and it has been morphologically proved that the seminal receptacle of intestinal worms is filled with spermatozoa (8). Self-fertilization has also been confirmed by some molecular studies in dog worms (10,11,23). Although cross-fertilization has not been observed in sections obtained from dog intestine (8), some researchers have suggested that the aggregative behavior observed in intestinal worms is probably a result of attraction between individual worms (24). Limited molecular reports have suggested the possibility of this phenomenon (10,11), while others have rejected it (23). The major difference between sexually mature cultured worms and dog worms was the failure of insemination to occur in the worms reared in culture media (25). In the well-developed genitalia of the mature reared adult worm, many spermatozoa were seen in the testes, but a uterus filled with immature ova and an empty seminal receptacle have always been observed (6). In the present study, we frequently observed active proglottids with a protruded cirrus.
The genital pore also opened and closed rhythmically, especially in the last segment, a situation that has also been observed in previous investigations (6,13). We also succeeded in detecting some behavior similar to mating between the same and different worms, although mature eggs were not found. Our finding may confirm a positive potential for cross-fertilization, as was suggested by the detection of early stages of the development of a shelled egg in the uterus cavity of a monozoic/vesicular worm cultured in vitro (20). According to movie number 1 (Movie 1), we speculate that some cultivated parasites show a tendency toward "cross-mating" in culture, which could be due to the lack of suitable conditions for self-fertilization. In particular, the large number of parasites in the culture media may be an advantage, in that the small worms can find others. Another of our findings supports the view that insemination between different proglottids of the same strobila is highly unlikely to occur (9). According to movie number 2 (Movie 2), we believe the worms have a tendency toward unsuccessful mating between their individual proglottids. Kumaratilake et al. believed that type 2 self-fertilization is unsuccessful because no two proglottids of the same strobila are ever at the same stage of maturation (9). In addition to Kumaratilake's opinion, we believe that short stature and the low number of segments are important reasons for the failure of connection, and hence mating/fertilization, between two strobila segments of an individual worm.

Conclusion

We have observed some possible evidence of unsuccessful mating/fertilization in reared worms. The extreme complexity of the requirements for fertilization may explain why this physiological function does not result in the production of fertile eggs. Based on the above observations, these parasites show a tendency toward unsuccessful self-mating/fertilization (type 2), the failure of which could be due to anatomical position and physiological maturation. Also, the lack of suitable conditions for self-fertilization causes the worms to attempt unsuccessful cross-mating/fertilization in culture media. More study is necessary to improve our knowledge in this area.
Disease Phenotypes in a Mouse Model of RNA Toxicity Are Independent of Protein Kinase Cα and Protein Kinase Cβ

Myotonic dystrophy type 1 (DM1) is the prototype for diseases caused by RNA toxicity. RNAs from the mutant allele contain an expanded (CUG)n tract within the 3' untranslated region of the dystrophia myotonica protein kinase (DMPK) gene. The toxic RNAs affect the function of RNA binding proteins, leading to sequestration of muscleblind-like (MBNL) proteins and increased levels of CELF1 (CUGBP, Elav-like family member 1). The mechanism for increased CELF1 is not very clear. One favored proposition is hyper-phosphorylation of CELF1 by Protein Kinase C alpha (PKCα), leading to increased CELF1 stability. However, most of the evidence supporting a role for PKCα relies on pharmacological inhibition of PKC. To further investigate the role of PKCs in the pathogenesis of RNA toxicity, we generated transgenic mice with RNA toxicity that lacked both the PKCα and PKCβ isoforms. We find that these mice show disease progression similar to that of mice wild-type for the PKC isoforms. Additionally, the expression of CELF1 is not affected by deficiency of PKCα and PKCβ in these RNA toxicity mice. These data suggest that the disease phenotypes of these RNA toxicity mice are independent of PKCα and PKCβ.

Introduction

Myotonic dystrophy type 1 (DM1) is a slowly progressing and highly variable multisystemic disorder. It is characterized by wasting of muscles and weakness. DM1 is caused by an expanded (CTG)n repeat in the 3'-untranslated region (UTR) of the DM protein kinase (DMPK) gene [1][2][3]. The mutant RNA forms RNA foci, which alter the activity of RNA binding proteins such as CELF1 and muscleblind-like 1 (MBNL1) [4,5]. MBNL proteins can colocalize with the RNA foci [6][7][8], and the prevailing model of DM1 pathogenesis invokes sequestration of these proteins by the mutant DMPK mRNA [4]. Strong evidence for the role of MBNL proteins in DM1 pathogenesis has been obtained through mouse knockout models of the various Mbnl genes [9][10][11][12][13]. In contrast, CELF1 levels are reportedly increased in myoblasts [14], in the heart [15], and in skeletal muscles from DM1 patients [16]. Thus, mouse models have utilized over-expression of CELF1 and demonstrated DM1-related phenotypes such as muscle histopathology and cardiac defects [17][18][19]. Proposed molecular mechanisms of increased CELF1 invoke signaling pathways mediated by PKCs and/or glycogen synthase kinase 3 beta (GSK3β) [20][21][22]. Consistent with this idea, inhibitors of PKC and GSK3β were able to rescue some of the salient phenotypes in mouse models of RNA toxicity [21,23].

The protein kinase C (PKC) family, comprising many isoforms, phosphorylates serine and threonine residues in many target proteins [24]. Different PKC isoforms are expressed in skeletal muscle, including the classical isoform PKCα [25]. PKCα is the predominant isoform in skeletal muscle, whereas PKCβ and PKCγ are expressed at very low levels [26]. The role of PKC in RNA toxicity in skeletal muscle is not clear, but it has been investigated in a cardiac-specific mouse model using pharmacological inhibitors that were effective in improving cardiac phenotypes [23]. Previously, we have shown increased CELF1 expression in our inducible/reversible DM5 mouse model of RNA toxicity and that CELF1 levels are responsive to the presence of the toxic RNA [27].
In addition, we demonstrated that the levels of CELF1 in skeletal muscle correlated with skeletal muscle histopathology in the mouse model and in tissues from patients with DM1 [28]. Of note, genetic deletion of Celf1 in the DM5 mice resulted in mild improvement of muscle histology [28]. Since increased CELF1 levels are thought to be due to activated PKC, we investigated the role of PKC in the skeletal muscle phenotypes of our RNA toxicity mice using a genetic approach.

Phenotypic effects of Prkca-/-/Prkcb-/- double knockout in the RNA toxicity mice

Using our inducible/reversible DM5 mouse model of RNA toxicity, we have shown that induction of toxic RNA expression (with 0.2% doxycycline in drinking water) results in many features of DM1, including myotonia, cardiac conduction abnormalities, abnormal muscle pathology, and RNA splicing defects [27]. In this model, CELF1 is increased in the skeletal muscle, but not in the heart [27]. We also showed that deletion of Celf1 in this model results in mild improvement in skeletal muscle histopathology [28]. To assess the role of PKCα in regulating CELF1 levels and the phenotypes in these RNA toxicity mice, preliminary experiments were done using Prkca knockout mice (Prkca tm1Jmk) obtained from Dr. J. Molkentin [29]. The DM5/Prkca tm1Jmk +/+, DM5/Prkca tm1Jmk +/-, and DM5/Prkca tm1Jmk -/- mice were normal before induction of RNA toxicity. After induction with 0.2% doxycycline (w/v), all the mice developed severe myotonia and similar degrees of advanced cardiac conduction abnormalities at two weeks post-induction. We found no significant differences between the groups in terms of survival, running distance, and grip strength after one and two weeks of induction (S1 Fig). We also obtained another Prkca (PKCα) knockout mouse as well as a Prkcb (PKCβ) knockout mouse from Dr. M. Leitges [30,31]. The rest of the experiments were done with these lines bred with the DM5 mice to generate double knockout mice in the RNA toxicity background.

Due to the high severity of the phenotypes in the DM5 mice, including severe cardiac conduction abnormalities that led to mortality in the preliminary experiments, we tried various lower concentrations of doxycycline. We found that 0.02% doxycycline led to robust induction of myotonia without severe cardiac conduction abnormalities or increased mortality. This resulted in a 2-3-fold induction of toxic RNA expression in the skeletal muscle and no induction in the heart (S2 Fig). This correlated with the absence of severe cardiac conduction abnormalities at 2, 4, 6, and 8 weeks after induction of RNA toxicity (S3 Fig). The DM5+/wt/Prkca-/-/Prkcb-/- mice and a control group of DM5+/wt/Prkca+/+/Prkcb+/+ mice did not show any evidence of myotonia (by EMG) or cardiac conduction abnormalities (by ECG) prior to induction of RNA toxicity. The mice deficient for PKCα/PKCβ were slightly smaller, had a slightly longer PR interval on ECG, and did not run as far on treadmill running assays, but showed no difference in grip strength (S1 Table). After inducing the expression of the toxic RNA transgene (referred to as D+ or Dox+), mice were analyzed for body weight and tested by the aforementioned phenotypic assays at 2, 4, and 6 weeks post-induction. We found no change in body mass at 6 weeks post-induction between the two groups (Fig 1A). By six weeks post-induction, the DM5+/wt/Prkca-/-/Prkcb-/- mice became weaker, but to a degree similar to the DM5+/wt/Prkca+/+/Prkcb+/+ mice (Fig 1B).
We tested these mice for their ability to run on a treadmill and recorded the data as the percentage of retained run distance compared with their pre-induction results. Again, we found that though both groups had deficits, there was no significant difference (Fig 1C). Also, no difference in cardiac conduction abnormalities was observed after 6 weeks of RNA toxicity (S3 Fig). Both groups of mice also developed a similar degree of myotonia by 4 or 6 weeks after induction of the toxic RNA (Fig 1D). We also confirmed that toxic RNA (S2 Fig) and Clcn1 mRNA levels (S4 Fig) were similar between study groups by quantitative RT-PCR at 6 weeks post-induction. These results suggest that the absence of PKCα/β has no beneficial effect on muscle function in these RNA toxicity mice.

Absence of PKCα/β does not affect CELF1 expression in these RNA toxicity mice

Previous studies have shown that nuclear accumulation of the toxic RNA results in increased levels of CELF1 protein. The toxic RNA is thought to activate PKC signaling, leading to CELF1 hyper-phosphorylation and stabilization [22]. Consistent with this idea, blocking PKC activity with Ro-31-8220 resulted in improvement in a heart-specific DM1 mouse model and was correlated with reduced phosphorylation and decreased levels of CELF1 [23].

Mis-splicing and muscle histopathology in these RNA toxicity mice are not corrected by absence of PKCα

To determine whether the absence of PKCα/β corrects the mis-splicing events affected by the toxic RNA, we analyzed splicing of Clcn1 (ex7a), Nfix1 (ex7), Fxr1h (exons 15,16), and Nrap (exon 12), all targets that are mis-spliced in this mouse model of RNA toxicity [28]. All these targets were found to be misregulated by the toxic RNA in the DM5+/wt/Prkca+/+/Prkcb+/+ mice (Fig 3A and 3B). However, we found similar levels of splicing defects in the DM5+/wt/Prkca-/-/Prkcb-/- mice in the presence of the toxic RNA. Although the splicing defects we studied were relatively mild, the absence of PKCα/β was still unable to correct them. The data suggest that these mis-splicing events in this RNA toxicity mouse model are independent of PKCα/β.

[Figure 3 caption, partial] (A) Splicing of Nrap (exon 12) shows that PKCα/β deficiency has no effect on splicing in the mice with RNA toxicity. (B) Quantification of the gels in (A) shows that RNA toxicity leads to splicing defects in the DM5 mice for all targets tested and that PKCα/β deficiency has no effect on these splicing defects. For each group, at least 4-5 mice were analyzed. *p = 0.05, Student's t test; n.s. means not significant; error bars are mean ± stdev.

Discussion

Previously, we reported that expression of the toxic RNA in our mouse model results in many features of DM1, including myotonia, abnormal muscle pathology, and RNA splicing defects [27]. Using this mouse model, we have shown that CELF1 is post-transcriptionally increased in response to the toxic RNA and that CELF1 contributes to skeletal muscle histopathology [28]. In that study, we found that depletion of CELF1 stabilized some functional phenotypes and improved skeletal histopathology in the RNA toxicity mice [28]. However, the roles of PKCα/β, which have been reported to increase CELF1 levels through phosphorylation and increased protein stability [22], have not been investigated in the skeletal muscle of mice with RNA toxicity. In this study, we used a clear genetic approach to eliminate the expression of both PKCα and PKCβ in these mice with RNA toxicity. We find that key muscle phenotypes associated with RNA toxicity are independent of PKCα/β.
We also show that increased CELF1 levels are not mitigated by the absence of PKCα/β in skeletal muscle. Concordantly, neither functional outcomes nor abnormal muscle histology in our mice expressing the toxic RNA were restored towards normal in the absence of PKCα/β.

Both PKCα and PKCβ have been implicated in the pathogenesis of cardiac disease, with pharmacological and gene-therapy-based inhibition of PKCα/β having been shown to enhance cardiac contractility in heart failure models [29,32]. With respect to RNA toxicity associated with DM1, previous studies have reported that PKCα/β signaling is activated in cells expressing expanded CUG repeat RNAs and that PKCα/β inhibition by Ro-31-8220 correlates with reduced PKCα/β activation and CELF1 levels [22]. In a cardiac-specific mouse model, treatment with Ro-31-8220 was associated with improved cardiac function and also attenuated splicing defects related to increased CELF1 levels [22,23]. In contrast, using a genetic approach, we find that PKCα/β is not involved in affecting skeletal muscle phenotypes in our RNA toxicity mice.

Failure to rescue the phenotypes in our RNA toxicity model by genetic deletion of PKCα/β could be attributed to differences in the mouse models and the approaches used in the different studies. Using a human DMPK promoter, our mouse model expresses its toxic RNA in multiple tissues that are affected in DM1, including skeletal muscle, the heart, and smooth muscle [27]. The other published models have used non-DMPK, tissue-specific promoters [22,23]. Our mouse model has DM1-relevant phenotypes such as myotonia, cardiac conduction defects, RNA splicing defects, increased CELF1 in skeletal muscle, muscle histopathology, and shortened lifespan (likely due to cardiac conduction abnormalities) that are present simultaneously and clearly responsive to RNA toxicity. Limited subsets of these phenotypes are also seen in a tissue-specific manner in the other mouse models. In addition, as in the other mouse models, MBNL1 does bind the RNA expressed in our mice [33]. However, our mice express a DMPK 3'UTR RNA with (CUG)5 (i.e., a perfect but non-expanded repeat tract) that does not form visible RNA foci, whereas the other mouse models express an RNA comprising a concatemer of forty-eight interrupted, non-expanded repeat tracts containing (CUG)20 that forms visible RNA foci. Whether RNA foci play a role in affecting CELF1 levels is uncertain, since it has been reported that the HSA-LR mice (which express only an expanded (CUG)250) with many RNA foci, and DM2 patients whose RNA foci contain only expanded (CCUG)n, do not show increased CELF1 in skeletal muscle [34]. It may be that these distinctions account for the differences in the various studies.

It is interesting to note that Ro-31-8220 did not influence the phenotype of a mouse model engineered to over-express CELF1, despite the fact that this model recapitulated aspects of DM1 [23]. Thus, over-expression of CELF1 may cause DM1-associated phenotypes in a PKC-independent manner. This is analogous to our observations. It is also possible that DM1 phenotypes induced by RNA toxicity are inhibited by compounds such as Ro-31-8220 through a variety of means. Although Ro-31-8220 has stated potency against PKCα, it can also affect other kinases, including GSK3β [35][36][37].
The results of our study with the deletions of PKCα and PKCβ clearly demonstrate that alternate pathways are likely involved in CELF1 regulation in our mice. Similarly, a recent study demonstrated that the effect of Ro-31-8220 on the toxic RNA in DM1 cells may be independent of effects on PKCα [38]. CELF1 has been posited as a substrate for a number of kinases, including Akt, cyclin D3/cdk [39] and, more recently, GSK3β [21]. In investigating some of these other targets, we find that GSK3β levels are increased in the skeletal muscles of our mice with RNA toxicity (S5 Fig). Though they are not part of this study, results such as these provide fertile ground for future studies assessing the effects of pathways such as GSK3β and the role that inhibitors such as Ro-31-8220 may play.

The protein kinase C family comprises at least 10 different isozymes, which are classified by their second-messenger activators [24]. The predominant forms reported to be expressed in skeletal muscle are PKCα and PKCθ [25]. PKCα accounts for approximately 97% of the classical PKC activity and has primarily been studied in skeletal muscle with respect to its effects on glucose metabolism and insulin responsiveness [26,30]. Interestingly, PKCθ (PKC theta), another isoform expressed in skeletal muscle, has been suggested to be involved in a number of biological events and phenotypes that have been associated with DM1. For instance, PKCθ has been suggested to play a role in myoblast fusion [40,41] and in modulating chloride channel function in skeletal muscle [42], and its loss has been shown to reduce skeletal muscle histopathology in a Duchenne Muscular Dystrophy (DMD) mouse model [43,44]. Given our negative results with PKCα/β, we investigated the levels of PKCθ and found that phosphorylated PKCθ levels were increased in the skeletal muscles of our RNA toxicity mice (Fig 6, S6 Fig).

In conclusion, using double knockout mice for PKCα and PKCβ, we find that PKCα/β are not required for the skeletal muscle phenotypes in our RNA toxicity mice. Our data also suggest that PKCα/β play little role in the increased CELF1 levels or the aberrant splicing events observed in the skeletal muscles of our mouse model. Interestingly, our preliminary evaluation of alternative kinases that might be involved in RNA toxicity suggests that a previously reported target, GSK3β, and a novel target, PKCθ, are both affected in our mouse model and warrant further investigation in future studies.

Phenotypic analysis

Mice were analyzed for treadmill running, and forelimb grip strength was measured using a digital grip-strength meter. All details of the protocols are described elsewhere [28]. All results are reported as retained function with reference to baseline for each mouse. EMG and ECG were also measured as described previously [28].

RNA isolation, qRT-PCR assays and splicing analysis

Total RNA was extracted from skeletal muscle tissues using the protocol described previously [46]. 1 μg of total RNA was used for cDNA synthesis using the QuantiTect Reverse Transcription Kit (Qiagen). qRT-PCR was done using the BioRad iCycler and detected with SYBR Green dye. Data were normalized using an endogenous control (Gapdh), and normalized values were subjected to the 2^-ΔΔCt formula to calculate the fold changes between uninduced and induced groups. Primer sequences are given in S2 Table. All splicing assays were done in at least five mice per group. Splicing primers and conditions are described in S3 Table.
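For readers unfamiliar with the 2^-ΔΔCt method cited above, here is a minimal sketch of the arithmetic in Python, using hypothetical Ct values: each target Ct is first normalized to the reference gene (Gapdh in this study), and the induced-vs-uninduced difference in ΔCt is exponentiated to give a fold change.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, induced group
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, uninduced group
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies two cycles earlier after induction.
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # -> 4.0 (four-fold induction)
```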
Histology and fiber size quantitation

H&E staining was done according to standard procedures and examined under a light microscope. Histopathology was assessed by H&E staining of quadriceps femoris cryosections (6 μm). Muscle fiber size was determined using AxioVision V4.8.2.0 (Carl Zeiss MicroImaging). At least 3-5 mice per group were studied, and for each mouse, 3-5 images were analyzed.

Statistical analysis

Statistical significance was determined using a two-tailed Student's t-test with equal or unequal variance as appropriate. All data are expressed as mean ± standard deviation. p < 0.05 was considered statistically significant unless otherwise specified.

Study Approvals

All animal protocols were approved by the institutional IACUC at the University of Virginia.
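As a small illustration of the "equal or unequal variance as appropriate" choice above, here is a hedged Python sketch with hypothetical fiber-size data; using Levene's test to pick between the classic and Welch variants is one common convention, not necessarily the authors' procedure.

```python
from scipy import stats

# Hypothetical mean fiber sizes (um) for two groups of mice.
group_a = [42.1, 39.8, 44.5, 40.2, 43.0]
group_b = [35.6, 47.9, 30.2, 49.1, 33.8]

# Check variance equality first, then pick the matching t-test variant:
# equal_var=True is the classic Student's t-test, False is Welch's version.
_, p_var = stats.levene(group_a, group_b)
equal_var = p_var > 0.05
t_stat, p_val = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"equal_var={equal_var}, t = {t_stat:.2f}, p = {p_val:.3f}")
```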
Efficacy and Safety of Angiotensin-Converting Enzyme Inhibitor in Combination with Angiotensin-Receptor Blocker in Chronic Kidney Disease Based on Dose: A Systematic Review and Meta-Analysis

Background: The purpose of this meta-analysis was to address the controversy over angiotensin-converting enzyme inhibitor (ACEI) in combination with angiotensin-receptor blocker (ARB) in the treatment of chronic kidney disease (CKD) based on dose. Methods: PubMed, EMBASE, and the Cochrane Library were searched to identify randomized controlled trials (RCTs) from inception to March 2020. The random effects model was used to calculate the effect sizes. Potential sources of heterogeneity were detected using sensitivity analysis and meta-regression. Results: This meta-analysis of 53 RCTs with 6,375 patients demonstrated that in patients with CKD, ACEI in combination with ARB was superior to low-dose ACEI or ARB in reducing urine albumin excretion (SMD, −0.43; 95% CI, −0.67 to −0.19; p = 0.001), urine protein excretion (SMD, −0.22; 95% CI, −0.33 to −0.11; p < 0.001), and blood pressure (BP), including systolic BP (WMD, −2.89; 95% CI, −3.88 to −1.89; p < 0.001) and diastolic BP (WMD, −3.02; 95% CI, −4.46 to −1.58; p < 0.001). However, it was associated with decreased glomerular filtration rate (GFR) (SMD, −0.13; 95% CI, −0.24 to −0.02; p = 0.02) and increased rates of hyperkalemia (RR, 2.07; 95% CI, 1.55 to 2.76; p < 0.001) and hypotension (RR, 2.19; 95% CI, 1.35 to 3.54; p = 0.001). ACEI in combination with ARB was more effective than high-dose ACEI or ARB in reducing urine albumin excretion (SMD, −0.84; 95% CI, −1.26 to −0.43; p < 0.001) and urine protein excretion (SMD, −0.24; 95% CI, −0.39 to −0.09; p = 0.002), without a decrease in GFR (SMD, 0.02; 95% CI, −0.12 to 0.15; p = 0.78) or an increase in the rate of hyperkalemia (RR, 0.94; 95% CI, 0.65 to 1.37; p = 0.76). Nonetheless, the combination did not decrease BP and increased the rate of hypotension (RR, 3.95; 95% CI, 1.13 to 13.84; p = 0.03) compared with high-dose ACEI or ARB. Conclusion: ACEI in combination with ARB is superior in reducing urine albumin excretion and urine protein excretion. The combination is more effective than high-dose ACEI or ARB without decreasing GFR or increasing the incidence of hyperkalemia. Despite the risk of hypotension, ACEI in combination with ARB is a better choice for CKD patients who need to increase the dose of ACEI or ARB (PROSPERO CRD42020179398).

INTRODUCTION

Chronic kidney disease, characterized by a reduced glomerular filtration rate (GFR) and/or increased urinary albumin excretion, is a growing public health issue owing to its high prevalence and increased risk of end-stage renal disease, cardiovascular disease, and premature death (Matsushita et al., 2010). The prevalence of CKD is estimated to be 8-16% worldwide (Jha et al., 2013). CKD is a major global health challenge, especially in low- and middle-income countries (Mills et al., 2015). National and international efforts for the prevention, detection, and treatment of CKD are needed to reduce its morbidity and mortality worldwide. Hypertension commonly coexists with CKD, and its prevalence progressively increases with declining kidney function (Muntner et al., 2010; Egan et al., 2014). According to recent guidelines, angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin-receptor blockers (ARBs) should be the drugs of first choice for CKD (Kalaitzidis and Elisaf, 2018).
The 2020 Kidney Disease: Improving Global Outcomes (KDIGO) guideline recommends that treatment with an ACEI or an ARB be initiated in patients with diabetes, hypertension, and albuminuria, and that these medications be titrated to the highest approved dose that is tolerated. The 2012 KDIGO guideline on IgA nephropathy recommends long-term ACEI or ARB treatment when proteinuria is >1 g/d, with up-titration of the drug depending on blood pressure (BP), to achieve proteinuria <1 g/day. However, some CKD patients still have proteinuria after ACEI or ARB treatment (Igarashi et al., 2006; Slagman et al., 2011). Previous studies have suggested that the additive antiproteinuric and hypotensive effects of combined renin-angiotensin-aldosterone system (RAAS) blockade are superior to single RAAS blockade in CKD (Susantitaphong et al., 2013). Nonetheless, the use of ACEI in combination with ARB is not supported by all recent guidelines, owing to concerns regarding adverse events such as renal dysfunction, hyperkalemia, and symptomatic hypotension in high-risk CKD patients (Esteras et al., 2015). Whether ACEI in combination with ARB or increasing the dose of ACEI or ARB is more effective in the treatment of CKD remains controversial. Therefore, the present meta-analysis of randomized controlled trials (RCTs) was designed to assess the efficacy and safety of ACEI in combination with ARB in patients with CKD based on dose.

Data Sources and Searches

We searched PubMed, EMBASE, and the Cochrane Library from inception to March 2020 to retrieve relevant articles. Two reviewers (Mingming Zhao and Rumeng Wang) independently screened the titles and abstracts of all electronic citations, and full-text articles were retrieved for comprehensive review and independently rescreened. Disagreements were resolved by consulting a third investigator (Yu Zhang). Medical Subject Headings and free-text terms were used in each database with the following relevant keywords: "diabetic nephropathy," "hypertensive nephropathy," "glomerular disease," "proteinuria," "renal insufficiency," "kidney disease," "chronic renal failure," "chronic kidney disease," "drug therapy combination," "renin-angiotensin system," "angiotensin-converting enzyme inhibitor," and "angiotensin receptor blocker" (Supplementary Material S1).

Study Selection

We included studies that met the following inclusion criteria: 1) patients (>18 years old) with CKD (KDIGO: CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health); 2) the intervention group received ACEI in combination with ARB (dual therapy), and the comparison group received ACEI or ARB (single therapy); 3) the outcomes involved albuminuria, proteinuria, GFR (creatinine clearance or estimated GFR), BP, hyperkalemia (>5.5 mmol/L or as defined in the individual studies), or hypotension (as defined in the individual studies); 4) randomized, controlled, crossover, or parallel trials; 5) the articles were published in English.

Data Extraction and Quality Assessment

Two reviewers (Mingming Zhao and Rumeng Wang) extracted data independently, and disagreements were resolved by consulting a third investigator (Yu Zhang).
The following data were extracted from each of the published studies included in our review: the first author's name, publication year, study design, intervention, dose of ACEI or ARB (low-dose: monotherapy at the same dose as the corresponding RAAS blocker in the ACEI-plus-ARB group; high-dose: monotherapy at more than that dose), sample size, percentage of men, mean age of subjects, duration of intervention, GFR, urine albumin or protein excretion rate, systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure, hyperkalemia, and hypotension. The methodological quality of the included studies was evaluated according to the recommendations of the Cochrane Handbook, including random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias. One point was assigned for each domain at low risk of bias.

Data Synthesis and Analysis

The random effects model was used to calculate the effect sizes of eligible studies. For continuous outcomes, we calculated a weighted mean difference (WMD) or standardized mean difference (SMD) with a 95% confidence interval (CI). For dichotomous outcomes, we estimated the relative risk (RR) with a 95% CI. Heterogeneity of the included studies was described with the I² index and the chi-square test; I² ≥ 50% and p < 0.05 were used to indicate medium-to-high heterogeneity. We detected the potential sources of heterogeneity using meta-regression based on a priori selected study characteristics, including baseline GFR, duration of intervention, mean age of subjects, and quality of included studies. Sensitivity analysis was performed to assess the robustness of the pooled results. Publication bias was evaluated using Begg's test and Egger's test. Statistical analysis was performed using Stata (version 15.1). The methodological quality of the included studies was assessed using RevMan 5.3. We have registered the protocol for the present systematic review and meta-analysis; the registration number in PROSPERO is CRD42020179398.

Characteristics and Quality of the Studies

A total of 24,880 studies (18,664 from PubMed, 4,034 from EMBASE, and 2,182 from the Cochrane Library) were identified, of which 53 studies met the inclusion criteria (Figure 1A). The characteristics of the individual trials are presented in Table 1. Fifty-three studies with 6,375 patients consisted of 19 crossover and 34 parallel-arm RCTs. The sample size varied from 10 to 1,448. The mean age of the subjects of the trials ranged from 31 to 76 years, and the duration of intervention ranged from 1 to 60 months. Twenty-eight studies enrolled patients with GFR ≥60 mL/min or mL/min/1.73 m², and eight studies enrolled patients with GFR <60 mL/min or mL/min/1.73 m². Seventeen studies did not report the subjects' baseline kidney function. Fourteen studies were of fair quality (score 1-3) and 39 were of good quality (score 4-7) (Figure 1B).

[Figure 3: Comparison of ACEI in combination with ARB vs. low-dose ACEI or ARB for urine protein excretion (g/g of creatinine or g/24 h).]

Sensitivity Analysis and Meta-Regression

To ensure the reliability of the present meta-analysis, we evaluated the robustness of the results (Table 2) using sensitivity analysis, which indicated that the results of the meta-analysis were robust.
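The random-effects pooling and I² computation described under Data Synthesis and Analysis can be sketched as follows. This is a minimal illustration of the DerSimonian-Laird estimator, not the authors' Stata code; the effect sizes and variances in the example are made-up placeholders.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study effect estimates (e.g. SMDs or WMDs)
    variances : per-study sampling variances
    Returns the pooled effect, its 95% CI, the between-study variance tau^2,
    and the I^2 heterogeneity index (%).
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return y_re, (y_re - 1.96 * se_re, y_re + 1.96 * se_re), tau2, i2

# Toy example with three hypothetical SMDs and their variances:
print(dersimonian_laird([-0.50, -0.30, -0.45], [0.04, 0.02, 0.05]))
```

User-written Stata commands such as metan implement the same DerSimonian-Laird weighting; the sketch is only meant to make the weighting and the I² definition explicit.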
Significant heterogeneities were observed for DBP and urine albumin excretion (Table 2). We detected the potential sources of heterogeneity using meta-regression based on a priori selected study characteristics, including the mean age of subjects, duration of intervention, baseline GFR, and quality of included studies. A significant heterogeneity was observed for the outcome of urine albumin excretion (Table 2, summary effect of ACEI in combination with ARB vs. high-dose ACEI or ARB; I² = 75.4%, p = 0.001), which was dependent on the mean age of subjects (exp(b) = 1.30; 95% CI, 1.04 to 1.63; adjusted R² = 89.09%; p = 0.03) and the duration of intervention (exp(b) = 1.27; 95% CI, 1.09 to 1.48; adjusted R² = 100.00%; p = 0.01). Using meta-regression, it was found that the heterogeneity of DBP (Table 2, summary effect of ACEI in combination with ARB vs. low-dose ACEI or ARB) was not associated with the a priori selected study characteristics.

Publication Bias

Begg's test and Egger's test were used to evaluate publication bias based on the key outcomes of the trials included in the meta-analysis. The results suggested little susceptibility to publication bias, except for urine albumin excretion and urine protein excretion (Table 2).

DISCUSSION

In the present meta-analysis of 53 RCTs encompassing 6,375 participants, we aimed to compare the efficacy and safety of ACEI in combination with ARB vs. low-dose and high-dose ACEI or ARB. We demonstrated that ACEI in combination with ARB was superior to low-dose ACEI or ARB in reducing urine albumin excretion, urine protein excretion, and BP, including SBP and DBP. However, the combination was associated with a decreased GFR and increased rates of hyperkalemia and hypotension. ACEI in combination with ARB was more effective in reducing urine albumin excretion and urine protein excretion than high-dose ACEI or ARB, without a decreased GFR or an increased rate of hyperkalemia. Nonetheless, the combination did not decrease the BP and increased the rate of hypotension compared with high-dose ACEI or ARB.

Proteinuria and hypertension are risk factors for CKD progression (Liu and Lv, 2019; Nagai et al., 2019). Proteinuria is also an independent predictor of all-cause mortality. A combination of severely decreased GFR and proteinuria further increases the risk of all-cause mortality (Wu et al., 2018).

[Figure 5: Comparison of ACEI in combination with ARB vs. low-dose ACEI or ARB for glomerular filtration rate (mL/min or mL/min/1.73 m²).]

For CKD patients with proteinuria, the updated hypertension guidelines recommend a BP goal of <130/80 mmHg (Hamrahian, 2017). More-intensive BP control is associated with a reduced risk of all-cause mortality compared with less-intensive BP goals in this high-risk population (Juraschek and Appel, 2018). Nevertheless, the proportion with uncontrolled BP was greater in those with CKD than in those without CKD, and multiple medications and ACEI/ARB were associated with less uncontrolled BP (Plantinga et al., 2009). It should be emphasized that to lower albuminuria and achieve BP goals, moderate to high doses of ACEI or ARB are often required. However, ACEI or ARB may only reduce proteinuria by up to 40-50% in a dose-dependent manner, particularly if the patient complies with dietary salt restriction (Nakamura et al., 2000). This leads to a recommendation to use a more complete RAAS blockade to maximize kidney protection and improve outcomes.
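A sketch of the Egger regression test mentioned under Publication Bias above, assuming its usual formulation (regress the standardized effect on precision and test whether the intercept differs from zero). The data are placeholders; this is an illustration, not the authors' Stata code.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry / small-study effects.

    Regresses the standardized effect (y/se) on precision (1/se); a non-zero
    intercept suggests asymmetry. Returns intercept, its SE, and a two-sided p.
    """
    y = np.asarray(effects, dtype=float)
    se = np.asarray(ses, dtype=float)
    z = y / se                         # standardized effects
    prec = 1.0 / se                    # precision
    X = np.column_stack([np.ones_like(prec), prec])
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    n, k = X.shape
    resid = z - X @ beta
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])
    p = 2 * stats.t.sf(abs(t), n - k)
    return beta[0], np.sqrt(cov[0, 0]), p

# Hypothetical effects and standard errors for five trials:
print(eggers_test([-0.6, -0.4, -0.5, -0.2, -0.1], [0.30, 0.22, 0.18, 0.12, 0.08]))
```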
In order to study the effect of dose on ACEI in combination with ARB, we defined low-dose and high-dose as relative values: compared with the same RAAS blocker in the ACEI-plus-ARB group, a low-dose was defined as a single dose, and a high-dose was defined as greater than a single dose. According to our meta-analysis, ACEI in combination with ARB was superior to low-dose and high-dose ACEI or ARB in reducing urine albumin excretion and urine protein excretion. It is more effective to use ACEI in combination with ARB than to increase the dose of ACEI or ARB. Although experimental and clinical studies have demonstrated that dual RAAS blockade therapy is more effective in reducing proteinuria and preventing structural lesions than either drug alone (Susantitaphong et al., 2013; Zhang et al., 2017), it is associated with higher incidences of adverse effects than monotherapy. The key safety issues associated with ACEI in combination with ARB are hypotension, which may lead to syncope, and impaired kidney function, which may lead to hyperkalemia (Oktaviono and Kusumawardhani, 2020). In this meta-analysis, although ACEI in combination with ARB was associated with a decrease in GFR and increased incidences of hyperkalemia and hypotension relative to low-dose ACEI or ARB, dual therapy did not decrease GFR nor increase the incidence of hyperkalemia compared with high-dose ACEI or ARB. Except for hypotension, the safety of ACEI in combination with ARB was equivalent to that of high-dose ACEI or ARB, and the hypotension in some patients is temporary and mild (Song et al., 2006; Meier et al., 2011).

In recent years, the use of ACEI in combination with ARB has raised controversies, and no systematic review and meta-analysis had analyzed the efficacy and safety of the use of ACEI in combination with ARB in patients with CKD. This meta-analysis evaluated the effect of ACEI in combination with ARB on kidney-related endpoints, BP, and adverse events based on the dose. However, there are certain limitations to this study. First, only a few RCTs have evaluated the efficacy and safety of ACEI in combination with ARB vs. high-dose ACEI or ARB; more large-scale studies are needed to further clarify the application prospects of ACEI in combination with ARB in CKD. Second, some of the studies included in the present analysis were of fair quality. Third, the included studies were heterogeneous; we performed sensitivity analysis and meta-regression to ensure the reliability of the present meta-analysis. Fourth, most of the included studies were aimed at CKD patients with a normal GFR or only a mildly reduced GFR; there are few with moderately reduced renal function and none with severely reduced renal function. The results of this meta-analysis are therefore only applicable to CKD patients with fairly well-maintained kidney function.

CONCLUSION

In conclusion, ACEI in combination with ARB is superior to low-dose and high-dose ACEI or ARB in reducing urine albumin excretion and urine protein excretion.

[Figure 9: Comparison of ACEI in combination with ARB vs. high-dose ACEI or ARB for urine protein excretion (g/g of creatinine or g/24 h).]
[Figure 11: Comparison of ACEI in combination with ARB vs. high-dose ACEI or ARB for glomerular filtration rate (mL/min or mL/min/1.73 m²).]
Although ACEI in combination with ARB is associated with a decreased GFR and increased rates of hyperkalemia and hypotension compared with low-dose ACEI or ARB, the combination is more effective than high-dose ACEI or ARB without decreasing GFR or increasing the incidence of hyperkalemia. Despite the risk of hypotension, ACEI in combination with ARB is a better choice for CKD patients who need to increase the dose of ACEI or ARB. The results of this meta-analysis are only applicable to CKD patients with fairly well-maintained kidney function.
2021-05-06T13:22:33.195Z
2021-05-06T00:00:00.000
{ "year": 2021, "sha1": "15dea507816aa38927457e843ccc3ade248f043b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.638611/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "15dea507816aa38927457e843ccc3ade248f043b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253127907
pes2o/s2orc
v3-fos-license
MPO-ANCA-associated vasculitis in the context of autoimmune polyglandular syndrome type 3: case report and literature review

Cédric Dikovec*, Kevin Wolters, Liv M. Vossen, Sören A. Gärtner, Rémy L. M. Mostard, César Magro-Checa. Department of Rheumatology, Zuyderland Medical Centre, Heerlen/Sittard-Geleen, The Netherlands; Department of Internal Medicine, Section of Nephrology, Zuyderland Medical Centre, Heerlen/Sittard-Geleen, The Netherlands; Department of Pulmonology, Zuyderland Medical Centre, Heerlen/Sittard-Geleen, The Netherlands. *Correspondence to: Cédric Dikovec, Department of Rheumatology, Zuyderland Medical Centre, PO Box 5500, 6130 MB Sittard-Geleen, The Netherlands. E-mail: c.dikovec@zuyderland.nl

DEAR EDITOR, A 55-year-old White man was admitted to the nephrology department with renal insufficiency, haematuria, progressive fatigue, dry cough and exertional dyspnoea. He had been suffering from general malaise and generalized arthralgias, without clear signs of inflammatory arthritis, for the last 6 months. His past medical history was remarkable for vitiligo, RP, hypothyroidism owing to Hashimoto's thyroiditis with elevated thyroid peroxidase antibodies since 2011, and chronic atrophic gastritis with positive intrinsic factor and parietal cell antibodies since April 2021. He was being treated with levothyroxine, simvastatin, carbasalate calcium, ticagrelor and metoprolol. At admission, the vital signs were as follows: temperature 38.5°C, blood pressure 94/73 mmHg, regular pulse 81/min, respiratory rate 15/min and oxygen saturation 96% without supplemental oxygen. Physical examination was significant for bilateral fine crackles on pulmonary examination. The skin was remarkable for vitiligo on the lower limbs and thorax; no signs of cutaneous vasculitis or SSc were observed. He had no muscle weakness or inflammatory arthritis. Laboratory testing at admission revealed the following results (normal values in parentheses): CRP 153 (<10) mg/l, ESR 108 (1-20) mm/h, creatinine 199 (61-113) µmol/l, estimated glomerular filtration rate 32 (>90) ml/min/1.73 m², haemoglobin 5.9 (8.5-11.0) mmol/l; urinalysis revealed 463 (<10) erythrocytes/µl with 20-30% dysmorphic erythrocytes and no erythrocyte or granular casts, 76 (<20) leucocytes/µl and mild 24-h proteinuria of 0.68 g. Blood and urine cultures were negative. A chest CT scan showed centrilobular opacities, primarily in the right lung, and mediastinal lymphadenopathy without signs of cavitary lesions. A bronchoalveolar lavage revealed endobronchial blood without an active bleeding focus. Bronchoalveolar lavage fluid analysis was consistent with diffuse alveolar haemorrhage, showing 10.00 × 10⁹ erythrocytes/l, 721.00 × 10⁶ leucocytes/l with 93% macrophages, and markedly positive iron staining (97% of all macrophages); bacterial, mycobacterial, fungal and viral tests and cultures were negative. Additional serological testing was positive for antibodies directed against MPO [39 (<3.5) IU/ml]. A renal biopsy showed pauci-immune necrotizing crescentic glomerulonephritis. Of 18 glomeruli, eight were normal, nine showed fibrinoid necrosis (with extracapillary proliferation and/or crescent formation) and one showed segmental sclerosis without any signs of activity. No significant tubular atrophy or interstitial fibrosis was observed. Vessel wall necrosis was observed in one artery.
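As a side note on the admission laboratory values, the reported eGFR is consistent with the 2009 CKD-EPI creatinine equation; the sketch below assumes that equation (the letter does not state which formula the laboratory used).

```python
def ckd_epi_2009(scr_umol_l, age, female=False, black=False):
    """Estimate GFR (ml/min/1.73 m^2) with the 2009 CKD-EPI creatinine equation.

    scr_umol_l : serum creatinine in micromol/l (converted to mg/dl below)
    """
    scr = scr_umol_l / 88.4                     # micromol/l -> mg/dl
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# The 55-year-old man in this report, creatinine 199 micromol/l:
print(round(ckd_epi_2009(199, 55)))   # ~32, matching the reported value
```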
The glomerulonephritis was classified as crescentic type according to the Berden classification and was given a score of zero on both the Mayo Clinic Chronicity Score and the ANCA Renal Risk Score [1]. A diagnosis of MPO-ANCA-associated vasculitis (MPO-AAV) with glomerulonephritis and diffuse alveolar haemorrhage associated with autoimmune polyglandular syndrome type 3 was made. Our patient received three pulses of i.v. methylprednisolone 1000 mg, followed by oral prednisolone 60 mg daily in a tapering dose and rituximab 1000 mg on days 0 and 14, with rapid clinical improvement of the pulmonary symptoms and stabilization of the renal function. Six months after diagnosis, the patient reported no complaints, the MPO-ANCA titre had decreased to 6.5 IU/ml and the estimated glomerular filtration rate was stable at 36 ml/min/1.73 m 2 . Furthermore, prednisolone was tapered down to 5 mg daily and he received one infusion of rituximab 1000 mg as maintenance therapy. Autoimmune polyglandular syndrome is a heterogeneous group of diseases characterized by immune-mediated activity against endocrine and non-endocrine organs. Autoimmune polyglandular syndrome can be classified into four different subtypes (types 1-4) based on clinical criteria. Autoimmune polyglandular syndrome type 3 includes autoimmune thyroid diseases plus another autoimmune disorder in the absence of Addison's disease. If the other autoimmune disorder present is an endocrine disease, most commonly type 1 diabetes mellitus, it is designated as type 3A. Type 3B involves gastrointestinal diseases, mostly chronic atrophic gastritis and pernicious anaemia, and type 3C involves cutaneous, neurological and haematological diseases. Type 3D involves systemic autoimmune rheumatic diseases, with SS, RA and SLE being the most frequently reported; other systemic autoimmune rheumatic diseases have been described less frequently [2]. By reviewing the literature, we identified another four cases of AAV in the context of autoimmune polyglandular syndrome (Table 1) [3][4][5][6]. Interestingly, four of the five patients presented a similar serotype (MPO-ANCA positive) and four of the five patients had biopsy-proven pauci-immune crescentic glomerulonephritis. Furthermore, all five patients had a history of Hashimoto's thyroiditis. Autoimmune thyroid diseases are more common in patients with AAV, especially in MPO-ANCA-positive patients and patients with renal disease, than in the general population. This association is potentially attributable to 44% sequence homology between MPO and thyroid peroxidase, resulting in cross-reactivity, or general loss of tolerance to peroxidases [7]. The pathogenesis of autoimmune polyglandular syndrome remains unclear, but it is most likely to be attributable to a combination of environmental triggers in individuals with genetic susceptibility. Several genes coding for key regulatory proteins in the adaptive and innate immune system, particularly in the MHC, have been associated with autoimmune polyglandular syndrome [2]. In a cohort consisting of Caucasian patients, mostly with autoimmune polyglandular syndrome type 3, HLA class II alleles DRB1*0301, *0401, DQA1*0301, *0501, DQB1*0201 and *0302 were observed more often in autoimmune polyglandular syndrome than in patients with autoimmune thyroid diseases and controls [8]. HLA-DRB1, HLA-DQA1 and HLA-DQB1 have also been proposed as potential predisposing factors for AAV [9]. 
Furthermore, several immunoregulatory genes, such as those encoding protein tyrosine phosphatase non-receptor type 22 (PTPN22) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4), are associated with an increased risk of autoimmune polyglandular syndrome type 3 and AAV [9,10]. Polymorphisms rs2476601 (C1858T) of PTPN22 and rs3087243 (CT60) of CTLA-4 were found to be associated with autoimmune polyglandular syndrome in Caucasian patients [10]. Interestingly, the same polymorphisms are associated with the occurrence of AAV [9]. We report the rare combination of three well-defined autoimmune diseases (Hashimoto's thyroiditis, vitiligo and chronic atrophic gastritis) with a severe MPO-AAV in the context of autoimmune polyglandular syndrome type 3 and suggest a pathogenetic link between these diseases. Physicians should be aware that autoimmune polyglandular syndrome increases the risk of the development of other autoimmune components. Funding No specific funding was received from any bodies in the public, commercial or not-for-profit sectors to carry out the work described in this manuscript. Disclosure statement: The authors have declared no conflicts of interest. Data availability statement All relevant patient data are included in the paper. Additional data regarding the literature review are available from the corresponding author upon reasonable request.
2022-10-27T15:11:11.793Z
2022-10-25T00:00:00.000
{ "year": 2022, "sha1": "fe9320ffd793dfdf30b723b6c260d88d37f35f2b", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/rheumap/advance-article-pdf/doi/10.1093/rap/rkac085/46628261/rkac085.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0efae38a9ddeaa74d32b375c4fe15a57dfa43e87", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
95805632
pes2o/s2orc
v3-fos-license
Strong molecular field effect from Gd on 3d-electronic states of Cu in concentrated Gd-Cu non-crystalline alloys

The molecular field effect from Gd on the 3d electronic states of Cu was studied both macroscopically, by magnetization measurements, and microscopically, by the magnetic Compton profile (MCP) method, in the Gd-rich composition range, where the molecular field from Gd is as high as possible. From the magnetization measurements, the magnetic moment of Gd was estimated to be about 7.0 μB in the Gd67Cu33 and Gd70Cu30 alloys. The Curie temperature Tc was found to increase gradually from 140 K for Gd60Cu40 to 150 K for Gd70Cu30. The MCP measurements revealed that no appreciable spin polarization of the 3d electrons in Cu could be detected; that is, the 3d electronic states of Cu were stable and essentially unaffected even under the stronger molecular field applied.

Introduction

Magnetism in rare earth (RE)-transition metal (TM) compounds and alloys has been investigated intensively from both fundamental and applied sides [1,2]. In RE-TM systems, Ni is well known to lose its magnetic moment at RENi2 and at higher RE contents. This phenomenon is explained by a charge transfer model in which the outer-shell electrons of the RE transfer to and occupy the 3d electronic states (band) of Ni (TM); that is, the 3d band of Ni is completely filled in the RENi2 compound [3,4]. However, very recently, Ni was found to retain its magnetic moment in GdNi2 and even in the GdNi compound [5,6,7]. The RE-Cu system, especially the Gd-Cu system, in which the magnetic structure is the simplest, has been one of the most fascinating families of compounds and alloys [8,9,10]. Since Cu is naturally expected to be non-magnetic, magnetic information on the RE in the RE-TM system, such as the exchange interaction energy J_Gd-Gd, is expected to be separable from that of the TM and to be obtained clearly without interference from the TM. However, there was an obstacle: the magnetization does not saturate in the Gd-Cu system, especially in the Cu-rich concentration region. On the other hand, in the Gd-rich region (Gd = 50 and 60 at%), amorphous Gd-Cu (a-Gd-Cu) alloys were investigated with the aim of deriving magnetic properties such as J_Gd-Gd by employing magnetization measurements and the magnetic Compton profile (MCP) method [10]. In that study, the statistical accuracy of the MCP measurement was not sufficient and the analytical result was not so clear. In addition, the idea of the molecular field effect from Gd on the 3d electronic states was not included [10]. In this study, the effect of a strong molecular field upon the 3d electronic states of Cu was investigated in detail from macroscopic and microscopic points of view. For this aim, Gd-rich samples of amorphous GdxCu100−x (x = 60, 67 and 70) were selected.

Experimental procedure

Samples of a-GdxCu100−x (x = 60, 67 and 70) alloys were prepared in the form of ribbons by the melt-spinning method with a single-roller system. The chamber was filled with pure Ar gas. The rotation speed of the Cu roll was varied from 3,000 to 4,000 rpm depending on the Gd content. The temperature dependence of the magnetization M(T) was measured with a vibrating sample magnetometer (VSM) under a magnetic field of up to 11 kOe in the temperature range between 4.2 K and 250 K. The saturation magnetization Ms was determined by extrapolating the inverse magnetic field (1/H) to zero. The magnetic moment per Gd was derived from Ms under the assumption that Cu is non-magnetic.
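A minimal sketch of the 1/H extrapolation just described: fit the high-field magnetization against 1/H and read off the intercept at 1/H → 0 as Ms. The field and magnetization values below are invented placeholders, not the measured data.

```python
import numpy as np

# High-field part of an M-H curve (hypothetical values)
H = np.array([4000.0, 6000.0, 8000.0, 10000.0, 11000.0])   # field (Oe)
M = np.array([150.2, 153.1, 154.9, 156.0, 156.4])           # magnetization (emu/g)

# Linear fit of M versus 1/H; the intercept is the value at 1/H = 0
slope, intercept = np.polyfit(1.0 / H, M, 1)
Ms = intercept
print(f"Ms ~ {Ms:.1f} emu/g")
```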
The Curie temperature was determined from the Arrott plot. Furthermore, magnetic Compton profile (MCP) experiments were carried out at the AR-NE1 beamline. Circularly polarized X-rays from an elliptical multi-pole wiggler were monochromatized and focused by a bent single channel-cut Si crystal. The energy of the incident X-rays used was 135 keV. The temperature of the sample was kept at 10 K and 110 K employing a closed-type refrigerator in a magnetic field of 1 T.

Results and Discussion

The magnetization as a function of temperature was measured for a-Gd67Cu33 between 4.2 K and 250 K in magnetic fields of 10,000, 8,000, 6,000, 4,000, 2,000, 1,000 and 50 Oe, respectively. The result is shown in figure 1(a), and the magnetization process M-H for the same sample at 4.2 K is shown in (b). From figure 1(a), the magnetic structure is considered to be simple ferromagnetism. The Curie temperature Tc is found to be about 150 K from the low-field (50 Oe) magnetization, and this result coincided with that determined by the Arrott plot. The Tc increases gradually with increasing Gd concentration, from 100 K (Gd50Cu50) and 140 K (Gd60Cu40) to 150 K (Gd67Cu33). Figure 1(b) shows that this sample is magnetically soft with little anisotropy; however, it shows a somewhat large high-field susceptibility. Therefore, the saturation magnetization Ms was determined by the 1/H plot.

The magnetic Compton profile (MCP) at 10 K in a field of 1 T is shown in figure 2. The open circles are the measured MCP data, and the solid and one-dotted lines are the calculated results, respectively, employing the Hartree-Fock calculation for the 4f electrons of Gd [11]. The best fitting result is found to lie between the solid line and the one-dotted line. In RE-TM systems, the 3d electrons of the TM and the 4f electrons of the RE play a dominant role in the magnetism, and the nearly free electrons such as the 4s, 5d and 6s electrons contribute to exchange interactions such as the RKKY interaction and to the spin polarization that contributes to the magnetic moment [12]. From figure 2, the measured MCP can be fitted well by the calculated MCP for the 4f electrons of Gd in the region of Pz > 2 (a.u.). Taking into account that the MCP for 3d electrons differs clearly from that for 4f electrons in the region of Pz < 3 a.u. [7,11,12], it is found that the 3d electrons in Cu do not contribute to the MCP and are essentially magnetically inactive. Furthermore, the MCP contribution from s, p-like electrons is well known to be dominant at Pz < 2 a.u. [12]; after all, it can be concluded that the measured MCP is composed of the 4f component of Gd and the s, p-like electron component of the constituents (= Gd + Cu). Employing the evaluation method in which the magnetic moment is proportional to the area of the MCP [12], we can estimate the magnetic moments of the 4f component and the s, p-like component, respectively. Under the assumption that the 4f magnetic moment is 6.8 μB [10], the moment of the s, p-like electrons is derived to be 0.22-0.45 μB. Accordingly, the total magnetic moment from the MCP measurement becomes 7.02-7.25 μB, a little larger than that from the macroscopic measurement. The value estimated from the MCP is nearly the same as that obtained for the a-Gd60Cu40 and a-Gd70Cu30 alloys. The discrepancy between the microscopic MCP and the macroscopic VSM measurements can be attributed to the scatter in the sample concentrations. The measurement of the MCP at 110 K in a magnetic field of 1 T was also carried out and is shown in figure 3.
The MCP at 110 K resembles that at 10 K, and no temperature dependence of the MCP is observed.
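A sketch of the area-based moment evaluation used above: since the spin moment is proportional to the area under the MCP, the s, p-like moment can be scaled against the known 4f moment. The profile shapes below are synthetic placeholders, not the measured data.

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule, to avoid depending on any particular numpy version
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

pz = np.linspace(0.0, 8.0, 400)             # electron momentum (a.u.)
j_4f = np.exp(-(pz / 2.5) ** 2)             # broad 4f-like profile (placeholder)
j_sp = 0.08 * np.exp(-(pz / 1.0) ** 2)      # narrow s,p-like profile (placeholder)

area_4f = trap(j_4f, pz)
area_sp = trap(j_sp, pz)
mu_4f = 6.8                                 # reference 4f moment (Bohr magnetons, from [10])
mu_sp = mu_4f * area_sp / area_4f           # moment scales with the profile area
print(f"s,p-like moment ~ {mu_sp:.2f} muB, total ~ {mu_4f + mu_sp:.2f} muB")
```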
2019-04-05T03:28:33.250Z
2009-03-01T00:00:00.000
{ "year": 2009, "sha1": "928946801e8481e1e55895b9b1faecc608a11162", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/150/4/042239", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2636ab06d9088c6fe45b9fc312b175a9af7f8529", "s2fieldsofstudy": [ "Chemistry", "Physics" ], "extfieldsofstudy": [ "Chemistry" ] }
122891796
pes2o/s2orc
v3-fos-license
Asymptotic modeling of thin plastic oscillating layer

Introduction

The inclusion of a very thin layer of very rigid material into a given elastic body has been widely considered in the classical literature; for more details, we refer to [6], [7], [10] and [11]. In general, the computation of the solution using numerical methods is very difficult. On the one hand, the thinness of the adhesive requires a fine mesh, which in turn implies an increase in the number of degrees of freedom; on the other hand, the adhesive is more flexible than the adherents, and this produces numerical instabilities in the stiffness matrix. To overcome these difficulties, one follows Goland and Reissner [12] and looks for a limit problem in which the adhesive is treated as a surface; on this theoretical approach, see for example A. Ait Moussa and J. Messaho [1], Acerbi, Buttazzo and Percivale [2], Licht and Michaille [4] and A. Ait Moussa and L. Zlaïji [8].

In the present work, we consider a structure containing a plastic thin oscillating layer whose thickness, rigidity and periodicity depend on a parameter ε intended to tend towards 0. For such a structure, we treated the scalar case of a thermal conductivity problem in [3]. The aim of this work is to study the limit behavior of an elasticity problem with a convex energy functional posed in such a structure. This paper is organized in the following way. In section 2, we state the problem to study; we give some notation and define the functional spaces for this study in section 3. In section 4, we study the problem (4.1). Section 5 is reserved for the determination of the limit problems and our main result.
Statement of the problem

We consider a structure occupying a bounded domain Ω in R3 with Lipschitzian boundary ∂Ω. It is constituted of two elastic bodies joined together by a rigid thin layer with an oscillating boundary; the latter obeys a nonlinear elastic law of power type, and the stress field is related to the displacement field by this power-type constitutive law. The thin layer occupies the region Bε, and Ωε = Ω \ Bε represents the regions occupied by the two elastic bodies, see Figure 1, ε being a positive parameter intended to approach 0, and Σ = {x = (x′, x3) / |x3| = 0}. The structure is subjected to a density of volume forces f : Ω → R3, and it is fixed on the boundary ∂Ω. The equations relating the stress field σε, σε : Ω → R9_S, and the displacement field uε involve the elasticity coefficients a_ijkh; here R9_S denotes the vector space of symmetric square matrices of order three, and the e_ij(u) are the components of the linearized deformation tensor e(u). ϕε is a bounded real function, ]0, ε[²-periodic. In the sequel, we assume that the elasticity coefficients a_ijkh satisfy the hypotheses (2.1)-(2.3). We begin by introducing some notation which is used throughout the paper: x = (x′, x3). In the following, C will denote any constant independent of ε, and [v] is the jump of a displacement field v through Σ.

Functional spaces

First, we introduce the space Vε of displacements u such that [u]ε, the jump of u on Σ±ε, is well defined and u = 0 on ∂Ω; we easily show that Vε is a Banach space with respect to its natural norm. Our goal in this work is to study the problem (Pε) and its limit behavior when ε tends to zero.

Study of the problem

The problem (Pε) is equivalent to the minimization problem (4.1); to study the problem (Pε), we will therefore study the minimization problem (4.1). The existence and uniqueness of solutions to (4.1) is given in the following proposition.

Proof: From (2.1) and (2.3), we easily show that the energy functional in (4.1) is weakly lower semicontinuous, strictly convex and coercive over Vε. Since Vε is not reflexive, we may not directly apply the result given in Dacorogna [13], but we can proceed by using the compact Sobolev embedding of the space LD0(Ω) into the reflexive space Lq(Ω), where q ∈ ]1, 3/2]; for more information, see Temam ([5], p. 117). On the other hand, let un be a minimizing sequence for (4.1); to simplify the writing, let Fε denote the energy functional in (4.1). Using the coercivity of Fε, we may then deduce that there exists a constant C > 0, independent of n, such that un is bounded in Lq. Therefore, for a subsequence of un, still denoted by un, there exists u0 ∈ Vε such that un ⇀ u0 in Vε. The weak lower semicontinuity and the strict convexity of Fε then imply the result. ✷

Lemma 4.2. For the solution uε of (4.1), the estimates (4.2) and (4.3) hold. Proof: We take advantage of the fact that uε vanishes on ∂Ω; since LD0 ֒→ Lq(Ω, R3) for all q ∈ [1, 3/2], in particular for q0 = 3/2 (we denote by q′0 the conjugate of q0), the Hölder inequality applies. Since uε = 0 on ∂Bε, one has, according to a Poincaré-type inequality (see [5]), a corresponding bound on the layer; as ϕε is Y-periodic, for small enough ε the required estimate follows. According to (4.4), and using (4.5) and (4.6), we then obtain (4.2) and (4.3). According to (4.2) and (4.3), for small enough ε the sequence uε is bounded in LD0(Ω). ✷

We give some lemmas that will be used in the sequel.

Lemma 4.4. Let u be a regular function defined in a neighborhood of Σ. This lemma is a consequence of ([2], Proposition 2).
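For the reader's convenience, we recall the standard definition of epi-convergence used below; this is the textbook statement, as in the reference [9] cited for the diagonalization lemma, not a reproduction of the paper's own Annex. A family of functionals F_ε τ-epi-converges to F at u when both conditions hold:

\[
\text{(i)}\quad \forall\, u_\varepsilon \xrightarrow{\;\tau\;} u:\ \liminf_{\varepsilon \to 0} F_\varepsilon(u_\varepsilon) \;\ge\; F(u),
\qquad
\text{(ii)}\quad \exists\, u_\varepsilon \xrightarrow{\;\tau\;} u:\ \limsup_{\varepsilon \to 0} F_\varepsilon(u_\varepsilon) \;\le\; F(u).
\]

Condition (i) governs the lower epi-limit estimated below, while condition (ii) is realized by the recovery sequence built from the cut-off function θ in the upper epi-limit argument.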
To apply the epi-convergence method, we need to characterize the topological spaces containing any cluster point of the solution of the problem (4.1) with respect to the topology used; the adequacy of the weak topology is ensured by Lemma 4.2. The characterization of the topological spaces is given in the following proposition.

Proposition 4.5. The solution uε of the problem (4.1) possesses a cluster point u* in LD0(Ω) with respect to the weak topology, and u*|Σ is a weak cluster point of wε in LD0(Σ, R3).

Proof: According to (4.2) and (4.3), for small ε the solution uε is bounded in LD0(Ω); it is then relatively compact in L1(Ω), as a consequence of ([5], Theorem 1.4, p. 117), and so is e(uε). Hence, for a subsequence of uε, still denoted by uε, there exists u* ∈ L1(Ω) such that the stated convergences hold. Thanks to Lemma 4.2 and Young's inequality, we may then pass to the limit. ✷

Remark 4.6. Proposition 4.5 remains true for any weak cluster point u of a sequence uε in LD0(Ω, R3) satisfying (4.2) and (4.3).

To study the limit behavior of the solution of the problem (4.1), we use the epi-convergence method (see Annex, definition).

Limit Behavior

In this section, we are interested in the asymptotic behavior of the solution of the problem (4.1) as ε approaches zero. In the sequel, we consider the functionals defined in (5.1). We denote by τf the weak topology on the space. We shall characterize the epi-limit of the energy functionals given by (5.1) in the following theorem.

Theorem 5.1. Under (2.1), (2.2), (2.3) and for f ∈ L∞(Ω, R3), there exists a functional F such that Fε epi-converges to F in the topology τf, where F is given below.

We are now in a position to determine the upper epi-limit. Let u ∈ LD0(Ω); as C∞(Ω) is dense in LD0(Ω) (see [5], p. 116), there exists a sequence un → u. Let us consider the recovery sequence built with θ, where θ is a regular cut-off function; we then obtain the corresponding energy estimate. As ϕε is bounded and ϕε → m(ϕ) in L1(Ω) (see Annex), by passing to the upper limit we obtain the upper bound. Since un → u in C∞(Ω), according to the classical diagonalization lemma (see [9]), there exists a real function n(ε) realizing the diagonal sequence.

We are now in a position to determine the lower epi-limit, with limit energy involving the volume term (1/2)∫Ω a_ijhk e_ij(u) e_hk(u) dx + m(ϕ)(·). Suppose that lim infε→0 Fε(uε) < +∞ (otherwise there is nothing to prove); then there exists a subsequence of Fε(uε), still denoted by Fε(uε), and a constant C > 0 such that the energies are bounded. Then χΩε e(uε) is bounded in L2(Ω), so for a subsequence of χΩε e(uε), still denoted by χΩε e(uε), we show easily, as in the proof of the above proposition, the corresponding weak convergence. From the subdifferentiability inequality of u ↦ (1/2)∫Ωε a_ijhk e_ij(u) e_hk(u) dx, and passing to the lower limit, we obtain the first lower bound. According to the diagonalization lemma ([9], Lemma 1.15, p. 32), there exists a function η(ε) : R+ → R+ decreasing to 0 as ε → 0 such that the diagonal estimate holds. According to Lemma 4.4, with wε the sequence defined before Proposition 4.5, we obtain the trace identity, where, according to Lemma 4.3, for g ∈ D(Σ, R9) the corresponding estimate holds; thanks to Proposition 4.5 and to ϕε → m(ϕ) in L1(Σ) (see Lemma 7.1, Annex), passing to the limit we obtain the surface contribution.
By passing to the limit (η → 0) in (5.4), we obtain (5.5). From the definition of Bη together with (5.3), and then from (5.2) and (5.6), we deduce that there exists a constant C > 0 and a subsequence of Fε(uε), still denoted by Fε(uε), for which the lower bound holds. Hence the proof of Theorem 5.1 is complete. ✷

In the sequel, we determine the limit problem linked to (4.1) when ε approaches zero, thanks to the epi-convergence results (see Annex, Theorem). According to the uniqueness of solutions of problem (5.6), uε admits a unique τf-cluster point u*, and therefore uε ⇀ u* in LD0(Ω). ✷

Since ϕ is bounded in Σ, for every s ≥ 1 there exists a constant C > 0 such that the corresponding bound holds.
2018-12-21T02:05:49.595Z
2014-09-11T00:00:00.000
{ "year": 2014, "sha1": "9b934ef76848f3af096f2d378bff673767f70053", "oa_license": "CCBY", "oa_url": "http://periodicos.uem.br/ojs/index.php/BSocParanMat/article/download/18812/11413", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9b934ef76848f3af096f2d378bff673767f70053", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
103144388
pes2o/s2orc
v3-fos-license
Dyeing Fabrics by Using Extracts from Mulberry Branch/Trunk 1. Dyeability and Fluorescence Property

The dyeing of wool, silk, cotton, ramie, nylon, acrylic and polyester fabric by using the extracts from mulberry branches and trunks was tried and the dyeability was studied. While the dyeability of the ethanol extracts from mulberry is low, that of the water extracts is high for wool, nylon and silk fabrics. They are dyed brownish and yellowish colours. The obtained colours depend on the extracts concentration in the dye solution, the dyeing time, the dye solution pH and the dyeing temperature. Wool, nylon and silk fabrics are dyed deeper with an increase in the dyeing temperature. The mulberry extracts show fluorescence and reducing properties. The results indicate that the mulberry extracts contain flavonols such as morin, kaempferol or quercetin, which form complexes with Al and show fluorescence. Wool treated with the mulberry extracts or with AlCl3/mulberry extracts shows fluorescence under ultraviolet light irradiation.

Introduction

Mulberry (Morus) trees belonging to the family of Moraceae are important plants that have been used for sericulture to produce silk fibres, as is generally well known. The leaves, fruits and root barks of mulberry trees have long been used widely in the fields of silk production, food industries and medicines [1].* On the other hand, while the mulberry branches and trunks have been used for wood products and paper to a limited extent [1], the great mass of them are treated as industrial wastes. However, the peels of black mulberry (Morus nigra) were used to, for example, colour woods [2]. Pigment ingredients were extracted from the peels with water in an ultrasonic bath, and wood samples were treated by immersing them into the solution containing the extracts. The colours (absolute colour values) obtained in that study were not described in the article, and it is estimated from the changes in the ΔL*, Δa* and Δb* values that the wood may be coloured brown.

*Part of the data of this study were presented by the authors at the Annual Meeting of The Textile Machinery Society Japan on 5-6 June (2015) at Osaka, Japan (Proceedings: 68, 100-101); the 13th Asian Textile Conference on 3-6 November (2015) at Geelong, Australia (Proceedings ID: 3(C), 1007-1009); Kuroda, A. (2016) Master Thesis, Kyoto Institute of Technology, Kyoto, Japan; the 3rd International Symposium on Advances in Sustainable Polymers on 4-6 August (2016) at Kyoto, Japan; and the 9th International Conference on Fiber and Polymer Biotechnology on 7-9 September (2016).
The exploitation and development of novel sustainable dyestuff materials and the effective utilization of industrial wastes are very important for establishing a sustainable society and achieving environmental preservation. Under such circumstances, the authors tried to dye fabrics with mulberry extracts in this study. The dyeing of fabrics by using a dyestuff obtained from waste mulberry branches and trunks has not been studied with the aim of applying the dyestuff to industrial uses. If a useful dyestuff could be obtained from the mulberry extracts, it is expected that the technique would contribute to the efficient use of mulberry wastes. The characteristics of mulberry trees are as follows: 1) the photosynthetic rate and the growth rate of the tree are high [3]; 2) many parts of the whole plant are useful; 3) the tree is ecological [1]; 4) mulberry is generally grown without pesticides; and so on. Therefore, it is a great advantage to take dyestuffs from mulberry branches and trunks from the viewpoints of productivity, sustainability, ecology and safety. It can be said that the mulberry branch dyestuff could become a useful dyestuff in the future.

In this study, as a first step, the dyeability of the extracts from mulberry branches and trunks for natural and chemical fibres such as wool, silk, cotton, ramie, nylon, acrylic and polyester fabrics was investigated. The properties of the mulberry extracts were also examined.

Extraction from Mulberry Trees

The mulberry branches and trunks (Morus australis and Morus lhou) were obtained from the mulberry field of Kyoto Institute of Technology. The woods with barks were crushed with a mill (Osaka Chemical Wonder Blender WB-1) and were extracted with ethanol (purity: 99.5%) at 78˚C or with distilled water at 100˚C for 4 h. The extracts were concentrated and dried. The dried mulberry extracts were ground into powder.

Dyeing

The oily mulberry extracts (0.50 g), which were obtained from the extraction with ethanol, were dissolved into 49.5 g of an ethanol/distilled water mixed solvent (1:1 mass ratio). A wool fabric sample was immersed first into distilled water at room temperature (RT) for 10 s and then into the mulberry extracts solution at 40˚C for 3 h. The dyebath was shaken at 80 strokes per minute. The powder mulberry extracts, which were obtained from the extraction with distilled water, were dissolved into distilled water to prepare a 2.0 wt% solution. Each of the fabric samples was immersed first into distilled water at RT for 10 s and then into the mulberry extracts solution at a fixed temperature (30˚C-90˚C) for 3 h. The dyebath was shaken at 80 strokes per minute. The liquor ratios were 179:1 for silk, 66.0:1 for wool, 90.6:1 for cotton, 80.1:1 for ramie, 160:1 for nylon, 108:1 for acrylic and 157:1 for polyester. Each of the fabrics was washed with 50 ml of a 2.0 wt% Marseille soap solution at 40˚C for 10 min, rinsed with 100 ml of distilled water at 40˚C for 5 min twice and air-dried.

Colour Measurements

The obtained colour of the fabric samples was measured by using a Konica Minolta CM-2600d spectrocolourimeter.

Ultraviolet-Visible Absorption Spectrophotometry and Fluorescence Spectroscopy

AlCl3 (0.050 M) was dissolved into freshly distilled water to prepare solutions.
Reducibility of Mulberry Extracts

As one of the evaluation techniques for the reducibility of the mulberry extracts, the free radical scavenging method using the 1,1-diphenyl-2-picrylhydrazyl radical (DPPH) was adopted. The DPPH method is a common antioxidant assay that is widely used [6]. When a reductant reacts with DPPH, the radical form of DPPH turns into a protonated (non-radical) form, and the absorbance of the corresponding signal in the DPPH solution spectrum decreases. The slope of the relationship between the concentration of a reductant and the absorbance is associated with the reducibility: a steeper negative slope corresponds to a higher radical scavenging ability, that is, reducibility. The index of reducibility (RAC) was estimated from the slope; RAC is determined from the absolute value of the slope.

The results show that the dyeabilities are due to the chemical characteristics and higher-order structures of the fibres. In fact, these fibres are dyed with acid dyes, and their dyeabilities towards this sort of dye molecule are very similar.

Dyeability of Mulberry Extracts

The dependence of the dyeability of wool on the mulberry extracts concentration and on the dyeing time was also examined [7]. The results show that higher dyeability (lower L* and higher b* values) is obtained by using a higher concentration of the mulberry extracts and a longer dyeing time, as expected. The colour fastness to washing and light of the fabrics dyed with the mulberry extracts is being studied by the authors and will be reported.

Dyeability According to Solution pH

It is well known that the colour of natural pigments changes with pH [8] or through copigmentation [9]. The dyeability of wool, silk and nylon is affected by pH in the case of dyeing with acid dyes [10]. It is therefore interesting to investigate the effect of the pH of the mulberry extracts dyeing solution on the dyeability. The colour of such flavonoids changes under basic conditions [13] and turns into a duller and/or darker one. If the pH effect on the charge and structure of wool keratin dominated the dyeability, the dyeing results would differ from those obtained. The negative charge becomes predominant for the keratin protein at higher pH, and the positive one is increased at lower pH. If the dye molecules were chiefly anionic, a lower solution pH would be suitable for higher dyeability, and if they were cationic, a higher pH would be suitable. Only one of the dyeabilities, obtained at either lower or higher pH, should be increased if the charge of keratin primarily controlled the colour. Therefore, it is concluded that the pH dependence of the obtained colour of the wool may be due principally to the colour change of the pigments.
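A minimal sketch of the RAC estimation described under Reducibility of Mulberry Extracts: regress the absorbance at 520 nm against the reductant concentration and take the absolute value of the slope. The concentrations and absorbances below are illustrative placeholders, not measured values.

```python
import numpy as np

# Hypothetical A520 readings at increasing mulberry-extracts concentrations
c    = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # concentration (arbitrary units)
a520 = np.array([0.80, 0.72, 0.64, 0.57, 0.48])  # absorbance at 520 nm

# Linear fit: the steeper the negative slope, the stronger the reducibility
slope, intercept = np.polyfit(c, a520, 1)
r_ac = abs(slope)
print(f"R_AC ~ {r_ac:.3f}")
```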
Dyeability According to Dyeing Temperature

It is also generally well known that dyeing results are significantly influenced by the dyeing temperature [14]. Therefore, it is important to investigate the temperature dependence of the dyeability for the mulberry extracts. The dyeability of wool, silk and nylon using the mulberry extracts was examined. The amount of dyestuff adsorbed onto the fibres, its distribution in the fibre materials, the sort and composition of the pigments adsorbed and so on are strongly controlled by the dyeing temperature, and the dyeing results (obtained colours) are therefore associated with them. If the mulberry extracts contain pigments that work as reductants, the colour of the pigments could be changed by oxidation. The oxidation reaction of the pigments may be promoted by heating. Therefore, there is a possibility that a higher temperature during the dyeing accelerates the oxidation of the pigments of the extracts. However, no change in the colour of the dyeing solution was observed even at higher temperatures. Figure 5 shows the absorption spectra for the mulberry extracts aqueous solution before heating and after 3 h of heating. Both spectra are very similar. The results show that the pigments contained in the mulberry extracts do not change chemically during the dyeing at higher temperatures, and the differences in the resulting fabric colours may be induced by another mechanism. The dyeing rate increases with increasing temperature within a certain dyeing time [14]. It is estimated that the increase in the diffusion rate of the pigment molecules in the wool, silk and nylon fibres might contribute to the deepening of the dyeing colours. Further investigation is needed to clarify the mechanism.

Properties of Mulberry Extracts (Fluorescence and Reducibility)

Morin, isorhamnetin, kaempferol, quercetin and myricetin show fluorescence when they form a complex with Al3+ [15]. If such flavonols, which have 3-hydroxyl and 4-carbonyl groups, were contained in the mulberry extracts, they could show fluorescence; in the fluorescence measurements, the emission was distinguished from the light scattered from water [16]. The highest emission for the mulberry extracts solution is observed with 310 nm excitation light, as seen in Figure 7; this may be related to the colour shift observed in Figure 6(f), because the wavelength of the 365 nm light is near 410 nm. In fact, the 365 nm light source also irradiates short-wavelength visible light.

The phenolic substances contained in trees belonging to the Moraceae family have been studied and analysed [17]; it was reported that mulberry species such as Morus (M.) alba, M. indica, M. serrata, M. laevigata and M. rubra were among those studied. The results indicate that the mulberry extracts contain flavonols such as morin and quercetin, as described previously. If they contain such flavonols, they show reducing properties. It is known that many flavonoids show antioxidant characteristics. Then, the reducibility of the mulberry extracts was examined by the DPPH method. Figure 10 shows the plot of the DPPH method used to determine the reducibility of the mulberry extracts. The RAC value, an index indicating reducibility, obtained for the mulberry extracts is 16.1. The RAC value obtained for a standard reductant compound, DL-α-tocopherol, under the same experimental conditions is 546. The RAC value for the mulberry extracts is smaller than that of DL-α-tocopherol. However, the result shows that the mulberry extracts have reducing properties. The results indicate that the flavonoids contained in the extracts may play an important role in the dyeing as dyestuffs. Further analytical study is needed to know the composition of the mulberry extracts.
The resulting colour was expressed in the L*a*b* standard colourimetric system (CIE 1976). The colour measurements were made employing the CIE standard illuminant D65, a 10° viewing angle and the SCI (specular component included) mode. All the light reflected from the sample, including the regular reflection, is integrated under the SCI mode. The a* and b* are the chromaticity coordinates, and L* is the lightness index in the L*a*b* system. Positive values of a* indicate reddish colours and negative values greenish ones, while positive values of b* indicate yellowish colours and negative values bluish ones. The C* is the chroma, calculated as C* = {(a*)2 + (b*)2}1/2 [4] [5].

The measurements of the ultraviolet-visible (UV-Vis) light absorption spectra for the mulberry extracts aqueous solutions were made with a Hitachi U-3900H spectrophotometer at RT. The sample solutions were prepared by dissolving the mulberry extracts powders into freshly distilled water. Acidic or basic mulberry extracts solutions were prepared by dissolving the powder into 2.0 × 10−2 M citric acid aqueous solution or 1.0 M Na2CO3/NaHCO3 aqueous solution, respectively. All of the sample aqueous solutions were measured at RT. The fluorescence spectra of the mulberry extracts solution samples were measured with a JASCO FP-6500 fluorescence spectrophotometer at RT. The mulberry extracts powder (1.0 × 10−2 wt%), or the powder (1.0 × 10−2 wt%) together with AlCl3, was dissolved to prepare the solution samples.

DPPH (Mw = 394.32, Tokyo Chemical Industry), 2-(N-morpholino)ethane-sulfonic acid (MES, Mw = 213.25, Nacalai Tesque (NT)) and DL-α-tocopherol (Mw = 430.71, NT) were used without further purification. MES was dissolved into freshly distilled water and 0.1 M NaOH aqueous solution (7.14 × 10−5 M, pH = 6.0), and DPPH was dissolved into ethanol (250 μM). The mulberry extracts powder was dissolved into freshly distilled water, and sample aqueous solutions of each concentration were prepared. Ethanol (42.2 g) was added into 49.0 g of each mulberry extracts aqueous solution. DL-α-tocopherol was dissolved into ethanol, and solutions of each concentration were prepared. The MES solution (1.0 g) and the DPPH solution (7.8 g) were mixed with each of the mulberry extracts or DL-α-tocopherol solution samples (91.2 g) and were stirred for 20 min at 23˚C in the dark. The UV-Vis absorption spectra of the sample solutions were measured with a Hitachi U-3900H spectrophotometer at RT. The absorbance at 520 nm (A520) was plotted against the sample concentration (c).

The lower-pH aqueous treatment solution was prepared with the mulberry extracts and citric acid (2.0 × 10−2 M). The medium-pH solution was prepared with only the mulberry extracts. The higher-pH one was prepared with the mulberry extracts and Na2CO3/NaHCO3 (1.0 M). Figure 2 shows the photographs of the wool fabrics treated with the mulberry extracts solutions of each pH (2.5, 6.5 and 9.5). The colour of the wool dyed at pH = 2.5 is more yellowish compared with that of the sample at pH = 6.5, and that at pH = 9.5 is a little reddish. The L* and b* values of the sample at pH = 2.5 (83.0 and 36.7, respectively) are higher, and those at pH = 9.5 (80.7 and 20.5, respectively) are lower, than those at pH = 6.5 (81.8 and 27.7, respectively). The results show that the wool is dyed slightly yellowish at lower pH and slightly reddish at higher pH by the mulberry extracts. It is expected that the pH dependence of the obtained colour of the dyed wool may be caused by the change in colour
of the extracts in the dyeing solution.

Figure 3 shows the UV-visible absorption spectra of the mulberry aqueous solutions, whose pH values are 2.5, 6.5 and 9.5. While the spectrum for the solution of pH = 2.5 is similar to that of pH = 6.5, the intensity of the acidic solution is lower than that of the neutral solution, and their spectrum shapes in the region between 360 and 460 nm are different. A slight difference in the colour between the neutral and acidic solutions is recognised. On the other hand, a considerable difference in the spectrum shape between the solutions of pH = 6.5 and 9.5 is observed, especially in the region from 280 to 600 nm. In fact, the colour of the basic solution is different from that of the neutral one. The colour of natural pigments such as anthocyanins (including anthocyanidins) changes according to pH [11] [12]. The results suggest that the colour change of the pigments contained in the mulberry extracts with the solution pH induces the different colours of the dyed wool.

The change in the resulting colours of the dyed fabrics depending upon the dyeing temperature was observed. The colours of the three kinds of dyed fabrics commonly become more brownish and darker with an increase in temperature. The deepest colours for dyed wool, silk and nylon are obtained at 90˚C. The obtained colour values are summarised in Table 2.

If such flavonols, which have 3-hydroxyl and 4-carbonyl groups, were contained in the mulberry extracts, they could show fluorescence with the addition of AlCl3 into their solution. The fluorescent complexes form by the coordination of the 3-hydroxyl and 4-carbonyl groups of the flavonoids to Al3+. Figure 6 shows the pictures of the AlCl3, mulberry extracts and mulberry extracts/AlCl3 solutions, which are irradiated with UV lights of which the centre wavelengths (λ) are 312 or 365 nm. The UV light sources are not monochromatic ones. While no light emission is observed for the AlCl3 solution, the mulberry extracts and mulberry extracts/AlCl3 solutions emit fluorescent light under the UV irradiation. It is found that more intense emission from the mulberry extracts solution is obtained with 312 nm UV irradiation (Figure 6(b)) than with 365 nm irradiation (e), and that the emission light colours from the mulberry extracts/AlCl3 solution with 312 nm (c) and 365 nm UV (f) irradiation are different. The emission is naturally not observed for the AlCl3 solution. The results show that the mulberry extracts may contain the flavonoids mentioned above, and they might also contain Al compounds or some other substances which show fluorescence. Then, fluorescence spectra were measured to get information on the optical properties of the mulberry extracts.

Figure 9. Photographs of wool fabrics treated with AlCl3 aqueous solution (a) and (d); with mulberry extracts aqueous solution (b) and (e); or with first AlCl3 aqueous solution and second mulberry extracts aqueous solution (c) and (f). Samples were under visible light (a)-(c) and under 365 nm UV light (d)-(f).

Figure 10. Plot of the absorbance at 520 nm (A520) of the DPPH buffer (MES) solution mixed with mulberry extracts against the concentration of mulberry extracts (c).
The a* values of the treated fabrics are negative, which means the colours include a green component. Effective dyeing results were obtained for the wool, silk and nylon samples, and it is concluded that the mulberry extracts dye these three kinds of fibres, which have in common charges and amide bonds in their molecular chains.

Table 1. The colour values for wool, silk, cotton, ramie, nylon, acrylic and polyester fabrics before and after the treatment with mulberry extracts aqueous solution. Conc. of mulberry extracts: 2.0 wt%; dyeing temperature: 40 °C; dyeing time: 3 h; pH: 6.5.
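The DPPH assay described above plots A520 against extract concentration (Figure 10); where that plot is approximately linear, the concentration that halves A520 (an EC50-style index of radical-scavenging activity) can be read off a least-squares fit. The sketch below illustrates this with invented placeholder data, not values from the study.

```python
import numpy as np

# Hypothetical (concentration, A520) pairs in the linear range of the assay
c = np.array([0.00, 0.02, 0.04, 0.06, 0.08])     # extract concentration, wt%
a520 = np.array([0.92, 0.74, 0.55, 0.38, 0.20])  # DPPH absorbance at 520 nm

slope, intercept = np.polyfit(c, a520, 1)  # linear fit: A520 = slope*c + intercept
a0 = intercept                             # fitted absorbance with no extract added
ec50 = (0.5 * a0 - intercept) / slope      # concentration where A520 falls to half of a0
print(f"slope = {slope:.2f}, EC50 = {ec50:.4f} wt%")
```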
Evaluation of a Computer-aided Lung Auscultation System for Diagnosis of Bovine Respiratory Disease in Feedlot Cattle

Background A computer-aided lung auscultation (CALA) system was recently developed to diagnose bovine respiratory disease (BRD) in feedlot cattle. Objectives To determine, in a case-control study, the level of agreement between CALA and veterinary lung auscultation and to evaluate the sensitivity (Se) and specificity (Sp) of CALA to diagnose BRD in feedlot cattle. Animals A total of 561 Angus cross-steers (initial body weight = 246 ± 45 kg) were observed during the first 50 days after entry to a feedlot. Methods Case-control study. Steers with visual signs of BRD identified by pen checkers were examined by a veterinarian, including lung auscultation using a conventional stethoscope and CALA that produced a lung score from 1 (normal) to 5 (chronic). For each steer examined for BRD, 1 apparently healthy steer was selected as a control and similarly examined. Agreement between CALA and veterinary auscultation was assessed by the kappa statistic. CALA's Se and Sp were estimated using Bayesian latent class analysis. Results Of the 561 steers, 35 were identified with visual signs of BRD and 35 were selected as controls. Comparison of veterinary auscultation and CALA (using a CALA score ≥2 as a cutoff) revealed a substantial agreement (kappa = 0.77). Using latent class analysis, CALA had a relatively high Se (92.9%; 95% credible interval [CI] = 0.71-0.99) and Sp (89.6%; 95% CI = 0.64-0.99) for diagnosing BRD compared with pen checking. Conclusions CALA had good diagnostic accuracy (albeit with a relatively wide CI). Its use in feedlots could increase the proportion of cattle accurately diagnosed with BRD.

Accurate diagnosis of bovine respiratory disease (BRD) in feedlot cattle is crucial for effective treatment and implementation of prevention strategies. 1 Furthermore, because BRD treatment relies mainly on the use of antimicrobials, an accurate BRD diagnosis should promote prudent use of antimicrobials by reducing unnecessary treatments. Unfortunately, current diagnostic methods to identify feedlot cattle affected with BRD are not always accurate. 2 Indeed, these methods, based on visual inspection by pen checkers, are highly subjective, even when combined with measurement of rectal temperature. 3 Based on a latent class analysis using clinical inspection throughout the feeding period and the presence of lung lesions at slaughter as tests for BRD diagnosis, the sensitivity (Se) and specificity (Sp) of clinical inspection were 62 and 63%, respectively. 2 Several methods, including lung ultrasonography, radiographs, lung auscultation, and determination of serum haptoglobin concentration, have been used to improve the accuracy of BRD diagnosis. 3 Among these methods, lung sound auscultation is inexpensive, can be conducted chute-side, and is highly specific in dairy calves compared with ultrasonographic assessment of lung lesions. 4 Unfortunately, lung auscultation is also subjective and requires a well-trained person with good acoustic abilities to correctly recognize abnormal sounds. To overcome these drawbacks, a computer-aided lung auscultation (CALA) system a has been developed. By automatically classifying acoustic patterns into lung scores, this system could increase the accuracy of BRD diagnosis. However, to be useful, its accuracy in diagnosing BRD must be critically evaluated in a case-control study.
The objectives were to: (1) determine the level of agreement between CALA and lung auscultation by an experienced veterinarian and (2) evaluate, using Bayesian latent class analysis, the diagnostic accuracy (Se, Sp) of CALA for BRD in feedlot cattle. We hypothesized that a moderate to substantial agreement exists between CALA and veterinary auscultation and that CALA is an accurate method to diagnose BRD.

Animals

All management and procedures were reviewed and approved by the University of Calgary Animal Care Committee (AC13-0212) and were in accordance with guidelines of the Canadian Council on Animal Care. 5 Angus cross-steers (n = 561; initial body weight = 246 ± 45 kg) at high risk of developing BRD because of recent weaning, commingling, and being auction-market derived were studied during the first 50 days after their arrival at a commercial feedlot in Western Canada. Upon arrival, steers were allowed to rest for at least 12 h (with ad libitum access to hay and water) before processing. At processing, steers received a subcutaneous injection of a long-acting macrolide b and were vaccinated against infectious bovine herpes virus-1, c bovine viral diarrhea virus (types I and II), c bovine parainfluenza-3, c bovine respiratory syncytial virus, c Mannheimia haemolytica, d Histophilus somni, e and clostridial pathogens. e Steers were also dewormed with a pour-on ivermectin solution. f Steers were fed in 2 large outdoor dirt-floor pens (67 × 61 m with a 64-m fence-line concrete feed bunk) with approximately 280 steers per pen. Steers were fed twice daily, at 0630 and 1430 hours, a 55-63% concentrate receiving/growing diet formulated to meet or exceed nutrient requirements. 6 Each morning before feeding, bunks were visually evaluated and feed deliveries were adjusted to ensure that sufficient feed was available for ad libitum consumption. On day 50, steers were revaccinated g and implanted. h

Study Design: Case-Control Study

During the study period, steers were observed daily by experienced pen checkers for detection of clinical illness. Steers with visual signs of BRD, including one or more of depression, nasal or ocular discharge, cough, increased respiratory rate, and labored breathing, were removed from the pen by pen checkers and examined by an experienced veterinarian. For each steer suspected of having BRD, 1 apparently healthy steer with no visual signs of BRD or other disease was conveniently selected, based on proximity to the gate or to the apparently sick animal, as a pen-matched contemporary control and similarly examined. Clinical examination included measurement of respiratory rate using a stopwatch and rectal temperature, complete lung auscultation using a conventional stethoscope i to detect abnormal lung sounds including increased bronchial sounds, crackles, and wheezes, 7 and focused lung auscultation using the CALA system. The veterinarian who performed the clinical examinations did not know which animals were pulled as BRD cases or controls, and veterinary auscultation was always performed before CALA to avoid potential bias (i.e., human auscultation blinded to CALA results). Steers with visual signs of BRD and a rectal temperature ≥40°C received flunixin meglumine and florfenicol SC. j

Computer-aided Lung Auscultation

Computer-aided lung auscultation consisted of holding the diaphragm of an electronic stethoscope a over the 5th intercostal space of the right thoracic wall, approximately 10 cm above the elbow, and recording lung sounds for 8 s (as per the manufacturer's instructions).
Recorded lung sounds were then automatically transmitted wirelessly to a computer located within 3 m of the stethoscope and analyzed by software provided by the manufacturer. a This program: (1) displayed a spectrogram of the recorded sounds; (2) preprocessed the lung sounds to remove heart sounds and potential interference from the environment (chute noise, etc.); and (3) classified acoustic patterns into lung scores ranging from 1 to 5 (1 = normal, 2 = mild acute, 3 = moderate acute, 4 = severe acute, and 5 = chronic). Lung scores were transmitted back to the stethoscope and displayed.

Serum Haptoglobin Determination

In addition to the clinical examination, a blood sample was collected from each steer to detect inflammation by measurement of serum haptoglobin (Hap) concentration. Serum haptoglobin concentrations were determined in duplicate using a commercial kit. k ,8

Data Analysis

Clinical findings (rectal temperature, respiratory rate per minute, serum Hap concentrations) of cattle examined for BRD and cattle selected as controls by pen checkers were compared using nonparametric (Mann-Whitney U-test) and parametric tests (Student's t-test). 1 The level of agreement between lung auscultation by an experienced veterinarian and CALA (using a CALA score ≥2 as a cutoff) was assessed using the kappa statistic. l The strength of agreement for the kappa coefficient was interpreted using the scale of Landis and Koch 9 : ≤0 = poor, 0.01-0.20 = low, 0.21-0.40 = fair, 0.41-0.60 = moderate, 0.61-0.80 = substantial, and 0.81-1 = almost perfect.
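The kappa computation on a 2×2 agreement table is straightforward to reproduce. The minimal Python sketch below is not the authors' code; its counts are reconstructed from the reported results (44 veterinarian-positive steers, i.e., 35 cases plus 9 controls with abnormal sounds, and 8 discordant results, all veterinarian-positive/CALA-negative), and it returns the reported kappa of 0.77.

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 table:
       a = both tests positive, b = test1+/test2-,
       c = test1-/test2+,      d = both tests negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                  # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Veterinarian auscultation (test1) vs CALA (test2)
print(round(cohen_kappa(a=36, b=8, c=0, d=26), 2))  # -> 0.77
```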
Because of the absence of a reference test to identify the true BRD status of the cattle (i.e., no gold standard), Bayesian latent class analysis was used to evaluate the Se and Sp of CALA for BRD diagnosis in feedlot cattle. 10 For this analysis, results of CALA were compared with the pen checker classification. A CALA score ≥2 was considered positive for BRD, whereas a CALA score = 1 was considered negative. Pen checker classification and accuracy were based on a previous study, 2 with cattle detected with visual BRD signs defined as BRD positive and cattle with no visual BRD signs (i.e., cattle selected as controls in this study) defined as BRD negative. Prior probability distributions of the tests' Se and Sp and of BRD prevalence used for the Bayesian analysis are shown (Table 1). Because no prior information on CALA's Se and Sp (Se CALA and Sp CALA) was available, uninformative prior probabilities in the shape of a uniform distribution between zero and one (modeled using a Beta(1,1) distribution) were chosen for Se CALA and Sp CALA. Prior probability distributions for the pen checkers' Se and Sp (Se p and Sp p) were chosen based on a previous study. 2 Prior probability distributions chosen for BRD prevalence were fairly noninformative (ranging from 30 to 70%, with a best guess of 50%, because of the case-control design). The final model used 2 tests and 1 population and assumed conditional independence of the tests. 10 Visual appraisal by pen checkers and CALA were considered conditionally independent, as they are not based on similar biological principles. Independence between these 2 tests was nevertheless confirmed by demonstrating that the covariances in healthy steers (11.6%; 95% credible interval [CI], −0.4 to 20.6) and in steers with BRD (8.6%; 95% CI, −2.7 to 20.6) crossed the value of 0, using Markov chain Monte Carlo methods with a Gibbs sampler. m ,11 Bayesian computations were implemented using free software. m The first 5,000 iterations were discarded as burn-in, whereas the next 100,000 were used to obtain posterior distributions. Convergence of the model was assessed by visual inspection of time series plots of selected variables and Gelman-Rubin diagnostic plots (after running multiple chains with various starting values). The posterior distributions of the tests' sensitivities and specificities and of disease prevalence are reported as medians and corresponding 95% CIs.

Results

Of the 561 steers, 35 (6.2%) were detected with visual BRD signs and 35 were selected as pen-matched controls. All steers with visual signs of BRD had abnormal lung sounds, including one or more of increased bronchial sounds, crackles, and wheezes, detected by auscultation by a veterinarian. Interestingly, 9 steers selected as controls also had abnormal lung sounds. Rectal temperatures, respiratory rates per minute, and serum Hap concentrations differed (P < .05) between steers detected with visual signs of BRD and those selected as controls (Table 2). A CALA score was obtained from all examined steers (n = 70), with scores ranging from 1 to 5 (Fig. 1). Comparison of CALA results with auscultation by a veterinarian (using a CALA score ≥2 as a cutoff) revealed a substantial agreement (kappa = 0.77; 95% CI, 0.62-0.92), with 62 concordant results out of the 70 clinical examinations (Table 3). The 8 discordant results were attributed to the presence of abnormal lung sounds detected by auscultation by a veterinarian, but not by CALA. Pen checker classifications and CALA results were cross-classified into a 2 × 2 table (Table 4), which was used for the Bayesian latent class analysis. Posterior estimates (median and 95% CI) for Se CALA, Sp CALA, Se p, Sp p, and the prevalence of BRD are shown (Table 1). A uniform probability over the range 0-100% was used for the priors of CALA's Se and Sp. Computer-aided lung auscultation had good diagnostic accuracy, with relatively wide CIs: Se CALA and Sp CALA were estimated at 92.9% (95% CI, 0.71-0.99) and 89.6% (95% CI, 0.64-0.99), respectively. Compared with CALA, the pen checkers' accuracy was lower, with Se p and Sp p estimated at 63.5% (95% CI, 0.58-0.69) and 63.5% (95% CI, 0.60-0.66), respectively.

Discussion

In this study, there was a substantial level of agreement between CALA and lung auscultation performed by an experienced veterinarian. Compared with pen checking using Bayesian latent class analysis, CALA also had a relatively high Se (92.9%; 95% CI = 0.71-0.99) and Sp (89.6%; 95% CI = 0.64-0.99) for diagnosing BRD in feedlot cattle. The substantial agreement between CALA and veterinary auscultation was expected, as CALA's algorithm was initially trained to correctly classify abnormal lung sounds detected by experienced veterinarians (R. Geissler, personal communication). In this study, veterinary auscultation nevertheless detected abnormal lung sounds more often than CALA. This finding could be explained by a higher sensitivity of veterinary auscultation. Indeed, moderate sensitivity is a common drawback of computerized lung sound analysis. In a meta-analysis, 12 algorithms for the classification of lung sounds had an overall Se of 80% (95% CI = 72-86%) for the detection of abnormal lung sounds (wheezes and crackles) in humans when compared with auscultation by a trained person. However, further research is needed to confirm this hypothesis, as the Se of auscultation by a veterinarian was not calculated in this study.
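For readers who want to see the mechanics, the following is a minimal sketch of a two-test, one-population Bayesian latent class (Hui-Walter-type) model fitted by Gibbs sampling, the kind of analysis described above. The cross-classified counts are illustrative placeholders, not the paper's Table 4; the pen-checker priors are only roughly centred on the previously reported 63%; and far fewer iterations are run than the 100,000 used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2x2 counts (NOT the paper's Table 4): keys are (CALA, pen checker),
# 1 = test positive, 0 = test negative; 70 steers in total.
counts = {(1, 1): 31, (1, 0): 3, (0, 1): 4, (0, 0): 32}
N = sum(counts.values())

# Beta(a, b) priors: uninformative for CALA, roughly centred on 63% for the
# pen checker, and centred on 50% for prevalence.
prior = {"se1": (1, 1), "sp1": (1, 1), "se2": (63, 37), "sp2": (63, 37), "pi": (24, 24)}

pi, se1, sp1, se2, sp2 = 0.5, 0.8, 0.8, 0.6, 0.6  # starting values
draws = []
for it in range(20_000):
    # Step 1: sample the latent number of truly diseased steers in each cell,
    # assuming the two tests are conditionally independent given disease status.
    y = {}
    for (t1, t2), n_cell in counts.items():
        p_dis = pi * (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
        p_hea = (1 - pi) * ((1 - sp1) if t1 else sp1) * ((1 - sp2) if t2 else sp2)
        y[(t1, t2)] = rng.binomial(n_cell, p_dis / (p_dis + p_hea))
    D = sum(y.values())
    # Step 2: conjugate Beta updates given the latent disease counts.
    pi = rng.beta(prior["pi"][0] + D, prior["pi"][1] + N - D)
    tp1 = y[(1, 1)] + y[(1, 0)]  # diseased steers that CALA called positive
    tn1 = (counts[(0, 1)] - y[(0, 1)]) + (counts[(0, 0)] - y[(0, 0)])
    se1 = rng.beta(prior["se1"][0] + tp1, prior["se1"][1] + D - tp1)
    sp1 = rng.beta(prior["sp1"][0] + tn1, prior["sp1"][1] + (N - D) - tn1)
    tp2 = y[(1, 1)] + y[(0, 1)]  # diseased steers that the pen checker called positive
    tn2 = (counts[(1, 0)] - y[(1, 0)]) + (counts[(0, 0)] - y[(0, 0)])
    se2 = rng.beta(prior["se2"][0] + tp2, prior["se2"][1] + D - tp2)
    sp2 = rng.beta(prior["sp2"][0] + tn2, prior["sp2"][1] + (N - D) - tn2)
    if it >= 5_000:  # discard burn-in, as in the study
        draws.append((se1, sp1))

se_med, sp_med = np.median(np.array(draws), axis=0)
print(f"CALA posterior medians: Se ~ {se_med:.2f}, Sp ~ {sp_med:.2f}")
```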
In the absence of a perfect reference test (gold standard), the use of latent class analysis is considered the best method to estimate the accuracy of a new diagnostic test. 13 Indeed, latent class analysis refers to the idea that the true disease status of the animals is unknown and needs to be estimated from the data. If classification errors in the reference test are ignored, serious bias can be introduced into the assessment of the accuracy of the new test. For example, in the case of a reference test with Se <100% (such as pen checking, which has a Se estimated at 62.0% 2 ), samples that are falsely classified as negative by this imperfect test could be correctly detected as positive by a more sensitive new test, thus leading to a biased estimate of the Sp of the new test (in this case, too low). Furthermore, the Bayesian model used in this study allowed for the incorporation of prior scientific information on the variables to be estimated (test accuracies and disease prevalence). However, because we had a relatively small sample size, we chose noninformative prior probability distributions for Se CALA and Sp CALA. Although the use of noninformative prior distributions allows the posterior densities to be influenced more by the data than by the prior distributions, this could also explain why the 95% CIs for Se CALA and Sp CALA were relatively wide. Further research is therefore needed to narrow the CIs around CALA's Se and Sp and consequently to have more confidence in the results provided by this technology. It is noteworthy that the prior probability distributions chosen for the pen checkers' Se and Sp were based on a previous study and thus might not represent the Se and Sp of the pen checkers involved in this study, which could influence the estimated accuracy of CALA. However, additional analyses were conducted using modified prior distributions, and similar results for CALA's Se and Sp were obtained. Indeed, using a pen checker Se and Sp ranging from 50 to 100% with a best guess of 75% (i.e., a Beta(9.63, 3.88) distribution), we obtained a CALA Se and Sp of 91.9% (95% CI, 74.0-99.6) and 90.3% (95% CI, 71.1-99.5), respectively (data not shown). Therefore, the authors are confident that the choice of prior probability distributions based on a previous study did not bias the findings of this study.

The sensitivity obtained in this study for CALA was higher than anticipated. In a recent study on dairy calves, the Se of lung auscultation to diagnose BRD (defined as lung consolidation detected with ultrasonography) was only 5.9% (range, 0-16.7%). This difference in Se can be explained by the fact that CALA's algorithm included increased bronchial breath sounds in the calculation of lung scores, whereas in this previous study only crackles, wheezes or the absence of respiratory sounds was interpreted as abnormal. Indeed, in this previous study, the investigators did not interpret bronchial breath sounds (although highly Se to diagnose BRD) 7 as these sounds were considered too subjective.

Table 3. Agreement between lung auscultation by an experienced veterinarian using a conventional stethoscope i and computer-aided lung auscultation (CALA) for detection of abnormal lung sounds (e.g., increased bronchial sounds, crackles, and wheezes). 7 Cattle with a CALA score ≥2 were considered BRD positive (+), whereas cattle with a CALA score = 1 were considered BRD negative (−).

Table 4. Cross-classification of pen checker classifications and CALA results. Cattle with a CALA score ≥2 were considered BRD positive (+), whereas those with a CALA score = 1 were considered BRD negative (−).
The main advantage of CALA resides in its algorithm, which provides an objective lung score and thereby minimizes bias. On the basis of the higher specificity of CALA compared with pen checking, we infer that this technology has the potential to decrease the proportion of cattle falsely diagnosed with BRD and thus could promote prudent use of antimicrobials in commercial feedlots by reducing unnecessary treatments. Interpretation of CALA results in cattle previously identified by pen checkers as BRD-affected (a serial interpretation scheme under conditional independence) could increase the overall Sp of BRD diagnosis in feedlot cattle (Sp p+CALA = Sp p + Sp CALA − Sp p × Sp CALA = 96.1%) compared with pen checking alone (Sp p = 63.0%). 2 Furthermore, CALA does not require experience in lung auscultation and therefore could easily be used by the feedlot employees who have primary responsibility for the diagnosis and treatment of BRD. In conclusion, this study showed that CALA is a promising technology for improving the accuracy of BRD diagnosis in feedlots. Its use could increase the proportion of cattle accurately diagnosed with BRD through a reduction in false-positive diagnoses.
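The series-interpretation arithmetic quoted above can be checked directly from the point estimates given (Sp p = 0.63, Sp CALA = 0.896); the small difference from the quoted 96.1% is rounding.

```python
sp_pen, sp_cala = 0.63, 0.896
# Specificity when a positive diagnosis requires both tests to be positive
sp_serial = sp_pen + sp_cala - sp_pen * sp_cala
print(f"{sp_serial:.3f}")  # -> 0.962, i.e. approximately the reported 96.1%
```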
Laboratory evaluation of vasoreactivity: asymmetric dimethylarginine, nitric oxide, fibrinogen and high-sensitivity C-reactive protein in patients with polycystic ovary syndrome

Background-Aim We aimed to compare fibrinogen, high-sensitivity C-reactive protein (hsCRP), asymmetric dimethylarginine (ADMA) and nitric oxide (NO) levels as laboratory parameters of vasoreactivity in patients with PCOS. Material and Methods Thirty patients with PCOS and 30 women with normal ovulating cycles were enrolled. Serum levels of NO, ADMA, fibrinogen, FSH, LH and hsCRP were assessed and compared with the control group. Results The mean ADMA, fibrinogen and hsCRP levels were significantly higher, and NO concentrations were lower, in the patient group. A significant positive correlation was observed between ADMA and NO levels, between ADMA and fibrinogen (r=0.838, p<0.001), and between ADMA and hsCRP concentrations. Fibrinogen and NO, and NO and hsCRP levels were significantly negatively correlated in the patient group. In the control group, there was a positive correlation between fibrinogen and age, and a negative correlation between NO and FSH. Conclusion The present study determined a positive relation between ADMA levels and vasoreactive parameters in patients with PCOS. Women with PCOS have elevated levels of ADMA and fibrinogen and decreased NO, which make them candidates for cardiovascular disease. Further studies are required to establish the association of ADMA with vascular reactivity.

Introduction

Polycystic ovary syndrome (PCOS) is a common endocrine disorder affecting approximately 8-13% of women of reproductive age (1,2). Although there are different criteria for defining the disorder, more important is the need to clearly define the phenotype of the patient being considered. PCOS phenotypes can generally be categorized into four types: a) phenotype A, demonstrating evidence of hyperandrogenism (HA), either clinical (such as hirsutism) and/or biochemical (i.e., hyperandrogenemia), ovulatory dysfunction (OD), often reflected by menstrual dysfunction, and polycystic ovarian morphology (PCOM); b) phenotype B, which includes HA and OD, but not PCOM; c) phenotype C, including HA and PCOM, but not OD; and d) phenotype D, with OD and PCOM, but not HA. Metabolically, phenotypes A and B (also called "classic PCOS") behave similarly, with approximately 75% to 85% demonstrating insulin resistance (IR) and some form of metabolic dysfunction. These individuals have an increased risk of glucose intolerance and diabetes. PCOS women with phenotype D do not demonstrate overt evidence of androgen excess, have little evidence of metabolic dysfunction, and are at low risk of developing disorders of glucose intolerance. Patients with phenotype C (often referred to as "ovulatory PCOS") have levels of metabolic dysfunction and risk that are somewhat less than those with the classic forms of PCOS but still measurably higher than those of control subjects or nonhyperandrogenic PCOS women (3,4). Asymmetric dimethylarginine (ADMA) is a regulator of nitric oxide (NO) production that inhibits NO synthesis (5,6). Recent observational studies revealed that ADMA has a significant effect on systemic vascular resistance (7). Plasma levels of ADMA are apparently enhanced in the atherosclerotic process, and it has been established that ADMA is an independent determinant of intima-media thickness (IMT) (8).
Elevated levels of ADMA stimulate vasoconstriction and platelet adhesion, which facilitate the proatherogenic effect of the molecule (6,9). Fibrinogen has been considered an independent risk factor for cardiovascular disease (10), and a number of studies have linked higher plasma fibrinogen concentrations with an increased risk of cardiovascular disease (11). Women with PCOS have elevated levels of fibrinogen (12). The L-arginine derivative NO is a potent vasodilator that protects against atherogenic states by inhibiting thrombocyte aggregation, smooth muscle proliferation and inflammatory mediator production (13). Hyperinsulinemia and IR lead to decreased release of NO and increased production of fibrinogen, both of which reflect atherosclerosis (14). Inflammation and oxidative stress in PCOS are also manifested by increases in high-sensitivity C-reactive protein (hs-CRP), IL-6 and chitotriosidase (ChT). This inflammatory state accompanies another element of PCOS pathogenesis, namely disordered insulin action and release. Rising insulin levels increase androgen production through the activity of ovarian theca cells (18). Patients with PCOS exhibit increased vascular IMT and decreased flow-mediated dilatation (1). Guzick et al. stated that women with PCOS have greater carotid IMT than healthy individuals (19). Recent parameters of vascular reactivity, such as ADMA, NO and fibrinogen, may provide complementary data for the evaluation of vascular complications of this disorder (8). PCOS is associated with low-grade systemic inflammation, as evidenced by the elevation of multiple markers of inflammation, such as IL-6, ChT, hs-CRP and white blood cell count, which also represent endothelial dysfunction and increased oxidative stress (17)(18)(19)(20). In the present study, we aimed to compare fibrinogen, hs-CRP, ADMA and NO levels as laboratory parameters of vasoreactivity in patients with PCOS and healthy women, and to evaluate the possible relations between these parameters and patients' hormone levels, to predict their possible cardiovascular effects.

Patients And Methods

Power analysis was performed using the mean values from the paper by Rashidi et al (21). The minimum sample size was calculated to be 29 patients in each group with 90% power and a 5% alpha error. Thirty non-obese patients with PCOS and 30 age- and BMI-matched non-obese women with normal ovulating cycles were enrolled in this cross-sectional study with a comparison group. Diagnosis of PCOS was based on the revised Rotterdam criteria (PCOS Consensus Workshop 2004) (22). Patients meeting the following criteria were included: amenorrhea or oligomenorrhea (<6 cycles per year), polycystic ovaries on ultrasonographic examination (presence of 12 or more follicles in each ovary measuring 2-9 mm in diameter, and/or increased ovarian volume (>10 mL)), and clinical and/or biochemical evidence of HA. None of the participants had received any medication during the previous 3 months. Patients with pregnancy, liver or renal dysfunction, smoking, obesity, diabetes mellitus, hypertension, hyperprolactinemia or thyroid disease were excluded. The study was approved by the ethical review committee of Dicle University, Diyarbakır, Turkey. Written informed consent was obtained from all participants. All the participants were examined by the same physician, and ultrasonographic examination was performed by the same radiologist. BMI was calculated as weight in kilograms divided by the square of height in meters (kg/m²).
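The sample-size calculation described above can be sketched as follows with statsmodels; the effect size here is a placeholder (the means and SDs taken from Rashidi et al. are not reproduced in this paper), chosen so that the result lands near the reported minimum of 29 per group.

```python
from statsmodels.stats.power import TTestIndPower

effect_size = 0.85  # placeholder Cohen's d; the study derived its value from Rashidi et al.
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.90, alternative="two-sided"
)
print(f"n per group = {n_per_group:.1f}")  # ~30, in line with the reported minimum of 29
```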
Blood samples were obtained after a 12-hour fasting period. Serum NO levels were measured using a colorimetric method based on the Griess reaction, in which nitrite is reacted with sulphanilamide and N-(1-naphthyl)ethylenediamine to produce an azo dye that can be detected at 540 nm. This was carried out after enzymatic reduction of nitrate to nitrite with nitrate reductase. ADMA was measured by high-performance liquid chromatography (HPLC) according to the method described by Chen et al. (23). Fibrinogen concentration was measured in heparinized plasma, and hsCRP in serum, by nephelometric methods on a BN ProSpec (Siemens) analyzer. Biochemical variables were analyzed by photometric methods on a Siemens Advia 1800 device.

Statistical Analysis: Statistical calculations were performed using the SPSS for Windows computer program (release 21.0; SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test was used to determine whether variables were normally distributed. Normally distributed variables (age, BMI, FSH and fibrinogen) were analyzed with Student's t-test. Non-normally distributed variables (LH, LH/FSH ratio, ADMA, NO, hsCRP) were analyzed with the Mann-Whitney U test. Data are expressed as mean ± standard deviation or median (minimum-maximum) according to normality. Pearson or Spearman correlation coefficients (depending on normality) were calculated to evaluate relationships between the various parameters studied. A P-value < 0.05 was considered statistically significant.

Results

The mean age of patients and control subjects was similar (p=0.560). Patients with PCOS had slightly higher BMI than control subjects, but this difference did not achieve statistical significance (p=0.505), and there was no significant difference between the groups in FSH levels (p=0.908), as shown in Table 1. LH levels of the patients were significantly higher than those of the control subjects (p<0.005). The mean ADMA levels were significantly higher in the patient group than in the control group (1.22 vs 0.52, respectively). Plasma concentrations of fibrinogen and hsCRP were significantly higher in patients with PCOS than in the control group (p<0.001 for both). As expected, NO concentrations were significantly lower in the patient group than in the control group (p<0.001). A significant positive correlation was observed between ADMA and NO levels in the patient group (r=0.916, p<0.001). Also, there were significant correlations between ADMA and fibrinogen (r=0.838, p<0.001) and between ADMA and hsCRP concentrations (r=0.889, p<0.001). Fibrinogen and NO, and NO and hsCRP levels were significantly correlated in the patient group (r=0.783 and r=0.799, respectively; p<0.001 for both) (Table 2) (Figures 1 to 6). In the control group, there was a positive correlation between fibrinogen and age (r=0.404, p=0.027) and a negative correlation between NO and FSH (r=-0.411, p=0.024) (Table 3).
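A minimal sketch of the normality-gated testing workflow described in the Statistical Analysis section above (Shapiro-Wilk to choose between Student's t-test and the Mann-Whitney U test, and Pearson vs Spearman for correlations); the arrays are placeholder data, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pcos = rng.normal(1.2, 0.3, 30)     # placeholder ADMA-like values, PCOS group
control = rng.normal(0.5, 0.2, 30)  # placeholder values, control group

# Normality gate: parametric tests only if both groups pass Shapiro-Wilk
normal = stats.shapiro(pcos).pvalue > 0.05 and stats.shapiro(control).pvalue > 0.05
stat, p = stats.ttest_ind(pcos, control) if normal else stats.mannwhitneyu(pcos, control)
print(f"group comparison ({'t-test' if normal else 'Mann-Whitney U'}): p = {p:.3g}")

# Correlations: Pearson for normal data, Spearman otherwise (reusing the same
# normality flag here purely for brevity)
no = rng.normal(40, 8, 30)          # placeholder NO-like values
r, p_corr = (stats.pearsonr if normal else stats.spearmanr)(pcos, no)
print(f"correlation: r = {r:.3f}, p = {p_corr:.3g}")
```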
Discussion

PCOS is a complex endocrine disorder with hormonal and metabolic components. Although the hormonal disturbances of PCOS are relatively well established, discrepancies still exist regarding the pathogenesis of the metabolic component. We determined a positive correlation between ADMA and fibrinogen or hsCRP, and an inverse correlation between ADMA and NO, in PCOS patients. To the best of our knowledge, this is the first report to examine the relations between ADMA and these vasoreactivity parameters in patients with PCOS (29). There was serious heterogeneity in age distribution and a wide range of BMI in the patient group of a previous report (30). Obesity has additional deleterious effects on glucose tolerance and endothelial function in patients with PCOS (31). Our study group consisted of relatively young (<26 years), non-smoking and non-obese individuals, and we used an age- and BMI-matched control group to eliminate the negative effects of age, smoking and obesity on hormonal and metabolic parameters. Elevated LH concentrations suggest the presence of HA, which results from increased synthesis and/or decreased excretion of androgens or androgen precursors (32). Although HA is regarded as a contributor to endothelial dysfunction, there are controversial results with respect to the effects of androgens on vascular function (33,34). Our results were in line with previous reports of higher levels of LH and an increased LH/FSH ratio in patients with PCOS compared with healthy controls (10). Additionally, the relations of LH and FSH with ADMA levels were nonsignificant. A growing body of evidence has linked elevated fibrinogen levels with an increased risk of cardiovascular disease (11). Some recent reports indicated that an increased fibrinogen level is a marker of endothelial dysfunction (18,35). Several experimental and epidemiologic studies, in both human and animal models, have demonstrated an association between fibrinogen and atherosclerosis-related disorders including hypertension, diabetes mellitus, myocardial infarction and stroke (36). As expected, we observed significantly higher levels of fibrinogen in patients with PCOS and significant correlations of fibrinogen with ADMA and NO levels (37). Preliminary data suggest that, in the absence of obesity and smoking, elevated hsCRP is likely to be associated with the low-grade inflammation that results in endothelial dysfunction and atherosclerosis in patients with PCOS (38). Currently available data from experimental studies document that NO regulates blood flow and has multiple endocrinologic and metabolic functions in human physiology, such as ovulation, pubertal maturation, embryogenesis and the timing of menopause, as well as potentiating tissue responsiveness to insulin (24,25,39). Decreased NO production by endothelial cells contributes to elevated blood pressure and systemic vascular resistance (40). In our study, patients had significantly lower NO levels than control subjects, consistent with recent reports. ADMA is a competitive antagonist of NO synthesis and has been proposed to be implicated in disorders related to NO dysfunction, including DM, chronic kidney disease, congestive heart failure and atherosclerosis (41). ADMA is considered an independent marker of cardiovascular morbidity and mortality (33). Charitidou et al. stated that ADMA has the advantage of predicting the atherosclerotic process and the development of cardiovascular events not attributable to traditional risk factors (7). However, Pamuk et al. and Demirel et al. determined similar levels of ADMA in patients with PCOS and control subjects (30,42). Current clinical and experimental data show that even a slight increase in ADMA concentrations, particularly in those with PCOS, results in a higher risk of cardiovascular events that makes them candidates for ischemic vascular diseases (43). We failed to demonstrate a correlation between NO and LH levels. Mather et al. determined that HA or hypergonadotrophinemia has no direct influence on NO concentrations (44). We observed a close relation between hsCRP and ADMA, and between fibrinogen and ADMA. Our results were in agreement with those reported by Heutling et al. and Krzyzanowska et al.
(33,45). Also, NO levels were significantly but inversely correlated with ADMA, fibrinogen and hsCRP. The present study has some limitations. The major limitation was the small sample size due to the strict exclusion criteria: obese patients, smokers, older patients and those who had received any medication in the previous 3 months were not enrolled. Second, more than a single-point measurement might have enhanced the significance of the results. Third, long-term follow-up is needed to demonstrate the possible cardiac effects of these inflammatory changes. Finally, examining the effect of medical therapy on these parameters may aid better understanding.

Conclusion

We determined higher circulating ADMA and fibrinogen and lower NO levels, which are associated with a worse metabolic profile that may reflect endothelial dysfunction in PCOS patients.

Data are given as mean ± standard deviation or median (minimum-maximum) according to normality.
Figure 1. Relationship between NO and ADMA for the patient group.
Figure 2. Relationship between fibrinogen and ADMA for the patient group.
Figure 3. Relationship between hsCRP and ADMA for the patient group.
Figure 4. Relationship between fibrinogen and NO for the patient group.
Figure 5. Relationship between hsCRP and NO for the patient group.
Figure 6. Relationship between hsCRP and fibrinogen for the patient group.
Clinical performance of a point-of-care Coccidioides antibody test in dogs

Background Point-of-care (POC) Coccidioides antibody assays may provide veterinarians with rapid and accurate diagnostic information. Objectives To determine the agreement of a POC lateral flow assay (LFA), sona Coccidioides (IMMY, Norman, Oklahoma), with the current diagnostic standard, the immunodiffusion assay (agar gel immunodiffusion [AGID]; Coccidioidomycosis Serology Laboratory, University of California, Davis, California). Animals Forty-eight sera specimens from 48 dogs. Methods Sera specimens were collected from client-owned dogs that had a clinical suspicion for coccidioidomycosis. Animals were classified as Coccidioides antibody-positive (n = 36) based on a positive AGID or Coccidioides antibody-negative (n = 12) based on a negative AGID. The performance of the LFA was determined by comparing its results to AGID results. Results The LFA demonstrated agreement in 32 of 36 Coccidioides antibody-positive specimens and 12 of 12 Coccidioides antibody-negative specimens, resulting in a positive percentage agreement of 88.9% (95% confidence interval [CI], 74.7-95.6%) and a negative percentage agreement of 100% (95% CI, 75.8-100%) as compared to AGID. A receiver operating characteristic curve was constructed, and the area under the curve was 0.944 (CI, 0.880-1.000). Conclusion and Clinical importance This LFA is a rapid alternative to the traditional AGID. The LFA provides excellent predictive value for positive results. Positive agreement was lower in dogs with low AGID titers; therefore, confirmatory testing is recommended if a high index of suspicion exists.

Coccidioides spp. are endemic to regions including Arizona, New Mexico, Texas, and northern Mexico. 1 The dimorphic fungi exist in the soil as mycelium and can lead to infection in a wide range of mammals when the arthroconidia become aerosolized in dust and are inhaled. Once inhaled, spherules form and establish an infection that can lead to a wide range of clinical presentations. Coccidioidomycosis is most commonly characterized by respiratory infection that can range from subclinical to severe disease. 2 Disseminated infection occurs in approximately 25% of individuals and can involve the skeletal system, central nervous system, eyes, skin, lymphatic system, and pericardium. 3 Cases of coccidioidomycosis have increased dramatically in the Southwest United States over the past decade, with a record number of cases in humans diagnosed in 2018 (the most recent year with data available). 4 Case numbers in veterinary medicine are not widely available for comparison, but an increase in newly diagnosed cases of coccidioidomycosis in dogs was noted at our institution, with a peak in 2018 (unpublished data). The increase in coccidioidomycosis cases has been attributed to climate changes, increased population in endemic regions, increased soil disturbance and construction activity, and increases in disease awareness and testing. 5 Detection of anti-Coccidioides antibodies provides the laboratory basis for diagnosis of coccidioidomycosis in most cases. Organism detection by cytology, histopathology, or culture is considered the gold standard diagnostic method. These methods, however, are invasive and insensitive, and fungal culture poses a risk to laboratory personnel. The serologic reference standard in dogs is the agar gel immunodiffusion (AGID) assay. This assay's sensitivity and specificity at selected institutions approach 100%. 6,7
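The percent-agreement figures and 95% CIs quoted above are numerically consistent with Wilson score intervals on the underlying proportions; the sketch below reproduces them (the choice of Wilson intervals is our inference, since the paper does not name the interval method).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

print(wilson_ci(32, 36))  # PPA: ~(0.747, 0.956), matching 88.9% (74.7-95.6%)
print(wilson_ci(12, 12))  # NPA: ~(0.758, 1.000), matching 100% (75.8-100%)
```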
However, AGID performance varies among institutions, and false positives and false negatives have been reported in other geographical locations. 8,9 The AGID can detect immunoglobulin M (IgM) against the protein tube precipitin (TP) antigen or immunoglobulin G (IgG) against the protein complement fixation (CF) antigen. 10 Performance of the AGID is complex, labor-intensive, and expensive, and incubation times of up to 1 week are required to provide results in some cases. A Coccidioides antibody LFA has been evaluated in a cohort of dogs residing in Arizona. 12 The LFA results were compared to AGID results submitted to 1 of several reference laboratories, and an overall agreement of 87.5% was noted. 12 Here, we aim to assess the diagnostic performance of the sona Coccidioides LFA as compared to a standardized AGID performed in a single reference laboratory, in dogs suspected of having coccidioidomycosis and residing in a wider geographic area.

Sera specimens

Sera specimens from client-owned dogs were collected both prospectively and from stored specimens submitted to the UC Davis Coccidioidomycosis Laboratory for Coccidioides antibody testing. If a sufficient volume of serum remained after AGID, the specimens were stored at −80°C until further analysis. A cohort was chosen for LFA analysis using convenience sampling. Complete medical records were not available for patients whose serum was submitted to the UC Davis Coccidioidomycosis Laboratory by veterinarians practicing outside of our institution.

Agar gel immunodiffusion performance

The AGID assays were performed as previously described by a single laboratory, the UC Davis Coccidioidomycosis Laboratory (Davis, California). 10 Samples were placed in a well within the agar plate, and the corresponding purified antigen (TP or CF) was placed in an opposing well. The plates were incubated for up to 96 hours and monitored daily for the development of an antigen-antibody precipitation line. If a precipitation line was noted, quantitative immunodiffusion was performed to determine the IgG titer.

Lateral flow assay performance

The LFA was performed according to the manufacturer's instructions by a single investigator (KR). The kit was brought to room temperature for 30 minutes before testing. The specimen was diluted 1:441 in specimen diluent using microcentrifuge tubes. Next, 100 μL of the diluted specimen was placed into a flat-bottom 96-well plate. The LFA test strip tip was inserted into the well containing the specimen. The plate was then incubated at room temperature for 30 minutes. Concurrently, a positive control specimen (manufacturer supplied) and a negative control (specimen diluent only) were assayed. Test results were recorded as negative (red control line present), positive (red control and test lines present), or invalid (absence of the control line regardless of test line presence).

DISCUSSION

We found a high degree of agreement between the LFA for detecting Coccidioides antibodies and the AGID in dogs suspected of having coccidioidomycosis. Our results show similar overall agreement to a study assessing this LFA in dogs residing in Arizona. 12 A small number of negative LFA tests with positive AGID results were noted in both studies, and all but 1 discordant result were associated with AGID titers ≤1:4. This observation suggests that the LFA may not be as sensitive when Coccidioides antibody titers are low, such as early in an infection. 12
However, in our study, 5 patient specimens were determined to have AGID titers ≤1:4 and had corresponding positive LFA results. In our study, no discordant results with a positive LFA and a negative AGID were noted, which differs from a previous study that found that 15% of the dogs with negative AGID results had positive LFA results. 12 In the previous study, 2 of the discordant results were from dogs that had previously been diagnosed with coccidioidomycosis and were receiving antifungal treatment. The other 2 discordant results were from dogs that had clinical disease highly suspicious for coccidioidomycosis; the attending clinicians recommended convalescent AGID titers to assess for seroconversion, but this was not pursued by the clients. 12 The cause of this difference in discordant results with a positive LFA and negative AGID is unknown. The AGID assay performance in the previous study was not standardized; therefore, differences in assay performance among reference laboratories may have been present, making comparisons between the studies difficult. Our results show similar overall agreement to preliminary studies conducted in people diagnosed with coccidioidomycosis. [15][16][17] However, a more recent study assessing the LFA performance on specimens from people early in the course of infection found a sensitivity of only 30% to 40% compared to EIA or AGID. 18 The chronology of clinical signs was not assessed in our study, and LFA performance in dogs early in the course of coccidioidomycosis should be further evaluated. The major limitation of the AGID assay is the turnaround time between collection of patient specimens and availability of diagnostic results. One study conducted in a reference laboratory for human medicine determined that implementation of LFA screening decreased turnaround time from up to 10 days to <24 hours. 17 A further consideration for antibody assays is potential cross-reactivity with other fungal infections, such as histoplasmosis, which also is endemic in our geographic area. In conclusion, the LFA has a positive percentage agreement of 89% and a negative percentage agreement of 100% compared to the AGID. This assay may allow for the swift initiation of treatment, decrease the need for more invasive and costly diagnostic testing, and improve antimicrobial stewardship by preventing empirical antifungal treatment while waiting for diagnostic results. Further assessment of this assay is warranted early in the course of coccidioidomycosis and to determine its utility in therapeutic monitoring.

ACKNOWLEDGMENT

No funding was received for this study.

CONFLICT OF INTEREST DECLARATION

Diagnostic test strips were a generous donation from IMMY; however, IMMY was not involved in study design, the acquisition of data, the preparation of this manuscript, or the decision to publish the results.
Complex Variations in Branching Pattern of the Axillary Artery and Hands with the Persistent Median Artery

We report bilateral multiple variations in the branching pattern of the axillary artery and the superficial palmar arch of the hand in an 84-year-old Korean female cadaver. First, we identified an aberrant trunk with a high bifurcation of the deep brachial artery from the left axillary artery. Second, a persistent median artery accompanied the median nerve and formed the superficial palmar arch in the left hand. Third, a common trunk originated from the second part of the right axillary artery and divided into the lateral thoracic, subscapular, and circumflex humeral arteries, respectively. Finally, the superficial palmar branch of the radial artery lay superficial to the thenar muscles and gave rise to a common trunk of the princeps pollicis and radialis indicis arteries in both hands. This case report alerts clinicians and anatomists to the possibility of concurrent complex bilateral variations in the upper limb.

INTRODUCTION

The axillary artery extends from the lateral border of the first rib to the lower margin of the teres major muscle and continues as the brachial artery. It is divided into three parts by the pectoralis minor and gives off six major branches [1]. In general, the superior thoracic artery is the only branch of the first part. The second part has two branches, the thoracoacromial and lateral thoracic arteries. The third part gives off three branches: the anterior and posterior circumflex humeral arteries and the subscapular artery. Nevertheless, numerous studies have reported variations in the branching pattern of the axillary artery [2][3][4]. On the palmar side of the hand, the ulnar artery ends by anastomosing with the superficial palmar branch of the radial artery (SPBRA). This anastomosis forms the superficial palmar arch, which gives rise to three common palmar digital arteries. The radial artery, meanwhile, mainly continues to the deep palmar arch by combining with the deep branch of the ulnar artery. Before continuing into the deep palmar arch, the radial artery gives off two branches, the princeps pollicis and radialis indicis arteries [1]. Various anomalous arterial patterns of the hand have been reported, including the presence of a persistent median artery (PMA) [5][6][7] and displacement of the superficial palmar arch [8,9]. Unusual arterial patterns are more vulnerable to iatrogenic injury during vascular transplantation or reconstruction procedures and may interfere with reliable interpretation of angiographic images [10]. In this case report, a rare case of bilateral yet asymmetric variant arteries of the upper limb is described, and their embryological background and clinical significance are discussed.

CASE REPORT

During a routine dissection in the gross anatomy class, the upper limbs of an 84-year-old female cadaver were exposed, and the arterial branches were identified. This female cadaver had no specific medical history in either upper limb. Of the 20 cadavers dissected from 2017 to 2019, it was the only case that had bilateral arterial variations in both arms and hands concurrently. Photographs were taken with a digital camera and illustrations were made with Adobe Photoshop (Adobe System Inc., San Jose, CA, USA). Herein, we describe the arterial variations in the following order: the left axilla, the right axilla, the PMA in the left hand, and the large, subcutaneously coursing SPBRAs in both hands.
Left axillary artery with high origin of the deep brachial artery

An aberrant common trunk arose at the third part of the left axillary artery and divided into four branches: the anterior and posterior circumflex humeral arteries, the subscapular artery, and the deep brachial artery (Fig. 1). The deep brachial artery traversed the radial groove accompanying the radial nerve. The axillary artery occupied its typical position among the brachial plexus and continued as the brachial artery, which showed normal anatomy in the arm.

Right axillary artery variation with a common trunk at the second part

A large common trunk arose at the second part of the right axillary artery and gave off four branches, including the lateral thoracic artery, the subscapular artery, and a common stem of the posterior and anterior circumflex humeral arteries (Fig. 2). The continuing axillary artery did not give off any other branches at the third part and continued as the brachial artery.

Persistent median artery in the left hand

The PMA emerged from the anterior interosseous artery at the middle of the left forearm and coursed within the neurovascular sheath of the median nerve. The PMA traversed the carpal tunnel with the median nerve without giving off any other branches and terminated by anastomosing with the superficial palmar arch (Fig. 3).

Aberrant course of the SPBRA in both hands

The SPBRAs in both hands lay superficial to the abductor pollicis brevis (APB) to form the superficial palmar arch, unlike the normal course, which runs between the opponens pollicis and the APB (arrowheads in Figs. 3 and 4). In addition, the SPBRAs gave rise to a common trunk of the princeps pollicis and radialis indicis arteries, which normally arise from the radial artery near the anatomical snuffbox immediately before it continues into the deep palmar arch (arrows in Figs. 3 and 4).

DISCUSSION

In the present case, bilateral yet asymmetric common trunks were observed, in the second part of the right axillary artery and in the third part of the left axillary artery, respectively. In addition, the SPBRAs ran superficial to the thenar muscles in both hands and gave off direct branches, including the princeps pollicis and radialis indicis arteries. Moreover, the PMA travelled with the median nerve and formed the superficial palmar arch in concert with the radial and ulnar arteries in the left hand. These multiple variations encountered in a single cadaver are thought to be very rare compared with previously reported anomalies of the upper limb. In general, anatomical variations of the axillary artery are relatively common, and there have been efforts to sort out the branching patterns of the axillary artery, as well as the subclavian artery [3,4,11]. Astik et al. reported that the incidence of variations in the arterial branching pattern of the upper limb was 62.5%, with a common trunk from the third part of the axillary artery in 14.7% of cases [2].

Fig. 2. Photograph of the right axillary artery. A large common trunk arose from the second part of the axillary artery and divided into four branches. 1, Axillary artery; 2, Superior thoracic artery; 3, Thoraco-acromial artery; 4, Aberrant common trunk; 5, Lateral thoracic artery; 6, Subscapular artery; 7, Thoracodorsal artery; 8, Circumflex scapular artery; 9, Common trunk of the circumflex humeral arteries; 10, Brachial artery.
Bilateral double axillary arteries dividing at the second part, with a high origin of the deep brachial artery at the third part, have also been reported [4]. Nevertheless, the present case differs from previous reports in that the common trunks of both axillary arteries arose around the pectoralis minor and the distribution pattern was asymmetric. So far, there have been many reports of the PMA, but its reported incidence is highly variable, ranging from 1.1 to 20% [5,6,12]. This may be due to differences in the number of cadavers investigated and in the methods of investigation. In addition, Feigl and colleagues showed that the PMA extends to the palm and forms a superficial palmar arch with the radial and ulnar arteries in only 0.4% of cases [10]. In the present case, the PMA was surrounded by the sheath of the median nerve and traversed the carpal tunnel, and a bifid median nerve was not observed, so it is presumed that there was no pain or other symptoms. However, some reports indicate that the PMA can cause pain and other symptoms, such as carpal tunnel syndrome [14]. The patterning of arterial branches in the upper limb commences when the capillary plexus transforms into the axial artery, which gives rise to the axillary, brachial and anterior interosseous arteries from proximal to distal [4]. The variations of the axillary artery in the present case may have arisen spontaneously through an unusual choice of paths in the primitive capillary plexuses during development. On the other hand, the anterior interosseous artery gives rise to the median, ulnar, and radial arteries [13]. The median artery usually regresses after the 8th week of gestation as the ulnar and radial arteries form during the late fetal stage [1,9]. However, the median artery can occasionally persist after birth, passing along the median nerve toward the digits of the hand. In the present case, the median artery may have failed to regress and formed a complete superficial palmar arch with the other arteries, as a radio-mediano-ulnar type [10]. Also, the superficial course of the SPBRA is related to a hemodynamic mechanism between the deep and superficial arteries on the palmar side of the hand [9]. The unusual course of the radial artery appears to be due to chance variations in these hemodynamic factors, and this variation may lead to regression of the deep part and persistence of the SPBRA. Since our case did not include dissection of the deep palmar arch, we could not analyze its size and morphology. Detecting arterial variations has important clinical implications for surgical interventions [2,4,11]. Knowledge of unusual branching patterns of the axillary artery is necessary when reconstructing the axillary artery after trauma, catheterizing or cannulating the axillary artery, and treating axillary artery thrombosis [4]. The anatomical relationship of the SPBRA is also important. Most cases describing aberrant courses of the SPBRA are reported from cadaveric studies; therefore, it may be inferred that abnormal positioning of the SPBRA does not cause clinical symptoms and that no intervention is required. However, if the thenar compartment is injured, a superficially coursing SPBRA could lead to severe bleeding, and it can also cause partial ischemia of the fingers when grasping objects, by transmitting external compression [15].
Also, the PMA can cause anterior interosseous nerve syndrome when the median artery penetrates the median nerve, and it can also cause carpal tunnel syndrome [7,14]. The present study describes the concurrent occurrence of aberrant common trunks of the axillary arteries, a PMA, and superficially coursing SPBRAs. To date, despite numerous reports on each topic, there have been no cases in which these three variations were reported together in a single specimen. Thus, reports of such complicated variations may serve as a useful guide for hand surgeons and clinical anatomists. Knowledge of these unusual arterial variations of the upper limb may aid diagnosis involving the vasculature of the axillary region and hand.
In their letter to EHP, McEwen and Renner (2006) dismissed the findings of Swan et al. (2005), who reported a significant relationship between a measure of anogenital distance (AGD) in boys and levels of phthalate metabolites in their mothers' urine during pregnancy. AGD is a sexually dimorphic index that, on average, is twice as great in males as in females, so it serves as a marker of proper male development. McEwen and Renner based their argument on an idiosyncratic form of logic. They asserted that "All male infants evaluated in the study appeared normal … there is no evidence for potential adverse effect in the test population. … no conclusion can be drawn whether the reported values are normal or abnormal. The range of AGD values … likely represents typical biologic variation that would be expected to occur among normal study subjects." McEwen and Renner seem to be wholly unfamiliar with the meaning of a modest or even a slight shift in the mean of an index that reflects the distribution of susceptibility in a population. I have pointed out (Weiss 1988) that even a 5-point (5%) reduction in mean intelligence quotient in a population of 100 million increases the number of individuals classified as retarded from 6 million to 9.4 million. It is this kind of relationship that eventually prompted the Centers for Disease Control and Prevention (CDC) to lower its definition of elevated lead risk levels in blood, set at 40 µg/dL in 1970, to 10 µg/dL in 1991 (CDC 1991). Bellinger (2006) put it this way:

A small change in the mean signals predictable accompanying changes in the proportions of individuals in the source population who fall into the tails of the distribution, where individuals who meet diagnostic criteria are found. Thus, the importance of a shift in group mean lies not in what it indicates about the average change among members of the study sample, but what it implies about the changes in the tails of the distribution in the population from which the study sample was drawn.

He noted, based on Rose (1981), that in a population with a prevalence of clinically defined hypertension of 15%, a 5-mm reduction in mean systolic blood pressure would result in a 33% decrease in prevalence (Bellinger 2006). Epidemiologists recognize that a slight decrease in mean blood pressure in a population is translated into a major decrease in the incidence of serious cardiovascular events such as heart attacks.
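The following minimal sketch makes the tail-shift argument concrete under a stylized normal model, assuming IQ ~ Normal(100, 15) and a diagnostic cutoff of 70. These parameters are conventional choices, not figures taken from the letter itself; the letter's published numbers evidently rest on different distributional assumptions, so only the direction and rough magnitude of the effect should be expected to match.

# Illustrative only: a normal model of the population-shift argument made above.
# Assumes IQ ~ Normal(100, 15) and a diagnostic cutoff of 70; these parameters
# are conventional assumptions, not taken from the letter.
from scipy.stats import norm

POP = 100_000_000  # population size used in the letter's example
MEAN, SD, CUTOFF = 100.0, 15.0, 70.0

def millions_below_cutoff(mean):
    """People (in millions) falling below the diagnostic cutoff."""
    return POP * norm.cdf(CUTOFF, loc=mean, scale=SD) / 1e6

baseline = millions_below_cutoff(MEAN)       # ~2.3 million below the cutoff
shifted = millions_below_cutoff(MEAN - 5.0)  # ~4.8 million after a 5-point downward shift
print(f"baseline: {baseline:.1f}M, after 5-point shift: {shifted:.1f}M, "
      f"ratio: {shifted / baseline:.2f}x")

Under these assumptions, a 5-point downward shift in the mean roughly doubles the number of people below the cutoff; the same computation with blood pressure parameters reproduces the direction of the Rose (1981) hypertension example quoted above.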
We already know that shortened AGD at birth is one element, the leading edge, as it were, of the "phthalate syndrome" in rats, which is marked by testicular pathology, reduced spermatogenesis, hypospadias, and cryptorchidism, a compilation of signs indicating disordered male development that Sharpe (2001) and others have noted to be on the increase in industrialized nations. An almost imperceptible shift to a lower mean AGD in the human male would foreshadow a heightened prevalence of reproductive system dysfunction. Is that the connection now emerging in the clinic? If McEwen and Renner's (2006) criteria for "normal" were to govern the way in which we define the health risks of lead exposure, we would be basing our criteria on the number of children brought into hospital emergency rooms with lead poisoning rather than on the threats it poses to their neurobehavioral development. No parent, and no community, would tolerate such a definition these days.

Anogenital Distance: McEwen and Renner Respond

In his letter, Weiss misrepresents the arguments presented in our letter (McEwen and Renner 2006) regarding the study of Swan et al. (2005). We pointed out that a value for "normal" anogenital distance (AGD) is not known and that without this information, "abnormal" AGD values cannot be determined. Swan et al. (2005) measured AGD in a limited number of subjects (134 boys) who varied widely in age, height, and weight. This small sample size is inadequate to determine a normal AGD value, and there are no historical control data for AGD in male human infants using a definition of AGD comparable to the one used by Swan et al. Although the significance of AGD values in humans, if any, is unknown, it is clear that a meaningful study with AGD as the end point of interest requires knowledge of normal values as a prerequisite. Further, the lack of knowledge of normal AGD values is only one of the significant limitations of the study by Swan et al. (2005); others were identified in our previous letter (McEwen and Renner 2006).

Trasande et al. concluded that prenatal methylmercury (MeHg) exposure is reducing children's IQs (intelligence quotients), costing $8.7 billion/year. They achieved this high estimate a) by assuming that IQ reductions occur at MeHg exposures near or even below the 5.8 µg/L reference dose (RfD), although there is no evidence for IQ reductions even at much higher exposures, and b) by overstating by nearly a factor of three the fraction of newborns with MeHg exceeding the RfD. I believe that their analysis is flawed, invalid, and not appropriate as an input to policy decisions. Trasande et al. assumed that 10% of newborns are exposed prenatally to MeHg exceeding the RfD. However, the appropriate value is 3.6%. Trasande et al. made two errors (a worked sketch of the arithmetic follows this exchange). First, they used a lower RfD than 5.8 µg/L, based on the observed enrichment of MeHg in umbilical cord blood relative to maternal blood. However, the current RfD already accounts explicitly for this enrichment through an uncertainty factor of 3.15 applied to the benchmark dose lower limit [U.S. Environmental Protection Agency (EPA) 2001].
Second, they assumed that women 16-49 years of age measured during 1999-2000 accurately represented MeHg levels in pregnant women. National Health and Nutrition Examination Survey (NHANES) data collected during 1999-2002 (Jones et al. 2004), available before Trasande et al. submitted their manuscript, show the 95th percentile MeHg level for pregnant women to be 32% below Trasande et al.'s value. If any MeHg exposure above the RfD reduced IQ, there would still be cause for concern. However, there is no evidence for IQ reductions even at exposures several times the RfD. Previous studies in the Seychelles Islands (Myers et al. 2003) and New Zealand did not find IQ reductions at any MeHg exposure. A study in the Faroe Islands did not measure IQ. Many children in these studies had prenatal MeHg exposures exceeding 10 times the RfD. The claim of IQ reductions in Americans is even weaker because Americans' MeHg exposures are far lower. Of 629 pregnant women measured by NHANES, the highest exposure was 3.7 times the RfD (Centers for Disease Control and Prevention 2005). Among those exceeding the RfD, 75% were below twice the RfD. Trasande et al. cited results from the Faroe Islands to claim IQ reductions, but this study is less compelling than the Seychelles study (Myers et al. 2003b) for assessing Americans' risks: a) the Seychellois are exposed to MeHg through ocean fish, similar to Americans, whereas the Faroese are exposed through whale meat (Myers et al. 2003b); b) the Seychellois are ethnically diverse, but the Faroese are homogeneously Scandinavian (Rice et al. 2003); and c) the Seychelles study used hair MeHg to measure exposure, and the Faroes study used cord blood. Hair MeHg has been calibrated against fetal brain levels, but cord blood has not (Cernichiari et al. 1995; Myers et al. 2003a). Despite the advantages of the Seychelles study, Trasande et al. dismissed it, claiming that the National Research Council (NRC 2000) "opined that the most credible of the three prospective epidemiologic studies was the Faroe Islands investigation." In reality, referring to all three studies, the NRC (2000) concluded that "each of these studies was well designed and carefully conducted." Nevertheless, the NRC "concluded that a well-designed study with positive effects provides the most appropriate public-health basis for the RfD." The NRC thus excluded the Seychelles study not because of the quality of the study but because the study found that MeHg did not cause any harm. Trasande et al. also made other errors:
• They claimed that the New Zealand study reported IQ reductions, citing Kjellstrom et al. (1989). However, they omitted Crump et al.'s (1998) reanalysis, coauthored with Kjellstrom, which superseded previous reports and found no IQ reduction.
• They claimed that the Seychelles study had only half the statistical power of the Faroes study. The studies actually have similar power (Myers et al. 2003; NRC 2000).
• They claimed the NRC concluded that MeHg reduces IQs even at exposures lower than the RfD. However, the NRC cautioned that the cohort studies were incapable of assessing effects of exposures near the RfD, because hardly any children had such low MeHg exposures (NRC 2000).
The weight of the evidence indicates that MeHg, even at exposures substantially greater than the highest U.S. levels, does not reduce children's IQ. The evidence against IQ reductions is particularly strong for MeHg exposures from fish. Trasande et al. relied on mistaken assumptions regarding exposures to and effects of MeHg, and misinterpreted or omitted contrary evidence.
Therefore, I consider their analysis to be fundamentally flawed and invalid.

Children's IQs: Trasande et al. Respond

Schwartz makes a number of claims regarding our methodology that are inaccurate and based on a selective reading of the literature. In our article, we estimated the health and economic consequences of prenatal methylmercury (MeHg) exposure in the 2000 U.S. birth cohort. Our major findings were that at least 316,588 children in that birth cohort suffered IQ (intelligence quotient) loss of 0.2-24.4 points as a result of MeHg toxicity sustained in utero. This loss of intelligence causes diminished economic productivity that will persist, and this lost productivity is the major monetary consequence of methylmercury toxicity. We used the most up-to-date publicly available data on mercury exposures and health outcomes, applied a risk assessment approach developed by the National Research Council (NRC 1994), and made conservative assumptions throughout. To compute decrements in IQ that resulted from prenatal mercury exposures, we used data on the percentages of women of childbearing age in 1999-2000 with mercury concentrations ≥ 3.5, 4.84, 5.8, 7.13, and 15.0 µg/L. These data most closely reflect exposure to women in the years 1999-2000, when toxicity to the developing brains of children in the 2000 birth cohort would have occurred. We then applied logarithmic and linear models to these data, and we calculated a range of IQ decrements for each subpopulation born with a cord blood mercury concentration > 5.8 µg/L. To assess a range of possible outcomes, we conducted a sensitivity analysis in which we applied a range of IQ decrements for each increase in mercury concentration. We described our methods in great detail. Through this series of calculations, we generated upper and lower ranges of possible IQ decrements for each subpopulation among the most highly exposed children in the 2000 U.S. birth cohort. In his letter, Schwartz asserts that it is impossible to impute effects on children's intelligence of prenatal exposures to mercury near the U.S. Environmental Protection Agency's (EPA) reference dose (RfD). In proffering this assertion, he appears to ignore a recent meta-analysis of the three studies that confirmed a dose-response relationship between low-level prenatal MeHg exposure and IQ (Cohen et al. 2005). A recent U.S. cohort study has also detected decrements in visual recognition memory among children exposed prenatally to MeHg (Oken et al. 2005). Schwartz suggests that we should have used the U.S. EPA benchmark dose lower limit (BMDL) of 58 µg/L as a cutoff. He apparently assumes that no injury occurs to fetal brains from exposure to MeHg below that level. That approach does not reflect biologic or epidemiologic reality. We based our selection of 5.8 µg/L as a no adverse effect level on the epidemiologic evidence (Kjellstrom et al. 1989), not on the U.S. EPA's regulatory documents. We relied especially upon the NRC's report on prenatal exposure to MeHg (NRC 2000), which concluded that the likelihood of subnormal scores on neurodevelopmental tests increased as cord blood mercury concentrations increased from levels as low as 5 µg/L. Methylmercury exposure has also been associated with persistent delays in peak I-III brainstem-evoked potentials at cord blood levels < 5 µg/L (Murata et al. 2004). Schwartz misrepresents Crump et al.'s findings (1998), stating that they "superseded previous reports and found no IQ reduction." In fact, the NRC (2000) stated that Crump et al.
reported nonsignificant results from a regression analysis on all the children in the New Zealand cohort, but [that these results became significant] after omission of a single child whose mother's hair Hg concentration was 86 ppm (4 times higher than that of the next highest exposure level in the study). Schwartz misrepresents our characterization of the Seychelles Islands study (Landrigan and Goldman 2003; Myers et al. 2003), accusing us of stating that it had half the statistical power of the Faroe Islands study. In actuality, we stated that the Seychelles study "had only 50% statistical power to detect the effects observed in the Faroes." Schwartz asserts that the NRC's choice not to apply the Seychelles data in setting an RfD represents equivocation about the health effects of MeHg. In actuality, the NRC came to the same conclusion as we did: "[t]he weight of the evidence of developmental neurotoxic effects from exposure to MeHg is strong" (NRC 2000). Recent work (Trasande et al. 2006) suggests that our calculation of the economic costs may, in fact, be an underestimate. The new study indicates that downward shifts in IQ are also associated with thousands of excess cases of mental retardation (defined as IQ < 70) in the United States each year. Care of these children is associated with needs for health care, special education, and other services that impose a great burden on society. All of these adverse consequences can be prevented by prevention of prenatal exposure to MeHg.
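The dose arithmetic disputed in the exchange above can be laid out explicitly. The following is a minimal, hypothetical sketch: the 58 µg/L benchmark dose lower limit and the 3.15 enrichment factor are quoted in the letters, while the composite uncertainty factor of 10 is an assumption inferred from the 58 to 5.8 µg/L relationship rather than a figure stated by either correspondent.

# Hypothetical sketch of the disputed dose arithmetic. The BMDL and enrichment
# factor are quoted in the correspondence; the composite uncertainty factor of 10
# is inferred from the 58 -> 5.8 ug/L relationship, not stated in the letters.
BMDL_CORD_BLOOD = 58.0  # ug/L, benchmark dose lower limit (quoted above)
COMPOSITE_UF = 10.0     # composite uncertainty factor (assumed)
ENRICHMENT = 3.15       # cord:maternal factor said to be inside the composite UF

reference_value = BMDL_CORD_BLOOD / COMPOSITE_UF  # 5.8 ug/L, the RfD-equivalent cutoff
double_counted = reference_value / ENRICHMENT     # ~1.84 ug/L if enrichment is applied twice
print(f"reference value: {reference_value:.1f} ug/L; "
      f"double-counted cutoff: {double_counted:.2f} ug/L")

On these assumptions, lowering the 5.8 µg/L cutoff by the enrichment factor a second time applies the same adjustment twice; whether that characterization of the original analysis is fair is exactly what the response above contests.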
RET Receptor Tyrosine Kinase: Role in Neurodegeneration, Obesity, and Cancer

Rearranged during transfection (RET) is the tyrosine kinase receptor that under normal circumstances interacts with its ligands at the cell surface and mediates essential roles in a variety of cellular processes such as proliferation, differentiation, survival, migration, and metabolism. RET plays a pivotal role in the development of both the peripheral and central nervous systems. RET is expressed from early stages of embryogenesis and remains expressed throughout all life stages. Mutations either activating or inhibiting RET result in several aggressive diseases, namely cancer and Hirschsprung disease. However, the physiological, ligand-dependent activation of the RET receptor is important for the survival and maintenance of several neuronal populations and for appetite and weight gain control, thus providing an opportunity for the development of disease-modifying therapeutics against neurodegeneration and obesity. In this review, we describe the structure of RET, its signaling, and its role in normal conditions as well as in several disorders. We highlight the differences in the signaling and outcomes of constitutive and ligand-induced RET activation. Finally, we review the data on recently developed small molecular weight RET agonists and their potential for the treatment of various diseases.

Introduction

Receptor tyrosine kinases (RTKs) are transmembrane proteins conveying extracellular stimuli into the cell. RTKs are expressed in almost every, if not all, cell types in the organism and play pivotal roles in different cellular functions such as proliferation, cellular differentiation, cell survival, cell migration, and metabolism. There are 58 different RTKs in humans with similar molecular structures, which are activated by ligands binding to their extracellular domains [1]. All RTKs have an extracellular domain that interacts with a ligand (directly or indirectly) and an intracellular kinase domain that is activated upon ligand binding and catalyzes autophosphorylation. These two domains are connected by a transmembrane domain. Apart from these three structural domains, there is a juxtamembrane domain that was initially thought to be just a mechanical linker between the two parts of the protein. However, recent studies show that it may also regulate the function of at least some RTKs [2]. Ligand interaction with the extracellular domains of RTKs promotes their dimerization or oligomerization and triggers the phosphorylation of tyrosine residues in their kinase domains. Phosphorylated tyrosine residues recruit adapter proteins and trigger the activation of intracellular signaling cascades [1]. The main objective of this review is to highlight the importance of rearranged during transfection (RET) in health and disease. The present review is focused on the structure, function, and role of RET in neurodegeneration, obesity, and cancer. Furthermore, this review updates recent findings on how RET can be targeted with small molecules for the treatment of various disease conditions. RET is unique in that, unlike other RTKs, it does not bind its ligands directly. Instead, it forms a tripartite complex consisting of a dimeric ligand, two molecules of a ligand-binding co-receptor (either a glial cell line-derived neurotrophic factor (GDNF) family receptor alpha (GFRα) or GDNF family receptor alpha-like (GFRAL)), and two molecules of RET.
This, on the one hand, provides an opportunity to target RET selectively in diseases via its co-receptors or via the interaction surface between a co-receptor and RET. This is important because the kinase domain of RET is structurally similar to the kinase domains of other RTKs; therefore, it is difficult to find selective molecules acting via the kinase domain. On the other hand, GFRα1 can regulate RET signaling in such a way that signaling bias may exist for a particular stimulus [3], and this may allow RET to orchestrate cellular processes more precisely. However, disturbances in the cellular levels of RET and GFRα co-receptors can lead to undesirable consequences, such as RET activation in the absence of a ligand, which can potentially result in the formation of malignant tumors [4,5].

RET Receptor Tyrosine Kinase

RET was identified as an oncogene activated by the recombination of DNA [6,7]. Unlike other RTKs, RET contains cadherin-like repeats in the extracellular domain (Figure 1). The N-terminal region of RET consists of four cadherin-like domains (CLDs 1-4), each of 110 amino acid residues, and a cysteine-rich region. The calcium-binding site is present between CLD2 and CLD3. The N-terminal region of RET encodes a signal sequence (residues 1-28) that directs RET to the endoplasmic reticulum (ER). The extracellular domain of RET possesses 12 glycosylation sites that undergo extensive glycosylation in the ER to form 150 kDa RET. Further modification of RET occurs in the Golgi to form the mature 170 kDa RET. Glycosylation increases the stability of the mature RET [8,9]. Inactivating mutations in the intracellular and extracellular domains are associated with Hirschsprung disease (HSCR), which is explained later in the text (see Section 3.3). The binding of ligand is calcium dependent, and calcium ions are required for RET-ligand complex formation, which further induces RET autophosphorylation [10,11]. Furthermore, calcium is necessary for the proper folding of RET in the ER [12].

Figure 1. The extracellular domain of RET contains four cadherin-like repeats and a cysteine-rich domain. Ca2+ ions bind to the extracellular cadherin-like domains of RET, which is required for its activation. The intracellular domain of RET contains a typical kinase domain. RET has three isoforms (RET9, RET43, and RET51), which differ in their carboxy-terminal amino acids. RET9 and RET51 are evolutionarily highly conserved. RET is phosphorylated at multiple tyrosine residues when activated by different ligands. Phosphorylated tyrosine residues serve as docking sites for various adaptor proteins that induce the activation of downstream signaling pathways essential for cell growth, proliferation, survival, differentiation, or appetite control. The black line indicates the binding of adapter protein and the activation of downstream signaling pathways. The red line indicates mutations in the RET region that are responsible for diseases such as multiple endocrine neoplasia (MEN) syndromes 2A and 2B and Hirschsprung disease (HSCR).

The extracellular region also includes 120 residues of the cysteine-rich region, which is adjacent to the transmembrane domain. The intracellular domain of RET contains a typical kinase domain. The alternative splicing of RET results in three different protein isoforms, i.e., RET9 (1072 amino acids), RET43 (1106 amino acids), and RET51 (1114 amino acids) [13,14]. In most tissues, all these isoforms are co-expressed.
However, the expression of the RET9 isoform is much higher than that of the RET51 isoform, while the expression of RET43 is much lower than that of the RET51 isoform [14]. Targeted mutagenesis of the mouse genome to express either RET9 or RET51 alone revealed that mice lacking RET51 are viable and appear normal, whereas mice lacking RET9 have defects in the innervation of the gut and in renal development [14,15]. The ligand-induced dimerization of RET leads to the autophosphorylation of various tyrosine residues and further activates intracellular signaling cascades, which affect a number of cellular processes [3]. Under normal conditions, RET is activated by glial cell line-derived neurotrophic factor (GDNF) family ligands (GFLs). GFLs belong to the transforming growth factor beta (TGFβ) superfamily. Traditionally, four proteins, namely GDNF, neurturin (NRTN), artemin (ARTN), and persephin (PSPN), were referred to as GFLs. Recently, another protein, GDF15, was shown to signal via RET. GDF15 is a distant member of the TGFβ superfamily with a close relationship to the GFLs. Like other TGFβ members, GDF15 includes a highly conserved pattern of seven cysteine residues in its mature domain.
Of the seven cysteine residues, six form highly stable intra-chain disulfide bonds, and the remaining one forms an inter-chain disulfide bond. Like the other members, GDF15 is secreted as a dimeric protein. It can therefore be considered a fifth GFL [16,17]. GDNF first binds glial cell line-derived neurotrophic factor family receptor alpha 1 (GFRα1) and consequently forms a tripartite complex with RET. Among the other GFLs, NRTN binds GFRα2, ARTN binds GFRα3, and PSPN binds GFRα4 in order to form a complex with RET and induce signaling [18,19]. However, ligand-co-receptor preferences are not absolute, and the co-receptors can be unselective. For example, GDNF can also bind GFRα2, and NRTN, ARTN, and PSPN can also bind GFRα1 [20][21][22]. GDF15 binds to a distant orphan member of the GFRα family called GFRAL and further forms a complex with RET. Interactions between GDF15 and the GFRα co-receptors have not been reported [23][24][25][26]. The binding of GFLs to GFRα co-receptors recruits two molecules of the RET receptor into lipid rafts [27,28]. As a result, the formation of the signaling complex is completed, and trans-autophosphorylation of tyrosine residues in the intracellular domain of RET occurs. The intracellular domain of RET contains 12 autophosphorylation sites, including Y687, Y752, Y806, Y809, Y826, Y900, Y905, Y928, Y981, Y1015, and Y1062 (Figure 1). The phosphorylated tyrosine residues serve as docking sites for several adapter proteins, which in turn activate intracellular signaling. Y905 is the docking site for Grb7/10; Y1096, which is unique to the long RET51 isoform, is the docking site for Grb2; Y1015 is the docking site for phospholipase C; and Y981 is the docking site for c-Src. Y1062, which is present at the carboxy terminus of RET, serves as a docking site for several adapter proteins such as Shc, insulin receptor substrate 1/2 (IRS1/2), fibroblast growth factor receptor substrate 2 (FRS2), downstream of tyrosine kinase 1/4/5 (DOK1/4/5), and Enigma [18,29]. The phosphorylation of Y1062 activates multiple downstream signaling pathways, such as the mitogen-activated protein kinase (MAPK) pathways RAS/extracellular signal-regulated kinase (ERK) and p38MAPK, the phosphatidylinositol-3-kinase (PI-3K)/AKT pathway, and the Rac/c-Jun N-terminal kinase (JNK) pathway. The activation of these downstream signaling pathways is necessary for cell survival, differentiation, proliferation, motility, and functioning [16,18,30].

Role of RET in Various Disease States

RET plays an important role in the normal development of both the peripheral and central nervous systems and also has functions outside the nervous system. In the central nervous system, RET is expressed in the ventral midbrain, the ganglion layer of the retina and olfactory epithelium, undifferentiated neuroepithelial cells of the ventral neural tube, the spinal cord, and the hindbrain [31][32][33]. In the adult brain, the expression of RET is restricted to the midbrain, cerebellum, pons, and thalamus [34]. The expression of RET is also observed in the kidney, thyroid, and lungs [35]. Mutations in RET change the activity of the receptor and result in various diseases. Mutations that lead to the constitutive activation of RET result in human multiple endocrine neoplasia (MEN) syndromes 2A and 2B, while mutations that inhibit RET activation can cause Hirschsprung disease (HSCR) [5,36].
In addition, the ligand-dependent activation of RET can be important for treating various disease conditions caused by neuronal degeneration or by disturbances of the functional activity of neurons, e.g., Parkinson's disease (PD), neuropathic pain, retinitis pigmentosa (RP), and obesity (Figure 2). Here, we present a detailed review of the role of RET in various disease states.

Figure 2. Ligand-based activation of RET is essential for the development of both the peripheral and central nervous systems and also outside the nervous system. Therefore, targeting RET with agonists can be a useful approach in the treatment of neurodegenerative diseases and obesity, and RET antagonists may have a role in the therapy of RET-dependent cancers.

Normal Function of RET in DA Neurons and Its Implication in Parkinson's Disease

The physiological role of RET has been extensively studied in dopamine neurons because GDNF was discovered as a survival factor for these cells [37]. Later, RET was identified as the receptor through which GDNF triggers neurite outgrowth and the survival of central nervous system neurons [38]. In mice, RET is expressed in ventral midbrain dopamine neurons from 12.5 days postcoitum (dpc) until birth, and it remains expressed throughout the lifespan [31]. Constitutive Ret knockout mice die shortly after birth due to the absence of kidneys. However, these mice have normal midbrain dopamine neurons [39,40], suggesting that RET is dispensable for the embryonic development of the dopamine system. Since RET is the major receptor for GDNF signaling, the function of RET was studied in adult midbrain dopamine neurons by selective ablation of the Ret gene. In two independent studies, both groups reported no change in the survival of dopamine neurons during the first 9 months of mouse life [41,42]. However, Kramer et al. reported progressive, late degeneration of dopamine neurons in RET conditional knockout mice compared to age-matched controls when the experimental animals were monitored for a period of two years. Further, they reported that the loss of neurons was accompanied by inflammation and gliosis. These data delineate RET as an important regulator of the long-term maintenance of the nigrostriatal adult dopamine system [42]. MEN2B, an inherited cancer syndrome that is described in more detail in Section 3.6, is often caused by the presence of constitutively active RET as a result of a point mutation in the gene encoding this RTK. In mice overexpressing a variant of the RET gene with a mutation causing MEN2B, the levels of dopamine and dopamine metabolites were found to be increased in different brain regions, including the striatum. In addition, the levels of tyrosine hydroxylase (TH, a key enzyme of dopamine synthesis) protein and TH mRNA were also increased, along with the number of TH-positive cells in the substantia nigra pars compacta (SNpc), suggesting the importance of RET activity in the maintenance of the dopamine system [35,43]. We have also shown that RET is required for the survival of naive cultured dopamine neurons as well as for neuroprotection when they are challenged with a neurotoxin. Neither the RET agonist BT13 nor GDNF promotes the survival of cultured embryonic dopamine neurons lacking RET. Furthermore, both BT13 and GDNF protect dopamine neurons from 6-OHDA- and MPP+-induced cell death only when the neurons express RET [44,45].
Recently, RET signaling activated by its ligand GDNF has been shown to prevent Lewy pathology in midbrain dopamine neurons, which further highlights the importance of RET for the maintenance of dopamine systems [46].
PD is a progressive neurodegenerative disease that most profoundly affects the dopamine neurons in the SNpc [47]. The loss of dopamine neurons results in a deficiency of dopamine, which then induces motor impairment. Motor disturbances serve as diagnostic symptoms of PD. Other neuronal populations all over the body are also affected, and their loss or dysfunction causes non-motor symptoms that can precede motor symptoms by several years and even decades. There are no drugs to cure PD. Current therapy provides only symptomatic treatment for PD patients. Due to the importance of RET signaling in the dopamine system, as highlighted above, GFLs have been tested in both preclinical and clinical settings. GFLs were found to promote the survival of midbrain dopamine neurons both in vitro and in vivo [37,48,49]. Furthermore, GFLs provide both neuroprotection and neurorestoration when studied in various toxin-based models of PD in rodents and primates [48,[50][51][52][53][54][55][56][57]. Based on the promising results in preclinical studies, clinical trials were conducted with GDNF and NRTN. However, the outcomes of the clinical trials performed with GFLs in PD patients are inconclusive. Phase I/II clinical trials conducted using recombinant GDNF and adeno-associated virus 2-encoded NRTN (AAV2-NRTN, CERE-120) indicated that both treatments were well tolerated. Improvement in the motor performance of at least some patients was seen, along with an increase in [18F]DOPA uptake in the brain [58][59][60]. The latter parameter indicates an increase in the function and likely the level of the dopamine transporter, which suggests the restoration of dopamine neuron terminals in the putamen. Despite promising preliminary data, in double-blinded placebo-controlled trials with both of these GFLs, a statistically significant improvement in the motor function of patients was not achieved. However, an increase in [18F]DOPA uptake in the brains of PD patients was detected [61][62][63][64]. According to the data from the double-blinded placebo-controlled study carried out with AAV2-NRTN, early-stage PD patients benefited from the treatment more than advanced-stage PD patients [65]. Moreover, post hoc analysis of recent clinical trials with GDNF revealed an improvement in motor function in 43% of the patients treated with GDNF [63,64]. We have provided a detailed review of the results of clinical trials conducted with GDNF and CERE-120 in our previous review [66]. While GFL proteins have had limited success in clinical trials in PD patients, targeting RET can still be a valid approach for PD treatment. The poor tissue distribution of GFLs, caused by their binding to heparan sulfate proteoglycans, might have resulted in partial coverage of the putamen in PD patients, which was insufficient to observe a statistically significant improvement in motor scores. The main participants in clinical trials with GFLs were late-stage PD patients; for ethical reasons associated with the invasiveness of GFL delivery, it is very difficult to recruit early-stage patients into these trials. In the brains of late-stage patients, most of the dopamine cell bodies and fibers have already degenerated, and hence they are unlikely to benefit from GFL-based therapy [53]. The problems associated with GFL delivery into the brains of PD patients could be solved by developing small-molecule RET agonists with better pharmacokinetic and pharmacodynamic properties that cross the blood-brain barrier.
This would allow early-stage PD patients to be included in clinical trials. Thus, targeting RET in PD patients can be a disease-modifying strategy, but further research is needed to reach this goal.

Retinitis Pigmentosa and Other Eye Diseases

RP is a rare genetic disorder with a prevalence of approximately 1:4000, which is caused by the degeneration of photoreceptors in the retina [67,68]. Degeneration starts from the rods on the periphery, but at later stages, the cones in the macula and fovea are also affected. Symptoms include loss of night and peripheral vision, which worsen with time, eventually leading to complete blindness. The death of photoreceptors is accompanied by the accumulation of pigment on the periphery of the retina, seen during ophthalmological examination [67,69]. The condition is incurable. Some reports suggest protective effects of vitamin A and fish oils in RP patients, but a recent Cochrane systematic review concludes that the benefits of these treatments are uncertain [68]. The genetics of RP is diverse and complex; mutations in more than 40 genes have been found to be associated with the disease [70]. This complicates the development of gene therapy-based approaches to treat RP. In some animal models of RP, AAV-encoded GDNF slowed down the morphological and functional deterioration of the retina [71,72]; however, high levels of GDNF secretion accelerated the degeneration of photoreceptors. Other authors failed to see protective effects of GDNF in animal models of RP [73,74], while detecting an effect of RET activation by, e.g., a small molecular weight agonist [73]. The lack of GDNF efficacy in RP can be related to the level of transgene expression [73] or to poor diffusion of the protein in the eye [74]. GDNF also exerted trophic effects toward axotomized retinal ganglion cells [75], revealing its potential usefulness for the treatment of glaucoma. These results establish RET as a target for novel therapeutics in the above-mentioned eye diseases. The supportive effects of GDNF toward photoreceptors are indirect and mediated by retinal glial cells (Müller cells), which express both GFRα and RET [30,31]. Müller cells secrete various trophic factors and are critical for the survival of retinal neurons in diabetes. Therefore, targeting the GDNF receptor RET with a protein, peptide, or small-molecule ligand could potentially also slow down the progression of diabetic retinopathy [76,77].

Hirschsprung Disease (HSCR)

HSCR is a rare disease (population incidence 1:5000 live births) caused by a disturbance in the development of the enteric nervous system and characterized by the absence of enteric ganglion cells in a part of the lower gastrointestinal tract that is variable in length [36,78]. The main treatment strategy is surgical removal of the affected portion of the intestine, but motility problems remain, thus limiting the long-term therapeutic efficacy of this approach. Genetic factors play a major role in the pathogenesis of HSCR, with RET being the primary gene associated with the disease. Mutations in RET were found in approximately 50% of patients with familial HSCR and up to 20% of sporadic cases [78]. According to recent meta-analysis data, mutations associated with HSCR can occur at almost any site in RET, but they are most commonly found in exons 13 (11.32%) and 15 (7.55%), both encoding the RET kinase domain, and exon 10 (7.55%), encoding part of the cysteine-rich domain [79,80].
These mutations are inactivating; they abrogate RET signaling, which prevents neural crest cell migration and distorts the enteric nervous system. The earlier in development a mutation takes effect and neural crest cell migration is blocked, the longer the aganglionic segment will be [80]. In animal models, the down-regulation of GFRα1 expression also resulted in HSCR [81], and in biopsies of a subset of HSCR patients, a reduced level of GFRα1 protein was detected, further supporting the role of the GFL/GFRα/RET axis in the development of this condition [82].

Neuropathic Pain

Neuropathic pain is defined by the International Association for the Study of Pain as a "pain that arises as a direct consequence of a lesion or diseases affecting the somatosensory system" [83]. It affects up to 10% of adults [84] and imposes a significant economic burden on society. According to Schaefer et al., the estimated total annual costs of neuropathic pain were equal to 27,259 USD per patient [85]. Neuropathic pain can appear as a result of a traumatic nerve lesion; a disease, e.g., diabetes, viral infection, or cancer; or as a side effect of treatment with, e.g., anticancer drugs or opioids [86,87]. It is more common in women and the elderly [86]. Thus, due to the increasing prevalence of underlying conditions and an aging population, the number of affected people is expected to grow in the future. The treatment of neuropathic pain is a challenge for healthcare professionals. Available drugs manage the condition poorly. Any given analgesic produces at least 50% pain relief in fewer than 30% of patients [88], and with any combination of existing drugs, adequate pain control can be achieved in approximately half of patients. Tolerance and dependence are common side effects of currently available analgesics. None of the drugs used nowadays to treat neuropathic pain is considered disease-modifying. In neuropathic pain states, sensory neurons are damaged. RET and GFRα co-receptors are expressed in a significant portion of healthy sensory neurons, and their expression is upregulated after lesion in rodents [89]. Up to 80% of human sensory neurons express RET [90]. GFLs promote the survival of sensory neurons and therefore have disease-modifying potential in neuropathic pain. However, their involvement in nociception is complex, and the effects can depend on the dose, administration schedule, administration site, condition of the animals, and the disease model. In our recent review, we described these issues in detail [66]. In several models of neuropathic pain, GDNF and ARTN were shown to provide an analgesic effect and restore lesioned sensory neurons [91][92][93], thus showing disease-modifying potential. However, in inflammatory models, they seem to increase pain. In recent clinical trials, good tolerability and efficacy of ARTN in neuropathic pain patients were observed. However, the dose-response curve was biphasic [94]. Importantly, ARTN provided pain relief in a population of patients resistant to therapy with at least two standard analgesics [94]. These patients are difficult to treat and truly in need of novel drug classes. Adverse events seen in clinical trials mainly included changes in temperature perception, headache, pruritus, and rash, and they were mild or moderate in severity [95]. Since GFLs can also signal through receptors other than RET, some effects of GFLs in the sensory system, e.g., cold-induced pain, can be non-RET mediated [96,97].
A single-cell transcriptome analysis of mouse sensory neurons revealed 11 subtypes of these cells. RET was expressed in low-threshold mechanoreceptors responsible for pain elicited by mechanical stimulation (which is often tested in preclinical models), in neurons responsible for the sensation of itch, and in some other subtypes [98]. Thus, the pruritus reported as an adverse event in some patients treated with ARTN in clinical trials is likely RET mediated. It is clear that RET and GFL signaling plays an important role in pain and analgesia. However, more data are needed to understand the exact action of each component of the GFL/GFRα/RET axis in these processes. The results of clinical trials are promising. Research focused on understanding the molecular and cellular consequences of RET activation in the sensory system, as well as on the evaluation of the efficacy and safety of RET-targeting molecules in preclinical and clinical settings, is important for the development of novel disease-modifying treatments against neuropathic pain.

Role of RET in the Non-Homeostatic Regulation of Body Weight

Obesity and overweight are conditions defined as excessive fat deposition; they can result in diabetes, cardiovascular diseases, osteoarthritis, and cancer. According to the World Health Organization (WHO), in 2016, 1.9 billion people were overweight, and among them, 600 million people were obese. Body weight and feeding behavior are regulated by both homeostatic and non-homeostatic control mechanisms. Under homeostatic conditions, feeding behavior and energy metabolism are controlled by hypothalamic neural circuits that integrate nutrient and hormonal signals from the periphery [99]. However, during stress conditions, an organism uses an alternative program in order to achieve metabolic changes [100]. Recently, GDNF receptor alpha-like (GFRAL), which is expressed in neurons of the area postrema and the nucleus of the solitary tract, has been identified as the target receptor for GDF15 that regulates food intake during stress conditions. GDF15-GFRAL requires RET as a signaling receptor through which it regulates body weight [23,24,26]. Intriguingly, RET is expressed in the area postrema and nucleus of the solitary tract of rodents and humans [101]. However, RET phosphorylation and its downstream signaling events in GFRAL-positive neurons remain to be elucidated.

Role of RET in Cancer

RET was discovered as a protooncogene, and its oncogenic potential has always been acknowledged. Much research has been conducted on mutated, constitutively active forms of RET, which play a major role in thyroid cancer, pheochromocytoma, and parathyroid hyperplasia, as well as in the development of lung cancer in a subset of patients. In recent years, reports regarding the role of wild-type RET, activated by its cognate ligands, in the progression of tumors originating from other tissues have started to appear [102,103], but this field is much less studied, and final conclusions are yet to be drawn. The clinical features of RET-dependent cancers and extensive data on RET expression in different tumor types are reviewed elsewhere [29,102,103,104]. In the present review, we focus instead on neglected aspects of GFL/GFRα/RET signaling in the context of oncogenic transformation, providing only minimal background on the above-mentioned issues. In particular, we discuss the differences between the constitutive and ligand-induced activation of RET and the possible involvement of GFL co-receptors in tumor progression and invasion.
Oncogenic Potential of Constitutively Active Forms of RET

The ligand-independent activation of RET is caused by gain-of-function mutations manifesting clinically as MEN2, or by the formation of a fusion protein containing the intracellular kinase domain of RET and the N-terminal domain of another protein with the ability to dimerize, resulting in the development of papillary thyroid carcinoma (PTC) [102,105]. RET bearing gain-of-function mutations is constitutively active and continuously stimulates signaling cascades such as ERK and PI3K/Akt, promoting proliferation, survival, and metastasis [106]. In contrast to physiological conditions, in the presence of mutated RET the activation of the above-listed intracellular cascades is not balanced by negative regulation mechanisms, further contributing to the oncogenic transformation of the cell [107]. MEN2 is diagnosed in 5-10% of thyroid cancer patients and includes three conditions: MEN2A, accounting for the vast majority of MEN2 cases; familial medullary thyroid carcinoma (FMTC), occurring in 10-20% of MEN2 patients; and MEN2B, identified in approximately 5% of MEN2 patients. All MEN2 patients have medullary thyroid carcinoma, and approximately 50% of MEN2A and MEN2B patients also develop pheochromocytoma. In addition, 20-30% of MEN2A patients also have parathyroid disease. The MEN2B phenotype is the most aggressive, has an early onset, and, if untreated by thyroidectomy, leads to death in half of patients by the age of 25 years. Patients with a MEN2B mutation in RET (mainly Met918Thr) are recommended to undergo prophylactic thyroid surgery during their first year of life, while for others, surgery is suggested within the first 5 years of life or even later [104]. Mutations in RET are identified in approximately 50% of patients with sporadic medullary thyroid carcinoma [25]. In patients with MEN2A, mutations occur in the extracellular domain of RET and lead to the ligand-independent formation of a covalent dimer, whereas in patients with MEN2B, the mutations typically occur in the RET kinase domain and are accompanied by activation of the monomeric form. The most common mutation leading to MEN2A is C634X, and the most common mutation leading to MEN2B is M918T [103]. In FMTC, mutations are found in both the intracellular and extracellular domains of RET [103,105]. PTC is the most common thyroid cancer. It is associated with RET rearrangements in 35% of patients from North America, and in other populations this figure can vary from 25% to 65% [103,108]. A higher incidence of RET/PTC rearrangement is seen in children and upon exposure to radioactive iodine isotopes [109]. For instance, RET/PTC rearrangements were identified in 51.3-77% of tumor specimens collected from 5-18-year-old children exposed to radiation after the Chernobyl reactor meltdown, while in non-exposed children, their prevalence was below 40% [109]. A fusion of the RET kinase domain with kinesin family member 5B was identified in about 1-2% of patients with non-small-cell lung cancer (NSCLC) who were negative for mutations or rearrangements in other common oncogenic drivers such as EGFR, HER, ERBB2, BRAF, KRAS, ALK, etc. [110][111][112]. In addition, in some patients with lung cancer, the M918T (MEN2B) RET mutation and fusions with other proteins were identified [102,103]. Also, approximately 3% of melanocytic neoplasms are positive for RET fusion [113]. RET/PTC isoform expression was detected in breast cancer tumors, where it correlated with estrogen receptor (EsR) expression.
In breast cancer cell lines, RET/PTC was expressed mostly in EsR-positive cell lines; RET/PTC expression was not detected in most of the EsR-negative cell lines. Estrogens were shown to transcriptionally upregulate RET/PTC expression [114]. It is important to note here that the signaling elicited by mutated RET is different in nature from the signaling produced by RET ligands such as GFLs. Mutated isoforms of RET are constitutively active for a long period of time. The signaling elicited by GFLs is pulsatile and self-limiting via degradation of the ligand and receptor by proteases, the activation of silencing mechanisms, e.g., the activation of phosphatases dephosphorylating RET [115], and negative feedback loops in intracellular signaling cascades [116][117][118]. The combination of these events leads to rapid quenching of the signal elicited by a GFL. Interestingly, in the presence of the constitutively active forms of RET, the mechanisms of its negative regulation are also activated [115]. However, in this case, the persistent receptor stimulation leads to oscillatory patterns of intracellular signaling activation, e.g., in the ERK signaling cascade (Sidorova et al., unpublished observation), which is also predicted to occur in the presence of the natural ligand. Nevertheless, in the presence of natural ligands, these oscillations are difficult to detect experimentally due to their small amplitude, short duration, and rapid changes [115]. Importantly, constitutively active RET signals not only at the cell surface but also in the ER during protein maturation. RET MEN forms are already active in the ER and signal on their way to the cell surface. RET/PTC variants signal in various cellular compartments [105]. Wild-type RET signaling occurs in lipid rafts, where it is recruited by GFRα co-receptors. Transition to rafts is necessary for the efficient activation of intracellular signaling pathways and subsequent events at the cell and tissue levels, e.g., cell survival and organ formation [28,119]. Mutated RET variants can also trigger intracellular cascades from outside lipid rafts, since they signal in the absence of a co-receptor and their recruitment to rafts is GFRα-dependent; therefore, the pattern of activated secondary messengers can differ between wild-type and mutated RET. Despite these mechanistic spatio-temporal differences in signaling, ligand-activated RET is considered able to contribute to the invasion of tumor cells and the progression of oncogenesis, as described in the next section.

Oncogenic Potential of Wild-Type RET

Extensive in vitro data unequivocally demonstrate that in breast cancer cell lines, GDNF promotes cell migration and survival in an RET-dependent manner, rendering cells insensitive to anticancer drugs targeting EsR or aromatase. Similarly, the proliferation and survival of pancreatic and prostate cancer cell lines, which often express GFRα1 and RET, can be promoted by GDNF [120][121][122][123]. The pharmacological inhibition of RET with panspecific kinase inhibitors restores the sensitivity of breast cancer cell lines to tamoxifen, fulvestrant, and letrozole [124][125][126]. In animal models, additive effects on tumor size of treatment with a combination of an anti-EsR agent and a RET inhibitor were not detected, although the metastatic index in the lungs was lower in the case of dual inhibition of RET with panspecific kinase inhibitors and of EsR with tamoxifen [126].
There is also a link between inflammation, which often accompanies oncogenesis, and RET expression. The effects of the inflammatory cytokine interleukin 6 (IL-6) on the migration of breast cancer cell lines were abolished in the presence of kinase inhibitors, although this interleukin does not activate RET directly [126]. At least in breast cancer cells, inflammatory mediators may upregulate GDNF expression, thus indirectly triggering RET signaling [125]. However, specific RET inhibitors are not available, and the existing molecules target a broad spectrum of kinases, albeit with different affinities. IL-6 signals via glycoprotein 130, which activates multiple intracellular signaling cascades that are heavily dependent on protein phosphorylation [127]. Therefore, treatment of the cells with a kinase inhibitor can abolish IL-6 signaling independently of RET as well. Analysis of clinical samples collected from patients with breast tumors also demonstrates the overexpression of RET in a significant portion of these specimens. However, there is a discrepancy in the percentage of breast tumors overexpressing RET between the data collected using immunohistochemical and mRNA-level assessment methods. Gattelli et al. reported RET protein overexpression in 74% of breast tumors and found a positive correlation between the level of RET protein and metastasis-free and overall survival. At the same time, an elevated mRNA level of RET was detected in only 30-40% of breast cancer biopsies, and this parameter did not correlate with lymph node metastasis or lymphovascular invasion [4,128]. On the contrary, elevated levels of GFRα1 mRNA were detected in almost 60% of patients' samples, and they correlated with the invasion and metastasis of breast cancer cells. Only 18.1% of tumors were double positive for RET/GFRα1 based on mRNA analysis data [4]. However, RET can also transmit a signal from GDNF in a complex with soluble GFRα1 [9] produced, e.g., by neuronal cells. The percentage of GFRα1-negative, RET-positive breast tumors in the study by Essiger et al. was 0.9%; thus, the other three GFRα co-receptors are unlikely to make a major contribution to GFL-mediated effects in breast cancer [4]. While the discrepancy between immunohistochemistry and mRNA-level data can be explained by differences in the patient populations and the data analysis setup, it is also possible that technical artifacts related to the nonspecific binding of RET antibodies to breast biopsies led to an overestimation of the role of GDNF/GFRα1/RET in breast cancer. Many antibodies against GDNF, GFRα1, and RET are not specific and produce staining even in tissue sections from knockout animals [129]. Specific antibodies to RET were characterized in rodents only a few years ago [113]. Therefore, it is important to support immunohistochemical findings with data on the transcription of these genes. The overexpression of RET was also detected by immunohistochemical methods in 40-65% of samples from pancreatic tumors and 20-75% of samples from prostate cancer, as well as in samples from other cancers (reviewed in detail by Mulligan, 2019 [24]), and it generally correlates with worse prognosis and more advanced tumor stages [102,121,122]. There are also immunohistochemical data showing the overexpression of GFRα1, and the co-expression of GFRα1 and RET, in these specimens, at least in some cases.
However, similar to the breast cancer data, no significant correlation between the expression of other GFL co-receptors and prognosis was identified for pancreatic cancer patients. Taking into account the data for breast cancer samples described above, it is clear that a more detailed characterization of biopsies from patients with pancreatic, prostate, and other cancers for the expression of components of the GFL signaling complex, using more reliable mRNA-level methods of analysis, could well change the overall impression regarding the role of RET in these malignancies. RET differs from other receptor tyrosine kinases in regard to kinase domain activation by phosphorylation: RET has intrinsic catalytic activity, and its enzymatic activity is only slightly increased upon the phosphorylation of tyrosine residues, at least in in vitro settings [5]. This may imply the presence of inhibitory mechanisms in cells that limit the intrinsic activity of RET. These mechanisms can be overwhelmed in the case of RET overexpression, and RET can then become activated in the absence of ligand. Evidence collected in cell cultures showing the effects of pharmacological RET inhibition on survival and proliferation should be interpreted with caution. Specific, or even highly selective, RET inhibitors are yet to be developed, and the compounds used in such research target multiple intracellular kinases. Considering the central role of phosphorylation in cell functioning and division, it is not surprising that in the presence of panspecific kinase inhibitors the survival of cancer cells is diminished. With this, we by no means try to belittle the relevance of these kinase inhibitors in cancer therapy; however, the data produced with these inhibitors in cancer cell lines shed little light on the role of RET in oncogenesis. It is important to remember that GDNF in complex with GFRα1 can also signal RET-independently via the neural cell adhesion molecule (NCAM), and GFRα1-independently through syndecan-3 [130,131]. Generally, GFRα1 is expressed more widely in the organism than RET. Since, based on mRNA-level analysis, the overexpression of GFRα1, but not of RET, in breast cancer has been shown to be associated with cancer metastasis and invasion [4], it is possible that RET-independent, GDNF- and GFRα1-dependent events play a significant role in the tumor malignization process. Further studies are needed to clarify the role of each component of the GDNF/GFRα1/RET pathway in regard to its oncogenic potential. Recent evidence obtained in mice overexpressing GDNF at moderate levels (~2-fold compared to wild-type littermates) revealed no enhancement of tumor formation during their life span [132]. In addition, infusions of GDNF protein or the overexpression of NRTN from viral vectors in the brain of PD patients, as well as systemic injections of ARTN in neuropathic pain patients, did not seem to be associated with oncogenesis [98]. Thus, the ligand-induced pulsatile activation of wild-type, non-overexpressed RET by natural or artificial ligands may not be related to tumor formation or progression and thus may be safe for patients.

Targeting RET with Small Molecules for the Treatment of Diseases

RET plays an important role in the maintenance and survival of both dopamine and sensory neurons, as well as retinal cells. In addition, activating RET in the brainstem region can be a therapeutic strategy for the treatment of obesity.
GFLs are considered potential therapeutic agents for the treatment of various diseases. However, they are not drug-like molecules. GFLs do not cross the blood-brain barrier (BBB); therefore, in PD they have to be delivered via complicated brain surgery. GFLs have high affinity for the extracellular matrix and proteoglycans, which results in poor distribution in the tissues. In addition, production, stability, and long-term storage are further challenges for protein drugs. Protein drugs are easily susceptible to both physical and chemical damage, which often renders them biologically inactive. Furthermore, recombinant protein drugs have short half-lives, making them unsuitable for therapy [133]. GFLs often have more than one target receptor. For example, GDNF functions via the heparan sulfate proteoglycan syndecan-3 [134], NCAM [91][92][93], and the RET receptor. This might result in undesirable effects of GFLs. Therefore, developing small molecules that target RET selectively would address the drawbacks associated with GFLs as drugs. Small molecules can cross the BBB and may have better tissue distribution than GFLs. In addition, small molecules can be given orally or by injection, avoiding the complicated surgery needed to deliver proteins to the brain. We have screened and developed first and second generations of three structurally unrelated classes of RET agonists (BT, HUS, and Q compounds) and tested them in both in vitro and in vivo assays [3,44,45,74,135,136]. Compounds from the BT scaffold were tested in animal models of PD and neuropathic pain. BT13 was shown to support the survival of naive cultured dopamine neurons, protect cultured dopamine neurons from toxin-induced cell death, and promote neurite outgrowth from cultured sensory neurons [3]. BT13 was also able to alleviate motor deficits in the 6-OHDA model of PD as well as attenuate neuropathy-induced pain-like behavior in the rat neuropathic pain model [135]. The second-generation BT compound, BT44, alleviated pain in surgery-based and diabetes-induced models of NP [136]. The second and third groups of RET agonists (Q and HUS compounds), which have better pharmacokinetic and pharmacodynamic properties than the BT compounds, support the survival of photoreceptor neurons in an ex vivo animal model of RP and activate prosurvival intracellular signaling in the retina in vivo [3,74]. However, Q and HUS compounds have not been tested in other disease models, such as PD, neuropathic pain, and the animal model of obesity. Further development of these compounds may eventually result in disease-modifying drugs against neurodegeneration and obesity. The potential mechanism of action at the molecular level was studied for BT compounds using molecular dynamics simulations and docking methods. The combination of in silico and in vitro data indicates that these small molecules most likely bind RET at the RET/GFRα interaction interface, thus mimicking the GFL-GFRα complex [137]. This possibly results in a change in RET conformation and an increase in RET kinase activity, leading to RET phosphorylation and the subsequent activation of intracellular cascades. However, further studies are needed to understand whether other agonists target the same binding site and to identify the molecular changes occurring in the RET molecule after stimulation with small-molecule agonists. Due to the well-established role of RET in various types of cancer, its antagonists are important for antitumor therapy.
The development of specific RET inhibitors is rather challenging, since the RET kinase domain is similar to that of other RTKs. However, a number of small-molecule kinase inhibitors approved for anti-cancer therapy also act as RET antagonists. These molecules also target other RTKs, among them the vascular endothelial growth factor receptors. Although this feature makes it difficult to dissect the effect of RET inhibition alone in cancer treatment, it can be very useful from the therapeutic point of view, because such compounds can also reduce tumor vascularization. In addition, the process of tumor evolution may lead to the development of resistance to compounds specifically targeting a single RTK. Thus, while the identification of specific RET antagonists is an interesting scientific task and is important for understanding the role and mechanism of RET involvement in carcinogenesis, therapeutically such inhibitors may be less attractive. Therefore, current efforts in this field are mainly focused on the development of polyspecific kinase inhibitors with acceptable safety profiles. A detailed review describing the effects, targets, and specificity of individual kinase inhibitors with a focus on RET was recently published by Falco and co-authors [79,138]. The limitation of the present review is its scope. Here, we mostly focused on conditions in which both RET and GFLs are extensively studied and the clinical potential of RET modulation is well established. Therefore, this review is limited to the potential role of RET in some neurodegenerative diseases, cancer, and obesity. However, RET is expressed in various tissues and also in different neuronal populations, such as dopamine neurons, motor neurons, sympathetic neurons, and parasympathetic neurons [31][32][33][139]. RET also regulates the development of the kidney [39], but the importance of RET in kidney disease has not yet been reported. Furthermore, RET-dependent signaling may play a role in other diseases and conditions, e.g., amyotrophic lateral sclerosis (ALS) [140] and addiction [141]. In some tissues, e.g., the hippocampus, the expression of RET is negligible under normal conditions but can be upregulated upon lesion [142]. RET can also be differentially expressed in tissues of experimental animals and humans. Therefore, modulators of RET signaling can, on the one hand, be evaluated for efficacy in a number of different conditions, but on the other hand, the developed agonists may produce some target-related adverse effects. Hence, further studies are needed to evaluate the role of each component of GDNF/GFRα/RET in health and disease and to develop efficient therapeutics targeting these proteins.

Conclusions and Perspectives

Due to the importance of RET-dependent signaling for neuronal survival and appetite control, targeting this pathway with agonists may result in the development of novel disease-modifying treatments against neurodegenerative disorders, chronic pain, and obesity, all of which represent major challenges for healthcare in the modern world. Attempts to use the natural ligands of RET, the GFL proteins, for this purpose have so far achieved rather limited success because of their poor pharmacological characteristics. Obvious alternatives to GFLs for clinical use are small-molecule agonists of RET, GFRα, or GFRα/RET, as well as GFL-derived peptides, a few of which have recently been discovered.
However, the development of these compounds has been hindered by concerns regarding the oncogenic potential of RET activation. Based on the data presented in the current review, it is clear that understanding the role of wild-type RET in oncogenesis requires further studies. Available data suggest that GFRα, rather than RET, may be involved in the malignization process, while the short-term, pulsatile, moderate activation of wild-type, non-overexpressed RET by natural or artificial ligands can be safe for patients. Therefore, the RET agonists described in this review may represent an important step forward in the development of novel treatments for neurodegeneration, pain, and obesity. Acknowledgments: The authors thank Mart Saarma for critical comments and Khushbu Rauniyar for proofreading of the manuscript. Conflicts of Interest: Sidorova is a minor shareholder in GeneCode Ltd., a company owning IPRs for RET agonists from the BT13 family.
2020-10-06T13:34:06.651Z
2020-09-26T00:00:00.000
{ "year": 2020, "sha1": "89db158344f395d40f3b4a6d28c29770bea8503f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/21/19/7108/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13d1ea4a840ab7f85fc64bf8ede8ef84e2ddcf3d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
159407093
pes2o/s2orc
v3-fos-license
Environmental impact analysis as a tool for state regulation of economic activity

Relevance. The transition to a model of sustainable development requires an increase in the effectiveness of state regulation of environmental management, which in turn implies the use of effective regulatory tools, including a set of administrative and economic instruments. Under the conditions of an impending environmental crisis and degradation, environmental impact analysis (EIA) becomes increasingly important; its purpose is to enable environmentally sound decisions by evaluating forecasted impacts and justifying measures to reduce or prevent them. Purpose of the study. Analysis and systematization of the institutional support for EIA, and identification of evolutionary changes in the attitude to environmental assessment, its importance, the order of its implementation, and the existing shortcomings. Results. In the course of the study, the stages of institutional support for EIA in Russia were identified and the expediency of distinguishing four stages was justified. At the first stage, it is not yet a matter of assessment itself, but of the intention to introduce it. At the second stage, separate principles of environmental regulation are implemented when planning business activities, and ecological appraisal (EA) becomes compulsory; this stage can be considered preparatory for the development and approval of laws regarding EIA. At the third stage, the legislative recognition of EIA takes place (1994, 2000): the Federal Law "On Ecological appraisal" is adopted, and a new edition of the Federal Law "On Environmental Protection" includes Article 32, "Conducting the environmental impact analysis". The fourth stage (now in force) is marked by a change in attitude towards the objects of ecological appraisal, brought about by amendments to the Urban Planning Code of the Russian Federation: the substantiation materials are separated from the project documentation. At the same time, the list of facilities requiring EA is significantly reduced, which is entirely illogical in the current situation of an impending ecological crisis. Conclusions. Staging the evolutionary changes in the institutional support for EIA makes it possible to address its improvement on a sound basis, given the "bottlenecks" identified in the research process.

Introduction

State regulation of the economy, according to the author [1], is "the system of measures and activities of the state for sustainable functioning and development of the country's economy in compliance with the socio-economic and other goals approved by society". In contrast to management, it affects a regulated object only lightly, in order to maintain the direction of development and suppress negative situations that hinder the normal course of processes. The state regulation of environmental management fits into the framework of state regulation of the economy, having ecological-economic systems as regulated objects with a multi-target development of economic and environmental subsystems [2,3]. Hence, decisions made under the influence of regulations introduced by the state require consideration of the laws governing the development of the biosphere. The tasks solved in the process of environmental management include:
- implementation of measures for the rehabilitation of natural resources and environmental protection;
- formation of legal support for environmental management [4];
- stabilization of the ecological situation and prevention of its deterioration, etc. [5,6].
Regulatory instruments play an important role in solving the tasks set for the state regulation of environmental management; they include administrative and economic instruments [7]. The choice of particular instruments, based on the most appropriate balance of administrative and economic ones, remains debated, although the effectiveness of government regulation largely depends on it. Nowadays, administrative instruments have high priority, including EIA, environmental impact analysis. EIA is defined as "a process that facilitates the adoption of an environmentally oriented management decision on the implementation of the designed economic and other activities by identifying possible adverse impacts, assessing environmental impacts, taking public opinion into consideration, developing measures to reduce and prevent impacts" (Regulations on the evaluation of the impact of designed economic and other activities on the environment in the Russian Federation, approved by order of the State Committee for Environmental Protection of Russia dated 05.16.2000, No. 372). The effectiveness of EIA is largely determined by the presence of appropriate legal support, its completeness, and the timeliness of its improvement [8].

Results

The emergence of EIA in Russia dates to 1985, when the country began to revise some regulatory and technical documentation, linking design with the requirements of environmental protection. In particular, in the designed system, the interaction between systems was considered in addition to the links between nature and technology (as well as the impact of the projected system on people) [9]. A decree of the Supreme Soviet of the USSR "On compliance with the requirements of legislation for nature preservation and rational use of resources" appeared. The State Committee for Construction of the USSR adopted construction standards and regulations SNiP 1.02.01-85, "Regulations on the composition, procedure for the development, coordination and approval of design specifications and estimates for construction of enterprises, buildings and facilities" 1. According to them, designers were required to assess the environmental situation within the territory of the proposed location of an object and to forecast the impact of construction on the environment. However, as far back as 1980, based on materials from several research institutes of the Ukrainian Soviet Socialist Republic, a methodology was developed for the environmental and economic assessment of projects; it was included in the list of legislative, instructive and regulatory documents on environmental protection and rational use of natural resources [10]. The methodology involved assessment work in four stages: evaluation of the environmentally acceptable construction of new enterprises and facilities and the reconstruction of existing ones, economic justification of projects, minimization of the impact of the designed object on the environment, and determination of the comparative environmental and economic effect of capital investments in the construction of new production facilities and the reconstruction of existing ones. However, it was in force only temporarily, as agreed with the State Planning Committee of the Republic, within the territory of Ukraine, and only until 1982, when it was canceled because the necessary calculations were too cumbersome. This method can be considered an attempt at preventive management of environmental effects, which, after appropriate revision, could be recommended for practical use [11].
In December 1987, the State expert appraisal board of drafts and estimates of the State Committee for Construction of the USSR prepared and approved the "Handbook on drawing up a section (working draft). Environmental Protection" for SNiP 1.02.01-85, which specified and developed some basic provisions and requirements for the protection of environmental components and contained the necessary reference material 2. The following questions were subject to detailed consideration: protection of the atmosphere from pollution, protection of water resources from pollution and depletion, and recultivation of disturbed lands. The manual confirmed the requirement to assess the initial state of the environment prior to construction of the designed object, to identify production factors that have a negative impact on the environment, to develop measures aimed at reducing the anthropogenic impact, and to predict possible changes in adjacent areas. However, there was no authoritative document regulating the procedure for environmental impact analysis. Temporary regulations on conducting EIA during the development of feasibility studies (calculations) and projects for the construction of national economic facilities and complexes first appeared in May 1990 3. Owing to their temporary nature, their validity was limited to 01.01.1992. In 1988, State ecological appraisal began to function in the country; this coincided with the formation of the National Committee of the USSR on nature conservation and a number of similar territorial divisions, as well as the issuing of the Decree of the Central Committee of the Communist Party of the USSR and the Council of Ministers (January 1988) "On the major reconstruction in the sphere of nature conservation in the country", which entrusted the State Committee for Nature Protection with conducting State ecological appraisal. For this purpose, a new subdivision was created (the State Environmental Expert Administration). Section 5, "State environmental appraisal", was singled out in the Federal Law "On Environmental Protection" (1991); it implied the evaluation of consequences, but this type of activity was not disclosed in the Federal Law, and the EIA procedure continued to be regulated by the State Environmental Committee on the basis of subordinate legislation. Later (upon termination of the temporary regulations), Y. L. Maksimenko and I. D. Gorkina prepared the Manual on environmental impact analysis (EIA) for the development of technical and economic feasibility studies of investments and projects in the construction of national economic facilities and complexes [12]. Unlike the Regulations, the manual gives a detailed description of each of the five stages of EIA: development of the concept of the planned activity; determination of environmental impacts; environmental impact identification; project adjustment; and preparation of a statement on environmental consequences. That same year, "Temporary rules for the environmental justification of economic activities in project documentation" were introduced; they were approved by the State Environmental Expert Administration 4 and characterized the content of the information provided for environmental appraisal. These temporary rules merely mentioned EIA, and the resolution on State environmental appraisal adopted by the Council of Ministers in September 1993 did not mention EIA at all.
According to the authors, the prerequisites for the regulatory formalization of EIA were as follows:
- increasing anthropogenic environmental impact due to the growing demand for natural resources;
- awareness, among the recipients who perceive the effects (primarily the population), of the hazardous consequences produced by the changed natural environment;
- the presence of a tried and tested procedure for the appraisal of construction projects, preventing the implementation of design solutions without a positive expert opinion;
- the involvement of environmentalists in the panels of experts preparing decisions on large projects;
- legislative regulation of environmental appraisal and the creation of a special subdivision within the government office for nature preservation (the State Environmental Expert Administration) for conducting the appraisal;
- domestic experience in carrying out EIA, owing to the (fragmentary) performance of such studies by teams of research organizations under contracts with customers.
The draft "Regulation on the environmental impact analysis in the Russian Federation" was developed in the spring of 1994 and then approved in a revised form by an Order of the Ministry of Natural Resources of the Russian Federation in June of the same year 5. The Regulation contained the scope of EIA and requirements for the content of EIA activity; it disclosed the obligations of the participants with respect to EIA, the procedure for holding public hearings, and liability for offenses. The Regulation was accompanied by a list of types and objects of economic and other activities for which EIA during the preparation of supporting documentation for construction is mandatory. The procedure for making an EIA decision for objects excluded from that list was not clearly defined. In contrast to the Temporary Regulations (1990), the Regulation is more concise regarding the content of EIA; the lack of an EIA methodology is a notable drawback. At the same time, the Regulation specifies the applicable scope of EIA and the nature of public hearings. Initially, EIA was defined as a process of forecasting the impacts and consequences of a project or an operating facility [13], or as a process of taking the environmental requirements of the Russian Federation into account when preparing and making decisions on the socio-economic development of society. The year 1995 proved fruitful for legislative instruments and subordinate laws in the sphere of State ecological appraisal. In November 1995, the Federal Law "On ecological appraisal" came into effect, which (with several amendments in 1998, 2004, 2005, 2006, etc.) is still in force, being the main legislative act on ecological appraisal. Article 14 of the Federal Law, in determining the procedure for conducting State ecological appraisal, indicates that the documentation to be examined should include materials on the environmental impact analysis of economic and other activities, but the Federal Law contains no requirements for the preparation of these materials or the procedure for this analysis.
Further, a few subordinate acts were adopted:
- Regulation on the State appraisal procedure (1996);
- Regulation on the environmental justification of economic and other activities (1995);
- Rules of the State ecological appraisal (1997);
- The list of regulatory documents for the State ecological appraisal, as well as for the preparation of the environmental justification of economic and other activities (1997).
During the development of the Regulation "On EIA in the Russian Federation", several methodological provisions (guidelines), manuals, and instructions were developed at the sectoral level, each of which covered the specifics of the environmental impacts caused by the peculiarities of particular technological processes. Two documents issued by the Ministry of Construction in 1995 are noteworthy: SP-11-101-95, "Procedure for the development, coordination, approval and composition of the rationale for investment in construction of enterprises, buildings and facilities", and SNiP 11-01-95, "Regulation on the procedure for the development, coordination, approval and composition of project documentation for construction of enterprises, buildings and facilities" 6,7. The first of them defines the requirements for the justification of investments and calls for an "Environmental impact analysis" section in the pre-project documentation (a feasibility study or a working draft). There are no specific requirements for the preparation of this section in the document, since it is assumed that they should be determined by regulatory documents of the State Ecology Committee. SNiP 11-01-95 contains requirements for project documentation, which should include an "Environmental Protection" section. In 1998, the Guide for the development of the "Environmental impact analysis" section was prepared for SP-11-101-95. Then, in 2000, the Handbook for the preparation of the "Environmental protection" section appeared, supplementing the Regulation on the procedure of development, coordination, approval and composition of project documentation for construction of enterprises, buildings and facilities, SNiP 11-01-95. In 2000, in order to establish uniform rules for the appraisal in the Russian Federation and to determine the main provisions of EIA, the government office for nature preservation of the Russian Federation approved the Regulation on the environmental impact analysis of planned economic and other activities in the Russian Federation. This Regulation is a more detailed document than the previous one. It covers the fundamental principles of EIA and discloses the content of its stages; great attention is paid to information disclosure and public participation in the EIA process. The final confirmation of the significance of EIA was the introduction of Article 32 into the new Federal Law "On Environmental Protection". Along with the legislative requirement for ecological appraisal (Article 33), the procedure for which was determined by the Federal Law "On ecological appraisal", Chapter VI of the Federal Law "On Environmental Protection" contains the requirement for EIA, the materials for which should be established by federal executive authorities in the field of environmental management. In the Regulation, EIA is defined from the standpoint of a process that facilitates the adoption of an environmentally oriented management decision. The same view is expressed in the Federal Law "On Environmental Protection" and in subsequent works [14, among others].
While supporting the authors' view of EIA as a means of justifying environmentally oriented solutions, it should be noted that the existing description of EIA contains no requirement for the economic evaluation of consequences, i.e., for ensuring a balance between environmental and economic targets. Hence, the author's definition: EIA is a process that contributes to making environmentally oriented decisions in the design, planning, approval and implementation of economic and other activities by identifying possible adverse environmental impacts, assessing all types of impacts together with their economic evaluation, taking public opinion into consideration, and preventing impacts. In 2006-2008, there were changes in the Town Planning Code of the Russian Federation (GKF of the Russian Federation) and in several legislative acts, including the Federal Law "On ecological appraisal" and the Federal Law "On Environmental Protection". Under the new edition of the Code, the composition of the project documentation was changed: instead of the "Environmental Protection" section, a "List of Environmental Protection Measures" section was included, and the project was thus distinguished from the substantiation package. All substantiation materials were moved to the pre-project stage, including the environmental substantiation. According to the new version of the GKF of the Russian Federation, a project should contain only those documents that are necessary for construction and control (allowing the construction progress to be monitored), as well as for the environmentally sound operation of the constructed object. In contrast to the legislative requirements in force earlier, the submission for EA of pre-project documentation (feasibility study, investment justification, etc.) and of EIA based on the results of pre-project development is no longer required; a customer may determine the completeness and necessity of pre-project work. The project phase is regulated by law quite clearly and involves in-depth studies, including those on environmental issues. If EIA is not completed at the pre-project phase, it continues during the design stage with the help of engineering-environmental surveys and engineering-geological and geotechnical studies. The results of EIA are included in the "List of Environmental Protection Measures" section. Where required (in accordance with Article 11 of the Federal Law "On ecological appraisal"), the materials are sent for ecological appraisal; in all other cases, the EIA materials, as part of the project documentation, are submitted for government expert review. The authors, based on the systematization and analysis of regulatory documents, as well as experience in the regulation of EIA and EA procedures, have traced the evolution of environmental assessment in Russia and justified the expediency of distinguishing four stages. For comparison, the content of three stages of environmental assessment was described by Norman Lee in [15]. The first stage (1974-1984) is characterized by the presence of certain aspects of environmental project development and ecological appraisal. During this period, a section on the protection of nature and the rational use of natural resources appears in the annual national economic plans and programs.
Territorial Complex Schemes of Nature Protection have been developed since 1978, and the "Environmental Protection" section appears in the project documentation. All these programs and planning documents were subject to government expert review, and construction was not allowed without a positive decision. For large projects, environmentalists were sometimes invited into the expert group to conduct EA. In fact, at this stage it is not yet a matter of assessment itself, but of the intention to introduce it. A distinctive feature of the second stage (1985-1994) is the introduction of some principles of environmental regulation into the planning of economic activities in accordance with SNiP 1.02.01-85 and the development of the first guidance and advisory materials on EIA, as well as the adoption of government decrees and resolutions on the necessity of ecological appraisal and the creation of the State Environmental Expert Administration. The Federal Law "On Environmental Protection" confirmed EA as a mandatory activity. The second stage can be considered preparatory for the development and approval of the relevant laws covering changes in the composition of project documentation and the procedures for carrying out EIA and EA, changes that later led to a weakening of the environmental factor in project development. The third stage (1994-2003) is distinguished by the legislative recognition of EIA and EA. In 1994, the Regulation on EIA in the Russian Federation was approved; in 2000 it was amended, taking into account the fundamental recommendations of SP 11-101-95. In 1995, the Federal Law "On ecological appraisal" and several subordinate acts were adopted, including the Regulation on the environmental justification of economic and other activities (1995). The new edition of the Federal Law "On Environmental Protection" (2002), along with making ecological appraisal mandatory, pays attention to EIA (a specific Article 32, "Conducting the environmental impact analysis", appeared in Chapter VI of the Federal Law). The fourth stage covers the period from 2004 to the present and is characterized by a change in attitude towards the composition of project documentation for the objects of ecological appraisal. Amendments were introduced to the Town Planning Code of the Russian Federation and, accordingly, to several legislative acts. The substantiation materials were separated from the project documentation, and for pre-project materials, including EIA, a positive ecological appraisal decision is no longer required. The list of project materials sent for EA is significantly shortened compared to the previous one; this indicates an inexplicable change in attitude towards EA, which is illogical in the context of an impending ecological crisis and the recognized need to move to a new development model, and the situation remains debated. A customer reserves the right to determine the completeness of pre-project studies, while the project documentation focuses on the development of environmental protection measures. Currently, there are many recommendations and manuals on EIA for various types of activities. At the same time, there are no new regulations governing the composition, content and scope of pre-project work (SP 11-101-95 and SNiP 11-01-95 are still in use). A Federal Law on EIA has still not been prepared; only the regulation of ecological appraisal has been legislatively formalized.
Conclusions

The analysis of the legal support for regulation by means of environmental impact analysis made it possible to systematize the information considered and to identify four stages in the evolution of environmental assessment in Russia. Changes in attitudes towards EIA and ecological appraisal were traced, the completeness of the legal documents that ensure the effectiveness of EIA was examined, and the existing shortcomings were revealed. It is necessary to expand the list of project materials sent for EA, which was unjustifiably shortened, and to adopt a Federal Law "On environmental impact analysis" (that is, to give the EIA regulation legislative status). In confirmation of the significance of EIA, it should be mentioned that in the USA, EIA is carried out by federal departments (not customers): the relevant department bears responsibility for the EIA and covers its costs from tax revenues [16].
2019-05-21T13:05:38.528Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "0016e82c596b24b638942e61e76be172a2da5428", "oa_license": "CCBY", "oa_url": "https://iuggu.ru/download/2019/1-53-2019/Ivanov.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7353f1b4d3ccfd9b95a9df5ced5d9656e98239ae", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
22828480
pes2o/s2orc
v3-fos-license
Polymerization of Cyclic Esters Initiated by Carnitine and Tin (II) Octoate

Low-molecular-weight poly(ε-caprolactone), polylactides and copolymers of ε-caprolactone and lactides were obtained by the polymerization of cyclic esters in the presence of a carnitine/SnOct2 system. Their structures were proven by means of MALDI-TOF, IR and NMR studies. The effects of temperature, reaction time and carnitine dosage on the polymerization process were examined. Aliphatic polyesters are typical biomaterials, commonly used in medicine and pharmacy because of their good biocompatibility and lack of toxicity. The majority of the products are composed of homo- and copolymers of lactides (LA, LLA) and ε-caprolactone (CL) [1,2,14,15]. Aliphatic polyesters are usually prepared by ring-opening polymerization (ROP) of the relevant cyclic monomers (e.g., D,L-lactide, L,L-lactide, ε-caprolactone; abbreviated LA, LLA and CL, respectively). PLA, PLLA and PCL have been successfully synthesized by ring-opening polymerization in the presence of cationic or anionic initiators, as well as coordinating and enzymatic catalysts. Tin octoate (SnOct2) is probably the most frequently used catalyst in the polymerization of cyclic esters. L-Carnitine (L-CA) is a hydrophilic amino acid derivative, naturally occurring in human cells. The compound is biosynthesized endogenously in the kidneys and liver from lysine and methionine, but it can also be obtained from red meat and dairy products in the diet. L-Carnitine plays an essential role in the transfer of long-chain fatty acids into mitochondria for beta-oxidation. Furthermore, L-carnitine binds acyl residues and helps in their elimination, decreasing the number of acyl residues conjugated with coenzyme A (CoA) and increasing the ratio between free and acylated CoA. Carnitine deficiency is a pathologic metabolic state in which carnitine concentrations in plasma and tissues are lower than the levels required for the normal functioning of the organism [39]. Recently, we found that natural amino acids are satisfactory initiators for the ROP of cyclic esters [34]. In the present paper, we describe a new, effective synthesis of low-molecular-weight aliphatic polyesters. It involves the ring-opening polymerization of D,L-lactide, L,L-lactide and ε-caprolactone in the presence of an L-carnitine/SnOct2 system. We believe that the polymers thus obtained can find practical application as effective drug delivery systems.

Results and Discussion

The homo- and copolymerization reactions of CL, LA and LLA were carried out in the presence of the CA/SnOct2 (2:1) system at 120-160 °C. The molar ratio of CA to a given monomer was 1:25, 1:50 or 1:100. Reaction conditions, yields and average molecular mass values of the polyesters are summarized in Table 1. (Table 3 listed the main IR absorption bands of the synthesized polyesters, recorded from KBr pellets; only its caption and a fragment of the poly(ε-caprolactone) entry, υas CH2 at 2943 cm-1, survive extraction.) The insertion of the carnitine fragment into the polymer chain was confirmed by proton NMR spectral analysis. Peaks at 2.83 (-CH2COOH), 3.51 ((CH3)3N+-) and 3.43 (-CH2N+-) ppm were observed in all products obtained by the homo- and copolymerization of CL, LA and LLA in the presence of the carnitine/SnOct2 system. The composition of the CL/LA (PCLLA) copolymers was deduced from the 1H-NMR spectra. The CL content in the copolymer of CL and LA exceeded the CL feed ratio, amounting to 56-58 mol% in PCLLA; CL is probably the more reactive co-monomer in this reaction.
The MALDI-TOF spectra of PCL contain double peaks, each component corresponding to a separate spectral series. The most prominent series of peaks is characterized by a mass increment of 114 Da, which is equal to the mass of the repeating unit of poly(ε-caprolactone) (Figure 3). It is assigned to PCL terminated with a hydroxyl group (residual mass: 57 Da, K+ adduct) (A). The second series of peaks also corresponds to poly(ε-caprolactone) terminated with a hydroxyl group (residual mass: 40 Da, Na+ adduct) (B). In the MALDI-TOF spectra of PLA there are also two series of peaks. The main series corresponds to PLA molecules terminated with a hydroxyl group (residual mass: 41 Da, Na+ adduct), and the second series of smaller peaks corresponds likewise to PLA terminated with a hydroxyl group (residual mass: 57 Da, K+ adduct). In the MALDI-TOF spectrum of PLA, chain populations with both even and odd numbers of lactic acid monomeric units (m.u.) can be observed. The odd numbers of acid m.u. show that under the reaction conditions the polymer chain undergoes intermolecular transesterification (leading to an exchange of segments), which is a typical phenomenon in the polymerization of lactides [18]. The molecular mass of PCL, PLA and PLLA depends on the monomer/carnitine molar ratio (Table 1). The influence of the monomer/carnitine feed ratio on the molecular weight of the polyesters was studied at three levels (25:1, 50:1, 100:1). As shown in Table 1, the PCL products were obtained with Mn (from GPC) of 1800, 3800 and 6600 Da for PCL-2, PCL-3 and PCL-5, respectively. For PLA, Mn (from GPC) amounts to 1700, 3200 and 4600 Da for PLA-1, PLA-2 and PLA-5, respectively. The molar mass of the polyesters thus increased with the monomer/carnitine feed ratio. On the other hand, the monomer conversion of PCL and PLA tended to decrease with increasing monomer/carnitine feed ratio: for PCL-2, PCL-3, PCL-5, PLA-1, PLA-2 and PLA-5, the corresponding conversion values were 85%, 71%, 67%, 62%, 53% and 36%, respectively. The reaction yield was determined gravimetrically. The homo- and copolymerization reactions of CL, LA and LLA were repeated twice for each combination, and the results were in good agreement with one another (reproducibility was about 5-10%). Both the conversion and the molecular mass of the polymers increased when the reaction temperature was raised from 120 to 160 °C. The averaged molecular mass values of the obtained polymers were roughly in agreement with the theoretical molecular weights calculated from the feed ratio of the monomer to carnitine, as well as with the number-average molecular masses determined by MALDI-TOF and GPC. Finally, it should be mentioned that the carnitine/SnOct2 system was quite effective in the polymerization of ε-caprolactone, L-lactide and rac-lactide: the yield of PCL was in the range of 62-93%, and that of PDLA in the range of 30-68%. Relevant kinetic and mechanistic studies are underway and will be presented in the next paper.

Polymerization Procedure

Polymerizations of homo- and copolymers of cyclic esters were carried out in the same way. The monomers (CL, LA, LLA) and CA were placed in 10 mL glass ampoules under an argon atmosphere. The reaction vessels were then left standing at the required temperature in a thermostated oil bath for the appropriate time (Table 1).
When the reaction was complete, the cold product was dissolved in dichloromethane, and the resulting solution was washed with methanol and dilute hydrochloric acid (5% aqueous solution) under vigorous stirring; the latter operation was repeated three times. The isolated powdery or oily polymer was dried in vacuum for 72 h. The purity of the isolated polymers was checked by 1H-NMR.

Measurements

The polymerization products were characterized by means of 1H- and 13C-NMR (Varian 300 MHz) and FT-IR spectroscopy (Perkin-Elmer). The NMR spectra were recorded in CDCl3; the IR spectra were measured from KBr pellets. Relative molecular mass values and molecular mass distributions were determined using MALDI-TOF and gel permeation chromatography (GPC). The MALDI-TOF spectra were measured in the linear mode on a Kompact MALDI 4 Kratos analytical spectrometer using a nitrogen gas laser and 2-[(4-hydroxyphenyl)diazenyl]benzoic acid (HABA) as a matrix. Molecular mass values and molecular mass distributions of the polymers were determined at 308 K on a Lab Alliance gel permeation chromatograph equipped with Jordi Gel DVB Mixed Bed (250 mm x 10 mm) columns and a refractive index detector, using THF or chloroform as eluent (1 mL/min). The molecular mass scale was calibrated with polystyrene standards.
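As a quick arithmetic check on the statement above that the measured molecular masses roughly match the theoretical values calculated from the monomer-to-carnitine feed ratio, the short script below recomputes two examples from the data quoted in the text. This is an illustrative sketch, not part of the original paper: it assumes one growing chain per carnitine molecule, Mn proportional to conversion, and one carnitine residue (161 Da) retained per chain; the 114 Da CL repeat mass is taken from the MALDI discussion above, while the 144 Da lactide molar mass is general knowledge rather than a value stated in the text.

```python
# Hedged back-of-the-envelope estimate of theoretical Mn for two samples
# described above (PCL-3 and PLA-2). Assumptions (not from the paper):
# one chain per carnitine, Mn scales linearly with conversion, and each
# chain carries one carnitine end group (161.2 g/mol).
def mn_theoretical(feed_ratio, conversion, monomer_mass, end_group=161.2):
    return feed_ratio * conversion * monomer_mass + end_group

samples = {
    # name: (monomer/CA feed ratio, conversion, monomer molar mass / Da, Mn from GPC / Da)
    "PCL-3": (50, 0.71, 114.1, 3800),
    "PLA-2": (50, 0.53, 144.1, 3200),
}

for name, (ratio, conv, m_mon, mn_gpc) in samples.items():
    mn_calc = mn_theoretical(ratio, conv, m_mon)
    print(f"{name}: Mn(theor) ~ {mn_calc:.0f} Da, Mn(GPC) = {mn_gpc} Da")
# Output: PCL-3 ~ 4212 vs 3800 Da; PLA-2 ~ 3980 vs 3200 Da -- the same order
# of magnitude and the same trend, consistent with "roughly in agreement",
# bearing in mind that GPC is calibrated against polystyrene standards.
```

The same arithmetic supports the MALDI end-group assignment quoted above: a PCL chain of n repeat units with H and OH end groups plus a K+ adduct has a mass of n x 114 + 18 + 39 Da, i.e., a residual mass of 57 Da, exactly as observed for the main PCL series.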
2015-09-18T23:22:04.000Z
2009-02-01T00:00:00.000
{ "year": 2009, "sha1": "47e325dfd98f9d76bd90b6655d6dfba165658d7a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/14/2/621/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "47e325dfd98f9d76bd90b6655d6dfba165658d7a", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
221164120
pes2o/s2orc
v3-fos-license
Intrinsic control of neuronal diversity and synaptic specificity in a proprioceptive circuit

Relay of muscle-derived sensory information to the CNS is essential for the execution of motor behavior, but how proprioceptive sensory neurons (pSNs) establish functionally appropriate connections is poorly understood. A prevailing model of sensory-motor circuit assembly is that peripheral, target-derived cues instruct pSN identities and patterns of intraspinal connectivity. To date, no intrinsic determinants of muscle-specific pSN fates have been described in vertebrates. We show that expression of Hox transcription factors defines pSN subtypes, and these profiles are established independently of limb muscle. The Hoxc8 gene is expressed by pSNs and motor neurons (MNs) targeting distal forelimb muscles, and sensory-specific depletion of Hoxc8 in mice disrupts sensory-motor synaptic matching without affecting pSN survival or muscle targeting. These results indicate that the diversity and central specificity of pSNs and MNs are regulated by a common set of determinants, thus linking early rostrocaudal patterning to the assembly of limb control circuits.

Introduction

Sensory-motor circuits within the spinal cord are essential for the coordinated control of limb muscle. In mammals, proprioceptive sensory neurons (pSNs) process a continuous stream of data from approximately 50 forelimb or hindlimb muscles and relay this information to the appropriate circuits tasked with orchestrating motor behaviors (Imai and Yoshida, 2018; Tuthill and Azim, 2018). The orderly arrangement of connections formed between pSNs and spinal circuits enables animals to seamlessly perform a vast repertoire of limb movements. A single pool of motor neurons (MNs) innervates an individual muscle and receives instructive feedback not only from sensory neurons but also from descending supraspinal inputs and local spinal interneurons (Arber, 2012; Plant et al., 2018). These inputs collectively modify the pattern of MN activity, thereby choreographing appropriate muscle activation sequences during behavior. The simplest input pathway to MNs is the monosynaptic reflex arc, composed of a limb muscle innervated by a pool of alpha MNs and type Ia pSNs with stretch-sensing mechanoreceptor endings embedded within muscle spindles. Type Ia pSNs are uniquely poised to provide direct and immediate muscle status information through the monosynaptic connections they establish with MNs. During development, pSN axons navigate through the spinal cord, preferentially contacting MN pools innervating the same peripheral target while avoiding MNs of functionally antagonist muscles (Eccles et al., 1957; Mears and Frank, 1997). These connections are remarkably selective, as a single pSN establishes monosynaptic connections with each of the ~50-300 MNs that supply the same muscle target (Mendell and Henneman, 1968). While the mechanisms of pSN central specificity are largely unknown, they appear to be established independently of patterned neural activity (Mendelsohn et al., 2015; Mendelson and Frank, 1991), suggesting that pSN-MN matching relies on genetic programs acting during neural development. After sensory neurons are born, the nascent neurons advance through a hierarchical process of diversification in which the expression of specific genes coincides with the acquisition of specialized neuronal characteristics, including peripheral target specificity and central projection pattern (Dasen, 2009; Lallemend and Ernfors, 2012).
Sensory neurons generated at spinal levels derive from neural crest cells, which coalesce outside the CNS to form the dorsal root ganglia (DRG) (Butler and Bronner, 2015). As DRG develop, most sensory neurons co-express the homeodomain transcription factors Isl1 and Brn3a, which are necessary for the deployment of pan-sensory neuron genetic programs (Dykes et al., 2011). At these early stages, pSNs can be discriminated from other sensory classes by expression of the transcription factors Runx3 and Etv1, the neurotrophin receptor Ntrk3, and the calcium-binding protein Parvalbumin (PV). Genetic studies in mice indicate that Runx3 and Etv1 are essential for establishing and maintaining core features of pSN identity, including their survival and ability to extend central axons into the ventral spinal cord (Arber et al., 2000; Inoue et al., 2002). While the transcriptional programs governing features common to all pSNs have been characterized, understanding later developmental facets of sensory neuron specification, such as muscle target specificity and central connectivity, has been particularly challenging. In contrast to the topographic arrangement of spinal MN subtypes, sensory neurons of different modalities and attributes are intermixed within a DRG, with no clear organizational pattern aside from the restricted expression of early determinants involved in establishing broader sensory neuron class identities (Honig et al., 1998; Jessell et al., 2011). A dearth of molecular markers for more nuanced neuronal features has made it challenging to characterize how pSNs and other sensory modalities further diversify into specific subtypes. One particular gap in our understanding is how the specificity of central connections between pSNs and MNs of the same muscle is achieved, since pSN axons must distinguish between vast numbers of potential postsynaptic targets within the ventral spinal cord. A significant contributing factor to the specificity of connections in sensory-motor circuits is the recognition of specific MN subtypes by pSN central afferents. As the limb develops, a network of ~20 Hox transcription factors determines the molecular identities and peripheral target specificities of lateral motor column (LMC) neurons dedicated to limb control. Mutation of genes acting downstream of Hox function in MNs, including the transcription factor Pea3 and the synaptic-specificity determinant Sema3e, leads to a disruption of the normal pattern of central connections between pSNs and MNs (Fukuhara et al., 2013; Pecho-Vrieseling et al., 2009; Vrieseling and Arber, 2006). In addition, genetic transformation of thoracic MNs to a limb-level LMC fate, through mutation of the Hoxc9 gene, results in the formation of ectopic synaptic connections between limb pSNs and axial muscle-innervating MNs (Baek et al., 2017). By contrast, after MN-specific deletion of the Foxp1 gene, which encodes a factor required for all Hox activity in limb MNs, pSN axons maintain appropriate termination patterns within the ventral spinal cord (Sürmeli et al., 2011). However, because MN topographic organization is scrambled in Foxp1 mutants, limb pSNs form connections with inappropriate MN subtypes. The preservation of the pSN central projection pattern in Foxp1 mutants suggests that pSNs acquire intrinsic features that enable them to target particular dorsoventral domains within the spinal cord.
While these studies provide evidence for an essential role of MN subtype identity in establishing sensory-motor synaptic specificity, the mechanisms that determine the central pattern of pSN postsynaptic connections are poorly understood. In contrast to MN specification, where key developmental features emerge largely independently of peripheral cues, sensory neuron development relies on extrinsic signals provided by limb mesenchyme and muscle (Arber, 2012; Sharma et al., 2020; Wu et al., 2019). Expression of the Ntrk3 receptor renders pSNs sensitive to peripheral neurotrophin-3 (Ntf3) signaling, and both Ntrk3 and Ntf3 are essential for the differentiation and survival of pSNs (Chen et al., 2003). Ntf3/Ntrk3 signaling regulates expression of Etv1 and Runx3, and muscle-by-muscle differences in the level of Ntf3 expression appear to contribute to pSN subtype diversity (de Nooij et al., 2013; Patel et al., 2003; Wang et al., 2019). Moreover, it has been shown that signals originating from the limb mesenchyme can trigger expression of genes that mark muscle-specific pSN subtypes (Poliak et al., 2016). While certain molecular features common to all pSNs have been shown to be limb-independent (Chen et al., 2002), whether pSN diversity and synaptic specificity rely on neuronal-intrinsic specification programs remains to be determined. As such, there are currently no known fate determinants of muscle-specific pSNs. We considered the possibility that the same Hox-dependent regulatory networks employed to specify spinal MN subtypes also contribute to the diversification of pSNs during sensory-motor circuit assembly. We show that selective expression of Hox proteins defines pSN populations generated at specific rostrocaudal levels, paralleling Hox expression in spinal MNs. Expression of Hox genes is maintained in both pSNs and MNs after removal of the developing limb bud, indicating that the neuronal Hox pattern is initially established independently of target-derived cues. We found that distal forelimb flexor muscles, and the MNs that innervate them, are targeted by pSNs expressing the Hoxc8 gene. In the absence of Hoxc8 function, forelimb pSNs establish ectopic monosynaptic contacts on MNs innervating functionally antagonist forelimb muscles. These studies provide evidence for a neuronal-intrinsic program in which the selective activities of Hox proteins encode key features of pSN diversification and target selectivity.

Results

Hox expression delineates subpopulations of pSNs along the rostrocaudal axis

To explore a potential role for Hox genes in pSN diversification, we analyzed the expression of individual Hox proteins in spinal DRG during the early phases of sensory neuron development (Figure 1). We examined Hox protein expression in relation to Runx3, Etv1, and PV, three markers predominantly restricted to pSN subtypes. Because the patterns of Hox gene expression within the spinal cord are most thoroughly characterized in cervical (C) segments (Catela et al., 2016; Lacombe et al., 2013), we focused on the pattern of Hox expression in DRG generated between C2 and C8. We began by analyzing the DRG expression of a subset of Hox4-Hox8 paralog genes between E12.5 and E14.5, stages at which pSN axons have reached their muscle targets and central afferents have begun to invade the dorsal spinal cord (Hippenmeyer et al., 2002; Kramer et al., 2006). We found that subpopulations of cervical DRG neurons selectively co-expressed Hox proteins and molecular markers of pSN identity (Figure 1, Table 1, Figure 1-figure supplement 1).
Hox proteins expressed by pSNs included Hoxc4, Hoxa5, Hoxc6, Hoxa7, and Hoxc8, which also collectively define forelimb LMC neuron diversity (Figure 1a-e; Dasen et al., 2005). Each of these Hox proteins was detected at cervical levels and/or rostral thoracic segments, but was not present in caudal thoracic or lumbar DRG (data not shown). Within individual cervical DRG, Hox proteins were expressed by a subset of pSNs, and the fraction of pSNs expressing a given Hox gene within a single DRG varied along the rostrocaudal axis (Figure 1-figure supplement 1a). Interestingly, members of the Hoxb gene cluster, including Hoxb4, were also restricted to cervical segments, but appeared to be more broadly expressed by DRG classes (Figure 1-figure supplement 1b, data not shown). These observations indicate that members of the Hoxa and Hoxc gene clusters are expressed by subsets of cervical pSNs.

Within a single DRG, a proportion of pSNs also demonstrated co-expression of multiple Hox proteins. For example, within individual cervical DRG, a subset of Hoxc8 + cells coexpressed Hoxa7, a subset of Hoxa5 + pSNs co-expressed Hoxc4, and a subset of Hoxc6 + pSNs expressed Hoxc8 (Figure 1f,g, Figure 1-figure supplement 1c). Furthermore, DNA-binding cofactors known to be essential for Hox activity (Merabet and Mann, 2016), including Meis2 and Pbx3, were detected in pSNs (Figure 1h, Figure 1-figure supplement 1d). Both Meis2 and Pbx3 were expressed by pSNs but also observed in non-proprioceptive sensory neuron subtypes, and lacked rostrocaudal specificity. These observations suggest that the combinatorial actions of Hox proteins and their co-factors could account for subtype diversity of cervical pSNs.

We next compared the expression of individual Hox genes in sensory neurons along the rostrocaudal axis of the spinal cord. While certain Hox genes are coexpressed within the same sensory neuron, others demonstrate clear boundaries from one another and do not co-localize, despite being expressed in the same sensory class. For example, Hoxa5 expression was confined to pSNs in rostral cervical segments (C2-C5) while Hoxc8 expression was restricted to caudal cervical and rostral thoracic DRG (C6 to T2) (Figure 1i,j). Thus, Hoxa5 and Hoxc8 expression by pSNs is mutually exclusive and mirrors the restricted expression pattern of Hoxa5/Hoxc8 in forelimb-innervating MNs. These observations indicate that pSN subtypes can be delineated by differential Hox gene expression.

Neuronal expression of Hox genes is initially limb-independent

During the early stages of neural tube development, expression of Hox genes is initiated by secreted morphogens acting on progenitors along the rostrocaudal axis (Bel-Vialar et al., 2002;Dasen et al., 2003;Liu et al., 2001). These early patterning signals induce Hox expression in the neural tube prior to limb bud formation. By contrast, studies of limb and non-limb innervating pSNs have shown that the molecular identities and central projection patterns of pSNs are established and maintained through extrinsic, target-derived, signals (de Nooij et al., 2013;Poliak et al., 2016). These findings raise the question of what the relative contributions of early patterning signals and target-derived cues are in regulating Hox expression in pSNs. To answer this question, we used limb-bud ablation assays in chick embryos to determine whether Hox expression in pSNs persists after removal of signals provided by limb mesenchyme and muscle.
We first examined whether the expression of Hox proteins in sensory neurons is conserved between mouse and chick. We found that Hoxa5, Hoxc6, and Hoxc8 were selectively expressed by forelimb-level DRG by st31 (equivalent to E13.5 in mouse) (Figure 2a,b). As in mouse DRG, Hoxa5 was expressed by rostral cervical DRG, Hoxc8 was expressed by caudal cervical sensory neurons, while Hoxc6 was expressed in both rostral and caudal cervical DRG (Figure 2a,b). Co-staining with the pSN-restricted marker Runx3 indicated that Hox proteins are expressed by pSNs in chick, with some notable differences from mouse. In chick, Hoxa5 was broadly expressed by rostral cervical sensory neurons, while Hoxc8 was detected in a smaller fraction of caudal cervical pSNs (Figure 2-figure supplement 1a-c). The reduced number of Hoxc8 + pSNs in chick versus mouse likely reflects evolutionary changes in the distribution and function of avian and rodent forelimb muscle.

We next unilaterally ablated the forelimb bud of chick embryos at stage (St) 16-18, a phase when the initial rostrocaudal profiles of Hox gene expression have been established, but prior to the appearance of postmitotic pSNs and MNs (~E8.5 in mouse) (Figure 2c). After limb bud extirpation, embryos were allowed to continue to develop for 3 days (to ~st26-28 [~E11.5-12.5 in mouse]). At this age, all MNs and pSNs have been generated, but have not reached the phase where they rely on limb-derived neurotrophic support. To confirm successful removal of limb-derived cues, we examined expression of the ETS protein Pea3, which is expressed by subsets of pSNs and MNs in a limb-dependent manner (Lin et al., 1998). After forelimb bud ablation, the number of sensory neurons and MNs expressing Pea3 markedly decreased relative to the non-ablated side (Figure 2e). The fraction of Isl1 + SNs expressing Pea3 was reduced to 5.5 ± 0.1% (mean ± SEM), compared to 31.8 ± 3.6% in controls (p<0.0001, Student's t-test) (Figure 2f). In addition, the number of SNs expressing Runx3 was reduced from 15.5 ± 2.4% in controls to 7.3 ± 0.9% after limb ablation (p=0.0038, Student's t-test) (Figure 2-figure supplement 1d). The decrease in Pea3 expression was not a result of the general loss of sensory neurons, as the number of Isl1 + DRG neurons, a pan-sensory neuron marker, was comparable between the ipsilateral and contralateral sides of the ablated limb (Figure 2e).

In contrast to the loss of Pea3, expression of Hoxa5, Hoxc6, and Hoxc8 was unchanged in both sensory neurons and MNs after forelimb removal (Figure 2g-i, Figure 2-figure supplement 1a-c, e-g). The fraction of Isl1 + SNs expressing Hoxa5 (44.3 ± 4.7% in controls, 50.7 ± 3.2% ablated, p=0.27, Student's t-test), Hoxc6 (26.8 ± 2.0% controls, 26.3 ± 3.0% ablated, p=0.90), and Hoxc8 (6.4 ± 0.9% controls, 7.6 ± 0.8% ablated, p=0.32) was not significantly changed (Figure 2j,k,l). Because expression of the pSN markers Runx3 and Etv1 is reduced after limb ablation, we were unable to quantify the fraction of pSNs that retain Hox expression. Nevertheless, these results are consistent with a model in which the pattern of Hox gene expression in sensory neurons and MNs is initiated through intrinsic genetic programs that operate independent of limb-derived cues.
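The fraction-based readout used here and throughout the study (marker-positive cells as a share of Isl1 + SNs per section, compared by an unpaired t-test) can be illustrated with a minimal Python sketch. This is not the authors' pipeline (counts were made in Fiji/ImageJ and tested in Prism), and the per-section counts below are invented for illustration only.

import numpy as np
from scipy import stats

# hypothetical per-section counts: (marker+ Isl1+ cells, total Isl1+ cells)
control_sections = [(32, 100), (28, 95), (35, 110), (30, 102)]
ablated_sections = [(6, 105), (5, 98), (7, 112), (6, 101)]

def fractions(sections):
    # per-section fraction of Isl1+ SNs expressing the marker
    return np.array([pos / total for pos, total in sections])

ctrl, abl = fractions(control_sections), fractions(ablated_sections)
t_stat, p_val = stats.ttest_ind(ctrl, abl)  # unpaired Student's t-test
print(f"control: {ctrl.mean():.1%} ± {stats.sem(ctrl):.1%} (SEM)")
print(f"ablated: {abl.mean():.1%} ± {stats.sem(abl):.1%} (SEM), p = {p_val:.4g}")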
Hoxc8 expression in type-Ia pSNs during sensory-motor circuit maturation

To further examine the contribution of Hox genes to the diversification of sensory neuron subtypes, we performed a detailed characterization of Hox protein expression in relation to the ontogeny of sensory-motor circuit development (Figure 3). We focused our studies on Hoxc8 for this analysis, due to its central role in establishing the molecular identities and muscle-target specificity of MNs innervating distal forelimb muscles of mouse and chick (Catela et al., 2016;Dasen et al., 2005). To determine at which phase of sensory-motor circuit development Hoxc8 might be required, we analyzed the ontogeny of Hoxc8 protein expression in pSNs in mouse. Since the levels of Hoxc8 protein expression in the CNS attenuate at later stages of embryonic development, we utilized a conditional Hoxc8 allele in which a LacZ reporter is expressed upon Cre-dependent excision of the Hoxc8 coding sequence (Blackburn et al., 2009;Catela et al., 2016). To achieve sensory neuron-restricted LacZ reporter expression, we crossed Hoxc8 LacZ-flox/+ mice to a PLAT::Cre line. This line drives Cre expression in the neural crest cells from which spinal sensory neurons are derived, but is excluded from neurons in the central nervous system (Pietri et al., 2003). This breeding strategy allowed us to unambiguously identify Hoxc8 + pSNs at later postnatal stages, due to the persistence of bGal protein expression. At E14.5-E15.5, all bGal-positive cells expressed Hoxc8 and Runx3 proteins.

To further confirm the specificity of Hoxc8 in pSNs, we examined its expression in relation to Etv1 and PV between segmental levels C6-T1 at E14.5. The majority (>90%) of Hoxc8 + neurons expressed Etv1 and PV, consistent with a pSN-restricted expression pattern (Figure 3g, Figure 3-figure supplement 1a,b). At these segmental levels, however, Hoxc8 was expressed by only ~50% of the total PV + population (Figure 3g). Between segments C6-C8, 15-20% of PV + sensory neurons have been shown to co-express Ret, a marker for a subset of cutaneous sensory neurons (Niu et al., 2013). We found that all Hoxc8 + sensory neurons lacked Ret expression.
These results indicate that a fraction of cervical pSNs express Hoxc8, but that cervical PV + Ret + sensory neurons, likely cutaneous sensory neurons, are Hoxc8 - (Figure 3g,h).

Peripheral muscle target specificity of Hoxc8 + pSNs

In spinal MNs, the profile of Hox expression along the rostrocaudal axis is correlated with the position of muscles along the proximal-distal and anterior-posterior axes of the limb. Rostral cervical Hoxa5 + LMC neurons typically innervate more anterior/proximal forelimb muscles, while caudal cervical Hoxc8 + MNs project to distally and/or posteriorly located forelimb muscles (Catela et al., 2016). Because the rostrocaudal profile of Hox genes in pSNs mirrors that of spinal MNs, and Hoxc8 + MNs are known to innervate distal forelimb muscles, we sought to evaluate the muscle target selectivity of Hoxc8 + pSNs.

To identify the peripheral muscle targets of Hoxc8 + pSNs, we labeled pSNs through intramuscular injection of Cholera toxin B subunit (CTB) and examined Hoxc8/bGal protein expression in retrogradely labeled neurons. We performed retrograde tracing assays on nine specific forelimb muscles, ranging from proximal to distal positions, and varying in size, but sharing a common role in controlling arm, wrist, or digit movement (Figure 4a). In the distal forelimb, flexor muscles are positioned ventrally and act to adduct the wrist and flex the digits. Conversely, distal extensor muscles reside dorsally and act as antagonists to distal flexors. Although each of the major forelimb-controlling muscles was injected and processed for analysis, a few were excluded due to inaccessibility, as deeper muscles would require the removal of the overlying musculature. CTB was injected into single forelimb muscles of wildtype and PLAT::Cre; Hoxc8 LacZ-flox/+ mice at P4, thereby retrogradely labeling sensory neuron afferent fibers that have taken up CTB tracer through direct contact with muscle (N ≥ 3 animals/muscle). Spinal cords with attached DRG were then isolated at P7 to assess representative populations of sensory neurons innervating the injected muscle. Injections were performed no earlier than P4 due to the thin size of the distal forelimb muscles as well as the inefficiency of neonatal CTB labeling, a probable outcome of lower expression levels of the CTB receptor in neonates (Yu, 1994). The coincidence of CTB/Hoxc8/PV and CTB/bGal/PV labeling was then analyzed in DRG to determine if the muscle received innervation from Hoxc8 + pSNs.

With the exception of the biceps brachii, all of the injected flexor muscles were found to be innervated by predominantly Hoxc8 + pSNs, including the pectoralis major (PM), flexor carpi ulnaris (FCU), palmaris longus (PL), and flexor digitorum profundus (FDP). For the PM, FCU, PL, and FDP, ~70% or more of the total CTB-labeled pSNs were Hoxc8 + (Figure 4f,k, Figure 4-figure supplement 1a-e). The biceps brachii is proximally located in relation to the limb and innervated by Hox5 + MNs, while the latter three flexors inhabit the distal forelimb and are supplied by Hoxc8 + median and ulnar MNs (Catela et al., 2016). The PM is also considered a proximal forelimb muscle, though it is one of the largest arm flexion-controlling muscles, responsible for a wide range of arm movements.
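As a toy illustration of this co-labeling readout (the counts are invented, not source data; the study's inclusion criterion of at least 10 PV + CTB-labeled neurons per animal is applied), the Hoxc8 + fraction for one injected muscle can be computed by pooling section counts:

# each tuple: (Hoxc8+ CTB+ PV+ cells, total CTB+ PV+ cells) in one DRG section
sections = [(9, 12), (11, 15), (8, 11), (10, 14)]  # hypothetical FCU-like counts

hoxc8_pos = sum(pos for pos, _ in sections)
total_ctb = sum(tot for _, tot in sections)
assert total_ctb >= 10  # minimum labeling criterion used in the study

fraction = hoxc8_pos / total_ctb
print(f"Hoxc8+ fraction of CTB-labeled pSNs: {fraction:.0%}")
# a value of ~70% or more, as reported for the PM, FCU, PL, and FDP, marks the
# muscle as receiving predominantly Hoxc8+ proprioceptive innervation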
Of the injected extensor muscles, including the proximally located triceps (Tri) and the distally positioned extensor carpi radialis (ECR) and extensor digitorum (ED), a small to negligible percentage (3-6%) of CTB-labeled pSNs expressed Hoxc8 (Figure 4g-k, Figure 4-figure supplement 1f-i). After injection into the spinodeltoideus, 32% of labeled neurons expressed Hoxc8, possibly reflecting innervation by pSNs with a mixed molecular profile. These results indicate that Hoxc8 + pSNs preferentially target muscles involved in distal forelimb flexion.

Sensory neuron survival and differentiation in Hoxc8 SND mice

We next evaluated the function of Hoxc8 during pSN development by generating homozygous Hoxc8 LacZ-flox/LacZ-flox mice expressing PLAT::Cre (referred to henceforth as Hoxc8 SND mice). In Hoxc8 SND animals, expression of Hoxc8 protein is selectively removed from SNs but maintained by neurons within the spinal cord (Figure 5-figure supplement 1a). Because Hoxc8 has been shown to be essential for the survival of a subset of caudal cervical LMC neurons after E12.5 (Catela et al., 2016;Tiret et al., 1998), this raised the possibility that Hoxc8 is similarly involved in the selective survival or maintenance of caudal cervical pSNs during embryogenesis. Alternatively, since Hoxc8 expression persists through the first postnatal week in sensory neurons, this suggests a potential function in later aspects of pSN maturation and connectivity. We therefore examined the function of Hoxc8 during midgestation (E14.5-E15.5) and postnatally (P4-P7).

To clearly visualize Hoxc8 + populations at postnatal stages, we used the inserted LacZ reporter, which expresses bGal in lieu of Hoxc8, enabling us to track the fate of pSNs lacking Hoxc8. We compared the number of bGal + cells between Hoxc8 LacZ/+ and Hoxc8 SND animals in DRG C8, where a subset of Hoxc8 + pSNs reside. At P7, the percentage of PV + pSNs that expressed bGal was similar between control and Hoxc8 SND animals (42 ± 3% in N = 33 sections from three control mice, versus 45 ± 3% in N = 32 sections from three Hoxc8 SND mice, p=0.43, Student's t-test) (Figure 5a,c). We also compared the fraction of PV + pSNs that expressed Isl1, which was also unchanged (22 ± 1% in N = 68 sections from three control animals, versus 23 ± 1% in N = 74 sections from three Hoxc8 SND mice, p=0.59, Student's t-test) (Figure 5b,c). Moreover, the distribution of sensory neurons expressing Etv1, PV, and Isl1 was grossly unchanged in Hoxc8 SND animals at E15.5 compared to that of control animals (Figure 5-figure supplement 1e,f). All bGal + cells also lacked Ret expression in Hoxc8 SND animals, indicating that their fate had not been switched to that of PV + cutaneous sensory neurons (Figure 5-figure supplement 1g,h). These observations indicate that Hoxc8 is not required for the survival or maintenance of early pSN molecular features.

To determine if Hoxc8 is necessary for the ability of cervical pSNs to innervate their normal forelimb muscle targets, we examined the formation of muscle spindles in the palmaris longus (PL) and flexor carpi ulnaris (FCU), two distal forelimb flexor muscles that normally receive input from Hoxc8 + pSNs. We found no discernible difference in the pattern of PV or vGluT1, which accumulate on the peripheral terminals of pSNs, indicating that PL and FCU muscle connectivity is unaltered in the absence of Hoxc8 (Figure 5d). We also tested whether the FCU receives innervation from appropriate pSN subtypes in the absence of Hoxc8.
We injected CTB into the FCU of Hoxc8 SND mice at P4 and collected spinal cords with attached DRG at P7. We found that the fraction of pSNs that were bGal + CTB + was similar between controls and Hoxc8 SND mice (65.2 ± 5.1% for N = 4 controls; 62.3 ± 7.6% for N = 3 Hoxc8 SND mice, p=0.75, Student's t-test) (Figure 5e,f). Loss of Hoxc8 therefore does not preclude the ability of cervical pSNs to reach their normal muscle targets, demonstrating that Hoxc8 is not essential for pSN peripheral projection and target specificity.

Genetic ablation of early pSN fate determinants, including Runx3 or Etv1, leads to a marked reduction in the extension of pSN central afferents into the ventral spinal cord (Arber et al., 2000;Inoue et al., 2002). Since deletion of Hoxc8 does not affect pSN survival or peripheral innervation, we next asked whether Hoxc8 is required for the ventral extension of pSNs towards MNs. Hoxc8 + pSN projections originating from DRG C8 terminate within the ventral spinal cord predominantly at this same segmental level (Baek et al., 2017). Thus, a noticeable loss of projections to the ventral spinal cord would be evident at this segmental position. We used PV labeling to measure the density of pSN collateral projections terminating in the ventrolateral area of the spinal cord. We observed no difference in the mean pixel intensity of PV fibers innervating the region occupied by forelimb MNs between Hoxc8 SND mice and controls (82.1 ± 2.4 for N = 3 controls; 81.1 ± 1.3 for N = 3 Hoxc8 SND mice, p=0.74, Student's t-test) (Figure 5g,h). Collectively, these results indicate that Hoxc8 is dispensable for pSN survival, peripheral muscle target selection, and the ability of pSNs to extend central axons ventrally.

Altered topography of pSN central connections in Hoxc8 SND mice

We next considered the possibility that deletion of Hoxc8 disrupts the pattern of central connectivity between muscle-specific pSNs and MNs. To examine pSN synaptic specificity, we employed a modified rabies labeling strategy which directs monosynaptically-restricted anterograde transfer of virus from pSNs to neurons within the spinal cord (Zampieri et al., 2014). We bred Hoxc8 SND mice with a Cre-dependent line (Gt(ROSA)26Sor CAG-loxp-STOP-loxp-rabies-G-IRES-TVA mice, henceforth referred to as RGT) expressing two rabies helper proteins: TVA, an avian-specific receptor protein, which permits infection by rabies virus pseudotyped with EnvA, and a rabies glycoprotein, which allows transsynaptic transfer of the virus, both produced in sensory neurons following recombination using the PLAT::Cre line (Figure 6a). The injected RVΔG-mCherry-EnvA (RabV) virus lacks its own glycoprotein, rendering it incapable of spreading in the absence of the glycoprotein supplied in trans. Sensory-restricted Cre expression confines the spread of mCherry-expressing rabies from the injected muscle to the connected pSNs, and subsequently their monosynaptically-coupled postsynaptic partners, while preventing infection of MNs directly from the muscle. An advantage of utilizing this method is that the RabV labels the entire soma of infected neurons, making it relatively easy to identify coupled MNs. We tested the specificity of this tracing assay by injecting distal flexor muscles with RabV in both Cre + and Cre - RGT mice (Figure 6-figure supplement 1a-d).
In PLAT::Cre + RGT mice, RabV injected into flexor muscles labeled the connected pSNs as well as monosynaptically coupled MNs and interneurons, on the side ipsilateral to the injection, via the sensory neuron terminals. By contrast, no RabV-labeled sensory or spinal neurons were observed in control experiments where we injected modified rabies virus in RGT mice lacking Cre (Figure 6-figure supplement 1a-d). These results indicate that the rabies labeling of MNs is Cre-dependent and mediated through transsynaptic spread via sensory central terminals.

We used this labeling strategy to map the overall distribution of postsynaptic targets of pSNs targeting a specific limb muscle in both control and Hoxc8 SND RGT mice. We injected either the flexor carpi ulnaris (FCU) or the palmaris longus (PL) muscles with RabV, and mapped the location of transsynaptically labeled MNs, marked by choline acetyltransferase (ChAT). We then generated scatter plot and contour maps of the distribution of labeled RabV + ChAT + neurons (N = 3 animals). In control mice, injections into the FCU or PL labeled discrete clusters of RabV + /ChAT + neurons located in a dorsal region of the caudal LMC (Figure 6b,c), consistent with the location of the MN pools targeting these muscles (Bácskai et al., 2013). By contrast, in Hoxc8 SND mice rabies tracing from the FCU and PL labeled MNs that extended more ventrally and laterally within the LMC, which, of note, is typically the domain occupied by forelimb extensor MNs (Figure 6b,c). These qualitative observations suggest that Hoxc8 regulates the pattern of pSN connectivity within the ventral spinal cord, presenting the possibility that flexor pSNs lacking Hoxc8 may target inappropriate postsynaptic MN subtypes.

Ectopic synapses between flexor pSNs and extensor MNs in Hoxc8 SND mice

Because Hoxc8 expression is restricted to pSNs innervating distal forelimb flexor muscles, we next asked whether loss of Hoxc8 leads to inappropriate synapses onto distal forelimb extensor MNs. To examine this, we injected distal flexor muscles with RabV, while concurrently retrogradely labeling MNs through injection of CTB into distal extensor muscles (Figure 7a). If removal of Hoxc8 leads to an inappropriate coupling between flexor pSNs and extensor MNs, we would expect to observe colocalization of RabV + with CTB-labeled extensor MNs. Two distal extensor muscles were injected to maximize the possibility of finding ectopically connected MNs, and were also chosen based on their intrasegmental overlap with the motor pools of the injected flexors.
To ensure no cross-contamination of injected tracers, only superficial extensor muscles separated by at least three muscles from the injected flexors were chosen. We injected a distal forelimb flexor muscle, FCU or PL, with RabV, and retrogradely labeled both distal forelimb extensor carpi radialis and extensor digitorum MNs with CTB. In control PLAT::Cre + RGT mice, the set of mCherry-labeled flexor MNs, labeled through transsynaptic viral spread via flexor pSNs, was distinct from retrogradely labeled CTB extensor MNs (control: N = 9 mice; FCU: N = 5, PL: N = 4) (Figure 7b,d). This result is consistent with electrophysiological and anatomical studies showing that flexor pSNs do not synapse with extensor MN pools (Eccles et al., 1957;Zampieri et al., 2014). By contrast, in RGT Hoxc8 SND mice we observed ectopic connections originating from distal flexor pSNs onto distal extensor MNs (Hoxc8 SND : N = 7 mice; FCU: N = 4, PL: N = 3) (Figure 7b,d). We quantified the fraction of MNs with coincident detection of RabV/CTB/ChAT over the total number of RabV-labeled MNs in each injected animal. We found that in Hoxc8 SND mice in which the FCU was injected with rabies, 29 ± 5% of the total RabV/ChAT-labeled MNs colabeled with CTB (N = 4 mice), compared to 0 ± 0% in control animals (N = 5 mice, p<0.0001, Student's t-test) (Figure 7c). Similarly, in Hoxc8 SND mice in which the PL was injected with rabies, 37 ± 12% of the total RabV/ChAT-labeled MNs were CTB-labeled (N = 3 mice), compared to 0.8 ± 0.8% in control animals (N = 4 mice, p=0.02, Student's t-test) (Figure 7e). These percentages likely underrepresent the entire cohort of ectopically connected MNs, since only two of the distal extensor muscles were labeled with CTB.

To confirm that ectopic synapses were formed by distal flexor pSNs onto distal extensor MNs, we employed a more conventional labeling strategy using CTB and the fluorescent tracer Rhodamine-dextran (Rh-Dex). After intramuscular injection, Rh-Dex is taken up by MNs, as well as pSN afferents, but is not transported transganglionically, thus restricting central tracing to MN soma. CTB, however, transfers into the central sensory axon, and accumulates in vGluT1 + sensory boutons at the soma of synaptically-coupled MNs. Thus, after muscle injection we can compare pSN CTB labeling of vGluT1 synapses on Rh-Dex-labeled MNs. We injected the FCU with CTB and distal extensor muscles with Rh-Dex in RGT control and RGT Hoxc8 SND mice. Similar to the results of the rabies tracing assay, we observed the presence of ectopic synapses from CTB/vGluT1-labeled distal flexor pSNs onto distal extensor Rh-Dex + MNs (Figure 7-figure supplement 1a-c). Together, these results indicate that Hoxc8 plays an important role in pSNs during sensory-motor circuit development.

Discussion

Animals rely on internal neural representations of muscle position and activity in order to execute coordinated motor behavior (Akay et al., 2014;Mendes et al., 2013;Tuthill and Azim, 2018). In vertebrates, pSNs establish selective central connections with MNs innervating the same peripheral muscle, while avoiding MNs targeting functionally antagonistic muscles. Whether pSNs are intrinsically programmed to acquire muscle-specific identities that enable them to target appropriate central postsynaptic targets is largely unknown. A major roadblock in resolving the mechanisms of spinal sensory-motor circuit assembly has been a lack of molecular tools to study muscle-specific pSN subtype differentiation.
We found that pSNs innervating distal forelimb muscles can be defined by selective expression of Hox transcription factors, and these profiles are initiated independent of limb-derived cues. Additionally, Hox genes are critical in generating appropriate patterns of central connections between pSNs and MNs. We suggest that the coordinate activity of Hox genes in multiple neuronal classes plays a key role in establishing synaptic specificity within developing limb control circuits.

Hox genes and sensory neuron diversification

Hox transcription factors are well known intrinsic determinants of patterning and cellular identities along the rostrocaudal axis of metazoans (Mallo and Alonso, 2013;Philippidou and Dasen, 2013). Our results indicate that, in addition to their broad roles in determining rostrocaudal positional identities, Hox genes have neuronal class-specific functions associated with the development of limb sensory-motor circuits. Our findings reveal that a subset of Hox genes is selectively expressed by pSNs generated at specific segmental levels, and these patterns parallel the rostrocaudal profiles of Hox genes in the CNS. We found that the Hoxc8 gene is preferentially expressed by pSNs targeting distal forelimb flexor muscles, important for wrist and digit movement, reflecting the Hoxc8 expression domain in MNs. While our studies focused on a single Hox gene, it is likely that other forelimb-specific pSN subtypes can be similarly delineated by specific combinations of Hox4-Hox8 genes.

It is notable that pSNs express members of the Hoxa and Hoxc gene clusters, while Hoxb genes appear to be expressed by broader populations of sensory neurons, most of which are likely cutaneous. These patterns are reminiscent of the differential expression of Hox genes within the spinal cord, where Hoxb genes are typically expressed in dorsal populations, containing cutaneous sensory relay interneurons, while Hoxa and Hoxc genes are expressed by motor-related interneurons and MNs (Dasen et al., 2005;Graham et al., 1991;Sweeney et al., 2018).
Dorsoventral differences in Hox patterning appear to emerge developmentally, as Hox transcripts are initially expressed uniformly in neural progenitors along the dorsoventral axis (Liu et al., 2001). While the mechanisms that govern the later bias of Hoxa/c and Hoxb expression in muscle and cutaneous sensory systems are unclear, they could relate to the timing of differentiation. In the spinal cord, ventral motor-related postmitotic neurons are born prior to dorsal types, and DRG neurons appear to exhibit a similar proprioceptive-to-cutaneous temporal progression (Fariñas et al., 1996;Lawson and Biscoe, 1979). The dorsoventral restriction of genes within Hox clusters could provide a mechanism to diversify subtype identity across multiple sensory modalities. As cutaneous neurons are known to terminate in the dorsal spinal cord, the coordinate activities of Hoxb genes could similarly function in the development of cutaneous sensory-relay circuits. Consistent with this idea, Hoxb8 has been shown to be essential for normal development and organization of dorsal spinal interneurons, and loss of Hoxb8 leads to excess grooming and reduced thermal and nociceptive response (Holstege et al., 2008).

Extrinsic and intrinsic control of sensory-motor circuit development

Studies of sensory neuron development provide compelling evidence that major determinants of subtype diversity and connectivity are instructive cues provided by peripheral limb muscle and mesenchyme. Expression of Ntf3 within the developing limb regulates expression of Etv1 and Runx3 in pSNs, and differences in the levels of Ntf3 signaling contribute to muscle-specific identities (de Nooij et al., 2013;Wang et al., 2019). The limb mesenchyme has also been implicated as a source of extrinsic cues which differentiate hindlimb extensor and flexor pSN subtypes (Poliak et al., 2016;Wenner and Frank, 1995). A confounding aspect in the study of limb-derived signaling in pSN development is that peripheral Ntf3 signaling, as well as the intrinsic determinants Etv1 and Runx3, is also required for sensory neuron survival, often precluding genetic analysis of later aspects of sensory-motor circuit development.

We found that Hoxc8 is dispensable for pSN survival, and loss of Hoxc8 does not prevent pSNs from reaching their appropriate forelimb muscle targets. Moreover, expression of Hox genes in both forelimb-innervating pSNs and MNs is maintained in the absence of limb mesenchyme and muscle. While these results indicate a limb-independent mechanism of early neuronal differentiation, target-derived cues are likely required to establish the full molecular profiles and functional characteristics of pSNs. In spinal MNs, a major function of Hoxc8 is to regulate expression of Ret and Gfra genes, rendering a subset of MNs sensitive to peripheral Gdnf, which induces Pea3 expression within motor pools (Catela et al., 2016). Thus, in MNs Hox proteins regulate expression of cell surface receptors that retrogradely influence subtype specification. Hox genes may similarly act in pSNs to modify or constrain the responses to peripheral cues as sensory axons navigate through the developing limb bud. Expression of Hoxc8 in pSNs is largely confined to distal forelimb flexor subtypes, while distal forelimb extensor muscles largely lack innervation by Hoxc8 + pSNs. Interestingly, a recent study showed that Runx3 is essential for the development of forelimb extensor pSNs, and suggested that this pattern is regulated by limb-derived cues (Wang et al., 2019).
The differential expression of Hoxc8 and Runx3 in distal flexors and extensors could reflect refinement in the pattern of these factors by limb-derived cues. For example, Hoxc8 may antagonize Runx3 function within flexor pSN subtypes. Alternatively, limb-derived cues may maintain Runx3 in extensor pSNs and restrict Hoxc8 expression to forelimb flexor pSNs.

Hox genes and synaptic specificity in sensory-motor circuits

We found that Hoxc8 is required in distal forelimb flexor pSNs to establish appropriate connections with their MN counterparts. How do Hox genes contribute to the specificity of central connections between pSNs and MNs? In spinal MNs, Hox genes and their downstream targets, including Pea3 and Sema3e, have been shown to be essential for the specificity of their central connections with pSNs (Baek et al., 2017;Pecho-Vrieseling et al., 2009;Vrieseling and Arber, 2006), in part, by regulating MN topographical organization and dendritic architecture. In mice lacking the Hox accessory factor Foxp1, the normal positioning of forelimb-innervating MNs is disrupted, leading to a sensory-motor mismatch (Sürmeli et al., 2011). The specificity of pSN-MN connections has been recently shown to correlate with the relative approach angles between pSN axons and MN dendrites (Balaskas et al., 2019), and loss of this alignment may account for the sensory-motor mismatch observed in both Foxp1 and pSN Hoxc8 mutants. Further studies will be necessary to definitively assess whether the coordinate regulation of MN and pSN connectivity by the same Hox gene contributes to sensory-motor specificity. Consistent with this model, in preliminary studies we found that after selective deletion of Hoxc8 from MNs, MN pools are disorganized and distal forelimb flexor pSNs target extensor MNs (Figure 5-figure supplement 1c,d, Figure 7-figure supplement 1d,e). Although a Hox-specific molecular matching model for pSN-to-MN connectivity is provocative, Hoxc8 could be required in pSNs, independent of Hoxc8 in MNs, for the segregation of axonal subtypes within the sensory nerve, or for their axon guidance within the spinal cord.

Centrally, pSNs establish connections with a variety of postsynaptic targets, including local spinal and projection interneurons that relay proprioceptive information to the brain (Bermingham et al., 2001;Bikoff et al., 2016;Koch et al., 2017;Tripodi et al., 2011;Yuengert et al., 2015). The same Hox genes expressed by pSNs and MNs are also expressed by multiple classes of spinal interneurons, suggesting a broader role in shaping synaptic specificity within the spinal cord. Consistent with this idea, both long ascending spinocerebellar and local-circuit spinal interneurons have been shown to require Hox function to acquire limb-specific molecular identities (Baek et al., 2019;Sweeney et al., 2018). Results from this work indicate that in addition to contributing to sensory neuron diversity, Hox genes are also required in pSNs to shape synaptic specificity in developing sensory-motor circuits. These observations are consistent with studies indicating that coordinate Hox activities are required in multiple neuronal and non-neuronal lineages during circuit assembly (Barsh et al., 2017;Briscoe and Wilkinson, 2004;Zheng et al., 2015). Our findings suggest the same Hox gene could act in multiple neuronal classes during development, implying a coherent molecular strategy for wiring the circuits essential for limb control.
Materials and methods

Chick limb ablations

Unilateral limb ablations were performed between stages 16-18 (Calderó et al., 1998) and embryos were incubated to develop to stages 26-28. Embryos were sacrificed and further processed once full limb ablation was confirmed. Spinal cords with attached DRG were dissected and immersed in 4% PFA for 1-2 hr at 4°C, followed by cryoprotection in 30% sucrose overnight. Tissue was cryosectioned at 16 μm.

Muscle extraction

Mice were sacrificed at P12 by transcardial perfusion and whole-muscle dissections were performed with the animal preparation submerged in ice-cold 1X PBS solution. Following removal, each muscle was pinned down in a Sylgard plate and immersed in 4% PFA for 2 hr at 4°C, followed by cryoprotection in 30% sucrose solution overnight. Muscles were embedded in mounting media and cryosectioned at 16 μm thickness.

Virus production

Local stocks of virus were used to amplify, purify, and concentrate rabies virus (RVΔG-mCherry-EnvA) according to established protocols (Osakada and Callaway, 2013;Wickersham et al., 2007). RVΔG-mCherry virus was produced and amplified in B7GG cells and subsequently pseudotyped with EnvA in BHK-EnvA cells to produce RVΔG-mCherry-EnvA, with minor modifications to the protocol. Approximately 7 hr after BHK-EnvA cells were infected, cells were washed three times in PBS and fresh medium was added; this was repeated the following day. After a subsequent 48 hr incubation, medium was harvested, filter purified, and viral particles were concentrated by ultracentrifugation. Concentrated virus was then resuspended in PBS and viral titer was assessed by serial dilution of the virus on HEK293t cells to achieve a viral titer of ~1 × 10^8 IU/mL (the arithmetic behind this endpoint-dilution estimate is sketched after this section).

Tracing experiments

Sensory neuron labeling

1% CTB (Sigma-Aldrich) solutions were injected with a glass capillary into a forelimb muscle of anesthetized mice at P4-P5 and examined after 3 days. Pups were perfused with PBS and 4% PFA. Spinal cords with attached DRG were isolated and post-fixed for 2-3 hr at 4°C. Tissue was cryosectioned at 16 μm following cryoprotection.

Sensory and motor neuron labeling

For anterograde labeling of sensory and motor neurons, ~0.8 μL of RVΔG-mCherry-EnvA virus was injected with a glass capillary into either the FCU or PL of anesthetized mice at P5-P6, which were then perfused with PBS and 4% PFA 5 days later. Spinal cords were isolated, processed, and cryosectioned at 16 μm or 30 μm. Muscle injection specificity was verified post-mortem by the exclusive presence of fluorescent labeling in the muscle of interest.

Double labeling

To anterogradely label sensory neurons and monosynaptically connected motor neurons, RVΔG-mCherry-EnvA virus was injected into either the FCU or PL of mice at ~P5, as described above, for analysis of ectopically labeled motor neuron soma. Concurrently, ~10-50 nl of 1% CTB solution was injected into the ECR and ED muscles. Animals were perfused with PBS and 4% PFA 5 days later. Spinal cords were isolated, processed, and cryosectioned at 16 μm.

To anterogradely label sensory neuron synapses onto motor neurons, ~30-50 nl of 1% CTB solution was injected into either the FCU or PL muscle of ~P5 mice, as described above. Concurrently, TMR-Dextran (Rh-Dex) was injected into the ECR and ED muscles to retrogradely label motor neurons. Animals were perfused with PBS and 4% PFA ~3 days later. Spinal cords were isolated, processed, and cryosectioned at 30 μm.
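As a back-of-envelope sketch of the endpoint-dilution titer estimate above (the dilution, inoculum volume, and cell count below are assumptions chosen for illustration, not the actual values used): titer (IU/mL) = infected cells / (dilution × inoculum volume in mL).

def titer_iu_per_ml(infected_cells: float, dilution: float, volume_ml: float) -> float:
    # infected_cells: mCherry+ HEK293t cells counted in one well
    # dilution: fraction of stock applied, e.g., 1e-6 for a 10^-6 dilution
    # volume_ml: inoculum volume per well
    return infected_cells / (dilution * volume_ml)

# hypothetical example: 50 mCherry+ cells from 0.5 mL of a 10^-6 dilution
print(f"{titer_iu_per_ml(50, 1e-6, 0.5):.1e} IU/mL")  # 1.0e+08, i.e., ~1 × 10^8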
Quantification and statistical analyses

Neuronal cell counts and muscle spindle analysis

Neuronal cell counts were performed on 10 or 16 μm cryosections obtained from caudal cervical DRG. Images were acquired using an LSM 700 Zeiss confocal microscope and cell counts were calculated using the Fiji/ImageJ cell counter feature. For chick limb ablation assays, neuronal cell counts were compared between DRG of the ablated and non-ablated sides of an individual chick embryo. Neuronal cell counts in which forelimb muscles were injected with 1% CTB were performed on 16 μm cryosections from caudal cervical DRG and processed/analyzed as described above. Quantifications were done based on comparable labeling efficiency between all injected animals for each muscle type, and each animal was required to have a minimum of 10 PV + CTB-labeled sensory neurons.

Sections of 16 μm thick muscle tissue were imaged to analyze muscle spindle projections. Sensory endings within muscle spindles were identified based on the presence of vGluT1 + terminals with characteristic annulospiral morphology. While each unique spindle of an entire muscle was not counted, a series of sections of the whole muscle was profiled to obtain a representative sample of muscle spindles for each muscle. For both cell counts and spindle analysis, serial sections throughout the entire tissue sample were collected into eight and five parallel series of sections, respectively, and at least one full series of sections was compared between controls and mutants or limb-ablated chick embryos. Analyses were performed on N ≥ 3 mice/genotype or per muscle type and N = 3 limb-ablated chick embryos.

Quantification of pSN collateral density (PV fiber density)

Quantitative analysis of pSN fiber density in the ventrolateral region of the spinal cord was performed on collapsed confocal Z-stacks using Fiji/ImageJ analysis software. The total PV + collateral area (calculated as the mean pixel intensity) was measured within a confined lateral region of the ventral spinal cord at the segmental level of DRG C8, set by the borders of the midline and the ventral and lateral gray matter and white matter boundaries. An ROI was set to cover the ventrolateral region to be quantified and measured 45,832 pixels. The threshold was designated based on PV labeling only in the ventrolateral area, and the same threshold value was used across all animals. For each genotype, N = 3 animals were analyzed. (A minimal sketch of this measurement appears after the Statistics section below.)

Quantitative analysis of motor neuron position

Plotting of labeled motor populations was performed on 30 μm cryosections. Tiled images were acquired with a Zeiss confocal microscope (LSM 700) at 10X. X-Y coordinates for motor neuron soma, measured in μm units, were determined with respect to the central canal using IMARIS software. Contour plots were generated from the X-Y scatter plots and six isolines were automatically assigned in MATLAB.

Quantitative analysis of ectopic motor neuron soma

RVΔG-mCherry-EnvA/CTB/ChAT-labeled motor neurons were analyzed from 16 μm serial cryosections of the cervical spinal cord. Coincident labeling of soma was quantified using the cell counter feature in ImageJ. For each genotype/forelimb flexor muscle type, at least N = 3 animals were analyzed. Only animals with comparably efficient labeling were used for analysis. Efficient labeling was designated as a minimum of 20 CTB/ChAT + MNs and 10 RabV/ChAT + MNs.
Serial sections throughout the entire tissue sample were collected into 10 parallel series of sections, and three full series of sections were compared for each animal.

Analysis of sensory synaptic contacts with motor neurons

Analysis of vGluT1 + sensory bouton contacts with P7-P9 motor neuron soma and the ~100 μm proximal dendritic arbor was performed using 0.4 μm confocal z-stacks of 30 μm thick sections using a 63X oil objective lens. Gamma-motor neurons were excluded from analysis. Distance of boutons on the dendritic arbor from the soma was assessed using the scale bar set by Zen software. Images were analyzed using Fiji/ImageJ.

Statistics

Sample sizes were determined based on previous experience, and the number of animals and definitions of N are indicated in the main text and figure legends. In figures where a single representative image is shown, results are representative of at least two independent experiments, unless otherwise noted. No power analysis was employed, but sample sizes are comparable to those typically used in the field. Data collection and analysis were not performed blind. Graphs of quantitative data are plotted as means with standard error of the mean (SEM) as error bars, using Prism 8 (GraphPad) software. Significance was determined using an unpaired (Student's) t-test and was calculated using Prism 8 software. Exact p values are indicated in the main text and figure legends.
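The PV fiber-density measurement referenced above can be sketched in Python as a stand-in for the Fiji/ImageJ workflow; the image file name, rectangular ROI coordinates, and threshold value are placeholders (the real ROI follows gray/white matter boundaries and covered 45,832 pixels), assuming scikit-image is available for reading the collapsed z-stack.

import numpy as np
from skimage import io  # assumed available; any grayscale image reader works

img = io.imread("pv_stain_c8_maxproj.tif").astype(float)  # collapsed confocal z-stack

# rectangular stand-in for the ventrolateral ROI; fixed size across animals
roi = img[600:830, 100:300]  # 230 x 200 = 46,000 px, close to the 45,832 used

THRESHOLD = 25.0  # designated from PV labeling; same value for every animal
signal = np.where(roi >= THRESHOLD, roi, 0.0)  # suppress sub-threshold pixels
print(f"mean PV pixel intensity in ROI: {signal.mean():.1f}")

The resulting per-animal means would then be compared between genotypes with the unpaired t-test described under Statistics.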
Adenovirus‐mediated expression of the C‐terminal domain of SARS‐CoV spike protein is sufficient to induce apoptosis in Vero E6 cells

The pro‐apoptotic properties of severe acute respiratory syndrome coronavirus (SARS‐CoV) structural proteins were studied in vitro. By monitoring apoptosis indicators including chromatin condensation, cellular DNA fragmentation and cell membrane asymmetry, we demonstrated that the adenovirus‐mediated over‐expression of SARS‐CoV spike (S) protein and its C‐terminal domain (S2) induce apoptosis in Vero E6 cells in a time‐ and dosage‐dependent manner, whereas the expression of its N‐terminal domain (S1) and other structural proteins, including envelope (E), membrane (M) and nucleocapsid (N) protein do not. These findings suggest a possible role of S and S2 protein in SARS‐CoV induced apoptosis and the molecular pathogenesis of SARS.

Introduction

Severe acute respiratory syndrome-coronavirus (SARS-CoV) was identified as the causative agent of SARS early in 2003 [1]. Fever, dyspnea [1], lymphopenia, neutropenia [2,3] and lower tract respiratory infection [1] were commonly found in infected individuals. Comparative genomic analysis revealed that SARS-CoV is a novel member of the viral family Coronaviridae, with an RNA genome of 29.7 kb [4][5][6]. At least five viral structural proteins (VSPs), namely the spike (S), envelope (E), membrane (M) and nucleocapsid (N) protein, together with the newly identified ORF3a [7,33], are encoded by the genome [9,10]. Among these proteins, expression of S, M and N is necessary and sufficient for pseudovirus assembly mimicking that found in SARS-CoV infected cells [11,12].

Accumulated evidence has demonstrated that the survival of viruses depends on the successful modulation of apoptosis initiated either by the hosts or the viruses themselves [13][14][15][16]. Several studies have associated apoptosis with the pathogenesis of coronaviruses [17][18][19][20][21]. Previous reports suggested that over-expression of certain coronaviral proteins could induce apoptosis in vitro [22,23]. For SARS-CoV, clinical symptoms, such as depletion of hepatocytes and T lymphocytes, i.e., lymphopenia, were suggested to be related to apoptosis [24][25][26]. It was also demonstrated that in vitro replication of SARS-CoV induces apoptosis [27][28][29][30][31]. Recently, the ORF3a and the accessory protein 7a, but not the N, M and E protein of SARS-CoV, were demonstrated to induce apoptosis in Vero E6 cells [8,32]. In contrast, it is reported that the E and N protein of SARS-CoV induce apoptosis in Jurkat T and COS-1 cells, respectively, under serum depletion conditions [34,35]. It was also reported that baculovirus-mediated expression of the N-terminal (S1) but not the C-terminal (S2) domain of the S protein of SARS-CoV triggers the cell survival-related AP-1 signaling pathway in lung cells [36]. Nevertheless, the possible role(s) of the SARS-CoV VSPs in the virus-induced apoptosis is largely unknown.

In this study, we demonstrated a possible role of SARS-CoV S protein in virus-induced apoptosis using a recombinant adenovirus (rAd)-mediated expression system. The apoptotic properties of S, S1 and S2 protein, as well as other VSPs, including E, M and N protein, were investigated in Vero E6 cells.

Generation of recombinant adenovirus

Cloning of the SARS-CoV VSPs from viral cDNA, including S, S1 and S2, as well as three other structural genes - E, M and N gene (Fig. 1A), was described elsewhere [6,37].
The cloned cDNA fragments were tagged at the carboxy-terminal with a V5 epitope. The signal peptide of pig growth hormone (SP pGH) [38] was placed upstream of the coding sequences of S (18-1255), S1 (18-683) and S2 (684-1255), so as to ensure comparable post-translational modifications for all the spike protein fragments used in the study. The transgenes were then subcloned into a modified bicistronic shuttle vector designated as pShuttle-CMV-GOI-IRES-eGFP, which is derived from the pShuttle vector of the AdEasy™ XL Adenoviral Vector System (Stratagene) and the plasmid pBMN-I-GFP (Dr. G.P. Nolan, Stanford University School of Medicine). The bicistronic expression cassette contains the gene of interest (GOI) and the enhanced green fluorescent protein (eGFP), which are driven by a CMV promoter and an internal ribosomal entry site (IRES), respectively (Fig. 1A). The recombinant adenovirus containing the VSPs (rAd-VSPs) was then generated by incorporating the expression cassette into the pAdEasy-1 vector (Stratagene) according to the manufacturer's instructions (Stratagene). A control adenovirus (rAd-Ctrl) with no transgene was also constructed. The rAds were propagated in AD-293 cells and CsCl-purified as described [39].

Immunoblotting

To assess the expression of SARS-CoV VSPs, Vero E6 cells were transduced with the corresponding rAds at a multiplicity of infection (MOI) of 100. Cells were harvested 84 hours (h) post-transduction (p.t.) and cell lysate was denatured and subjected to SDS-PAGE (S, S1 and S2 in 5% PAGE; other VSPs in 10% PAGE). To detect the expressed VSPs, Western blotting was carried out as described [37] using AP-conjugated anti-V5 antibody (Invitrogen).

Cell viability assay

Viability of cells transduced at the indicated MOI was assessed by trypan blue exclusion assay. Cells were harvested and stained with 0.025% trypan blue dye (Invitrogen) for 10 min, and the percentage of dead cells (blue) was counted using a haemocytometer.

Nuclear morphology

To detect chromatin condensation, cells transduced at the indicated MOI were collected by low-speed centrifugation and stained with Hoechst 33342 (Molecular Probes) phosphate buffered saline (PBS) solution (1:1000 v/v) at 37°C for 5 min. At least 200 cells from three random fields of view were counted under a fluorescence microscope.

DNA laddering assay

Cellular DNA fragmentation into characteristic ladders in apoptotic cells was assayed as described [40] with modifications. Briefly, cells were transduced with the indicated rAds at an MOI of 100. Both floating and adherent cells were collected at the indicated time points p.t. and were subjected to low-speed centrifugation. Cell pellets were then washed once in ice-cold PBS and were subsequently resuspended in 80 µl of the same solution. Three hundred microliters of lysis buffer [10 mM Tris-HCl (pH 7.6), 10 mM EDTA, and 0.6% SDS] were added to the cell suspension, prior to the addition of 100 µl of 5 M NaCl. Lysates were then incubated at 4°C overnight. Cell debris was pelleted by centrifugation and the supernatants were treated with 10 µl of 20 mg/ml proteinase K (Gibco-BRL) at 37°C for 1 h. Low molecular weight DNA was concentrated by ethanol precipitation overnight at −20°C after phenol:chloroform extraction and subsequently analysed by 2% agarose gel electrophoresis.

Flow cytometry analysis of early apoptosis by 7-AAD and Annexin V staining

The asymmetry of the plasma membrane of rAd-S and -S2 transduced cells at 84 h p.t.
was monitored by dual staining with Annexin V-PE and 7-aminoactinomycin D (7-AAD), which are a phosphatidylserine (PS)-binding protein and a membrane-impermeable DNA-labelling dye, respectively (Annexin V-PE apoptosis detection Kit I, BD Pharmingen BioSciences). Data were acquired by a Coulter Epics Elite Flow Cytometer and were analyzed with the WinMDI v2.81 software package (the Scripps Research Institute). Early apoptotic cells were recognized as PS-externalized (Annexin V-PE labeled) with an intact cell membrane that resists 7-AAD staining (lower-right quadrant), which allows the exclusion of necrotic cells that are indistinguishable from the late apoptotic cells (upper-right quadrant); a toy example of this quadrant gating is sketched at the end of this section. At least 1 × 10^5 cells were counted for each data point.

Statistical analysis

A paired Student's t-test was used to compare the significance between specified groups, with P < 0.05 or 0.01 defined as statistically significant.

Adenovirus-mediated expression of SARS-CoV VSPs

To determine the rAd dosage needed for maximum transduction efficiency with minimal cytopathic effects, Vero E6 cells were transduced with rAd-Ctrl at different MOIs and were examined at 84 h p.t. (Fig. 1B). At an MOI of 100, about 95% of cells were expressing eGFP, while no substantial apoptotic effect (i.e., less than 5% of non-viable and chromatin condensed cells) was observed. Therefore, an MOI of 100 was chosen as the upper dose limit of the rAd transductions in this study. The successful and comparable transductions of all rAd-VSPs were ensured, in which at least 95% of cells showed the expression of eGFP and the V5 epitope as detected by flow cytometer (data not shown), while the expression of SARS-CoV VSPs was further confirmed by Western blots (Fig. 1C). It was noted that a well-resolved double band pattern was observed for S and S1 at around 200 and 110 kDa, respectively, which mirrored previous reports that these two proteins are heavily glycosylated [41][42][43][44][45]. Among the bands of the S protein doublets, the one with lower molecular weight is at about 180 kDa, which is expected to be the glycosylated protein found in the endoplasmic reticulum, and the one with higher molecular weight, which is about 200 kDa, is expected to represent its more complexly glycosylated form that is found in Golgi bodies [46].

Transduction by rAd-VSPs induces apoptosis in Vero E6 cells

We next compared the apoptotic effects induced by rAd-VSP transductions in terms of cell morphology, cell viability, chromatin condensation and cellular DNA fragmentation at 12 h intervals p.t. At 84 h p.t., cytopathic effects with abnormal cell morphology (i.e., shrinkage and detachment) (Fig. 2A) and chromatin condensation (Fig. 2B) were observed in a substantial proportion of cells transduced by rAd-S and -S2, but in neither mock- nor other rAd-transduced cells. As shown in Fig. 2C, cells transduced by rAd-S and rAd-S2 collected at 84 h p.t. showed significantly (P < 0.01, **) stronger apoptotic effects in terms of both cell viability and chromatin condensation. Moreover, cellular DNA fragmentation into characteristic ladders was only observed in rAd-S and -S2 transduced cells (Fig. 2D), in which increments of about 200 bp in size became weakly observable at 36 h p.t. Although random shearing of DNA was also observed in parallel, the intensity of the ladder was substantially increased at 84 h p.t. These observations indicate that both S and S2 protein are able to induce apoptosis in Vero E6 cells while the other VSPs do not.
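To make the quadrant logic of the Annexin V-PE/7-AAD assay concrete (see the flow cytometry subsection above), the following sketch gates synthetic events in Python; the gate positions and the lognormal toy data are placeholders rather than instrument values, and the actual analysis was performed in WinMDI.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # at least 1 x 10^5 events were counted per data point
annexin_pe = rng.lognormal(mean=1.0, sigma=1.0, size=n)  # PE channel (a.u.)
aad = rng.lognormal(mean=1.0, sigma=1.0, size=n)         # 7-AAD channel (a.u.)

ANNEXIN_GATE, AAD_GATE = 10.0, 10.0  # placeholder quadrant boundaries

early_apoptotic = (annexin_pe > ANNEXIN_GATE) & (aad <= AAD_GATE)  # lower right
late_or_necrotic = (annexin_pe > ANNEXIN_GATE) & (aad > AAD_GATE)  # upper right
live = (annexin_pe <= ANNEXIN_GATE) & (aad <= AAD_GATE)            # lower left

print(f"early apoptotic (PS+, membrane intact): {early_apoptotic.mean():.1%}")
print(f"late apoptotic/necrotic: {late_or_necrotic.mean():.1%}")
print(f"live: {live.mean():.1%}")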
Transduction of rAd-S2 showed a stronger apoptotic effect than that of rAd-S
To further confirm the observed apoptotic effect of the S proteins, Vero E6 cells were transduced with rAd-Ctrl, -S, -S1 and -S2 at different MOIs, and the percentage of apoptotic cells at 84 h p.t. was evaluated by chromatin condensation and PS-externalization using fluorescence microscopy and flow cytometry, respectively. As shown in Fig. 3A, the percentage of chromatin-condensed cells induced by either rAd-S or -S2 transduction at all indicated MOIs was significantly higher than that of the others (P < 0.05, * or P < 0.01, **) in a dosage-dependent manner. Moreover, at MOIs of 50 and 100, the percentage of chromatin-condensed cells induced by rAd-S2 transduction was significantly higher than that induced by rAd-S transduction (P < 0.01, #). A similar phenomenon was observed when the cell membrane asymmetry of the cells was examined (Fig. 3B), in which the percentage of early apoptotic cells in rAd-S and -S2 transductions was at least 2 times higher than that of the controls at MOIs of 50 and 100. In summary, the above data strongly suggest that rAd-mediated overexpression of the S and S2 proteins induces apoptosis in Vero E6 cells, with rAd-S2 inducing substantially stronger apoptosis than rAd-S under the conditions tested.
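A paired Student's t-test of the kind used for the group comparisons above can be reproduced with standard tools; a minimal sketch, with hypothetical replicate percentages standing in for the measured values:

```python
from scipy import stats

# Sketch of the paired Student's t-test used for the group comparisons above
# (e.g., % chromatin-condensed cells in rAd-S2 vs. rAd-Ctrl transductions
# across matched experiments). The replicate values are hypothetical.

rad_ctrl = [4.1, 5.3, 3.8]       # % apoptotic cells, rAd-Ctrl (hypothetical)
rad_s2   = [38.2, 41.5, 35.9]    # % apoptotic cells, rAd-S2 (hypothetical)

t_stat, p_value = stats.ttest_rel(rad_s2, rad_ctrl)
for alpha, label in [(0.01, "**"), (0.05, "*")]:
    if p_value < alpha:
        print(f"significant at P < {alpha} ({label}); t = {t_stat:.2f}, P = {p_value:.4f}")
        break
else:
    print(f"not significant; t = {t_stat:.2f}, P = {p_value:.3f}")
```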
Discussion
Infection of SARS-CoV in Vero E6 cells induces extensive apoptosis through a caspase-3- and p38 MAPK-dependent pathway [27,28,30,31]. Using a rAd-mediated expression system, we assessed the apoptotic effect of the major structural proteins of SARS-CoV, including the S, S1, S2, E, M and N proteins. Typical features of apoptosis such as cell rounding, shrinkage, nuclear condensation, DNA fragmentation and PS-externalization were observed only in cells transduced with rAd carrying S or S2, but not S1, nor the other structural proteins studied. These data suggest that over-expression of SARS-CoV S and S2 can induce apoptosis. The present findings appear to be unique within the coronavirus family. In MHV [22] and IBV [23], overexpression of the S protein did not induce observable apoptosis in vitro. On the other hand, the in vitro apoptotic effects observed for the VSPs of other coronaviruses, such as the E protein of MHV [22] and the N protein of TGEV [20], were not observed when we overexpressed the SARS-CoV homologues in Vero E6 cells. Interestingly, the E and N proteins of SARS-CoV have been reported to induce apoptosis in Jurkat T and COS-1 cells, respectively, under serum depletion conditions [34,35], which was not observed under the conditions tested in the current and a previous study [8]. Recently, over-expression of two newly identified viral proteins of SARS-CoV, ORF3a and 7a, was shown to induce apoptosis in Vero E6 cells as well, associated with caspase-8 and caspase-3 activity, respectively [32,33]. Since the expression level of these viral proteins in SARS-CoV-infected cells has not been clearly demonstrated, their pro-apoptotic properties may not be the only contributing factor in SARS-CoV-induced apoptosis. In contrast, the S protein is, apart from the N protein, one of the major viral proteins in SARS-CoV-infected cells [47]. In SARS-CoV-infected Vero E6 cells, cleavage of the S protein into fragments, including a form that resembles the S2 protein in this study, was suggested in previous studies [41,43], and inhibition of such protein processing completely abrogated the virus-induced cytopathic effects in vitro, suggesting potential roles of S and S2 in SARS-CoV-induced apoptosis. Although activation of the mitochondrial apoptotic pathway, the caspase cascade, and the p38 MAPK-dependent pathway has been reported in several in vitro models of SARS-CoV-induced apoptosis [27,28,30,31], the viral component(s) responsible for these observations remain unclear. Ren and co-workers demonstrated that the addition of inactivated SARS-CoV viral particles to Vero E6 cells is unable to induce apoptosis, implying that expression of viral genes is indispensable for virus-induced apoptosis in vitro. In light of these findings, the pro-apoptotic properties of S and S2 reported here, together with comparative studies between the apoptotic pathways initiated by expression of individual viral genes and by viral infection, should provide important clues for dissecting the molecular components responsible for SARS-CoV-induced apoptosis. The demonstrated roles of the SARS-CoV S protein in viral entry and in eliciting neutralizing immune responses make it an attractive target for antiviral therapies [10]. In this regard, investigations of the molecular basis of S protein-induced apoptosis, which are ongoing in our laboratory, together with the findings of this study, are expected to provide important insights for the rational design of antiviral therapies and for the understanding of the molecular pathogenesis of SARS-CoV infection.
Comparison of florfenicol depletion in dairy goat milk using ultra-performance liquid chromatography with tandem mass spectrometry and a commercial on-farm test

Florfenicol is a broad-spectrum antibiotic commonly prescribed in an extra-label manner for treating meat and dairy goats. Scientific data in support of a milk withdrawal interval recommendation are limited to plasma pharmacokinetic data and minimal milk residue data restricted to cattle. Therefore, a rapid residue detection test (RRDT) could be a useful resource to determine whether milk samples are free of drug residues and acceptable for sale. This study compared a commercially available RRDT (Charm® FLT strips) for detecting florfenicol residues in fresh milk samples from healthy adult dairy breed goats treated with florfenicol (40 mg/kg subcutaneously twice, 4 days apart) with quantitative analysis of florfenicol concentrations using ultra-performance liquid chromatography with tandem mass spectrometry (UPLC-MS/MS). In addition, storage claims for testing bovine milk using the RRDT were assessed using stored goat milk samples. Milk samples were collected every 12 h for a minimum of 26 days. Commercial RRDT strips remained positive in individual goats for 528 to 792 h (22-33 days) after the second dose, whereas UPLC-MS/MS indicated that the last detectable florfenicol concentration in milk samples ranged from 504 to 720 h (21-30 days) after the second dose. Results from stored milk samples from treated goats indicate that samples can be stored for up to 5 days in the refrigerator and 60 days in the freezer after milking prior to being tested, with a low risk of false-negative test results due to drug degradation. Elevated somatic cell counts and bacterial colony counts were noted in some of the milk samples in this study, but further study is required to understand the impact of these quality factors on RRDT results.
KEYWORDS: florfenicol, goat, extra-label drug use, drug residue, milk residue

Introduction
Florfenicol is a broad-spectrum antibiotic approved by the Food and Drug Administration (FDA) for use in cattle, swine, and fish, and by the European Medicines Agency (EMEA) for use in cattle, sheep, swine, and fish. The EMEA has extrapolated maximum residue limits (MRLs) to all food-producing species, including goats, due to the limited number of medicinal products approved for use in minor animal species (1). Despite the FDA and EMEA approvals of florfenicol for use in ruminants, neither agency has approved florfenicol for use in lactating animals, which results in extra-label use of florfenicol in lactating cattle and small ruminants, even though there is no tolerance (TOL) or MRL established for milk and pharmacokinetic data in milk are limited to a few small studies in cattle (2-4). In the United States, the Animal Medicinal Drug Use Clarification Act (AMDUCA) permits veterinarians with a valid veterinarian-client-patient relationship to prescribe FDA-approved medications in an extra-label manner (5). One condition of AMDUCA requires the veterinarian to determine a 'substantially extended' withdrawal interval (WDI) based on scientific evidence for extra-label drug use (ELDU) in food-producing species to ensure food products are free of drug residues. The Food Animal Residue Avoidance Databank (FARAD) is a federally funded program that helps veterinarians by recommending scientifically based WDIs following extra-label drug use. According to FARAD internal WDI request data, florfenicol was the most-requested antimicrobial drug for goat meat and milk WDIs between 2015 and 2020. The majority of requests were for WDIs following subcutaneous administration, with approximately half of the total submissions requesting milk WDIs. Determining a substantially extended milk WDI is challenging because there is only one published study in lactating dairy cattle following subcutaneous administration of florfenicol, which reported a 60 h milk half-life and concentrations above the limit of detection up to 588 h after a single 40 mg/kg dose (2). Given the difficulty of determining a withdrawal interval due to the paucity of florfenicol milk residue data and the consequence of lost product and revenue in the event of antibiotic detection in the bulk tank, rapid residue detection tests provide a useful resource for producers to quickly determine if milk samples are free of drug residues and acceptable for sale.
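To make the withdrawal-interval difficulty concrete: under a simple first-order elimination model, the 60 h milk half-life reported for cattle implies a depletion time that grows with the logarithm of the starting concentration. A minimal sketch, assuming single-compartment exponential decay and a hypothetical peak concentration (illustrative only, not a withdrawal-interval recommendation):

```python
import math

# Sketch: time for a milk residue to fall below a detection limit under
# first-order (exponential) elimination, using the 60 h milk half-life
# reported for cattle above. The starting concentration is hypothetical.

def hours_to_reach(c0_ppb: float, c_limit_ppb: float, half_life_h: float) -> float:
    """Time for concentration to decay from c0 to c_limit: t = t1/2 * log2(c0/c_limit)."""
    return half_life_h * math.log2(c0_ppb / c_limit_ppb)

if __name__ == "__main__":
    c0 = 2000.0                 # hypothetical peak milk concentration, ppb
    for limit in (3.0, 1.0):    # UPLC-MS/MS LOD and RRDT sensitivity, ppb
        t = hours_to_reach(c0, limit, half_life_h=60.0)
        print(f"below {limit} ppb after ~{t:.0f} h (~{t/24:.1f} days)")
```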
Rapid residue detection tests (RRDT) for detecting florfenicol in raw commingled cow milk are available. The RRDT that detects florfenicol and thiamphenicol is a rapid one-step immunoreceptor assay that uses lateral flow technology: florfenicol or thiamphenicol interacts with colored beads in the lateral flow test strip, leading to the presence of colored lines in the test and control zones if residue is not detected, as well as changes in color intensity as the florfenicol or thiamphenicol concentration approaches the sensitivity limit (6). According to the RRDT manufacturer's instructions, the test detects florfenicol or thiamphenicol down to 1 ppb in cow milk stored at 0-7°C and has a specificity of 95%. However, it has been reported that these rapid residue detection tests may be cross-reactive to similar medications or to components of the milk (e.g., somatic cells, bacteria, fat content) (7-9), resulting in false positives. The RRDT manufacturer's instructions indicate that there are no interferences in detection from somatic cells at ≤10⁶ SCC/ml or bacteria at ≤3 × 10⁵ CFU/ml, but high-fat samples (>6.5%) may cause invalid results. Additionally, other amphenicols are the only known medication interferences that are cross-reactive, at 100 ppb. Previous studies evaluating rapid residue detection tests for goat milk have reported that milk secretory mechanisms and milk composition vary between cows and goats, which may affect RRDT results when used with goat milk (8-10).

The primary objective of this study was to compare a commercially available RRDT for florfenicol residues in fresh goat milk samples (dosing regimen of 40 mg/kg subcutaneously twice, 4 days apart) with quantification of drug residues using ultra-performance liquid chromatography with tandem mass spectrometry. Secondary objectives were to assess the impact of sample storage prior to testing and potential factors that could result in false positives for the RRDT.

Animal enrollment
The University of California Institutional Animal Care and Use Committee (IACUC) approved all experimental procedures conducted with animals for this study (IACUC Protocol Number 21671). The study was conducted at the University of California, Davis goat facility, which uses farm management practices common to other dairy goat farms in California. Animals enrolled in the study were selected by convenience, based on their lactation and kidding dates. Throughout the sampling period, study does were housed at the University of California, Davis Goat Teaching & Research Facility in penned areas with other does. Goats were fed alfalfa hay twice a day and 3-3.5 lbs of a 14% dairy ration, and were provided water ad libitum.

[Figure: Overview of the milk sampling protocol following treatment of lactating does with florfenicol 40 mg/kg subcutaneously twice, 4 days apart.]

Five lactating does (mean body weight 94.4 kg) were enrolled following a physical examination that included assessment of temperature, pulse, respiration rate, rumen contractions, body condition score, Faffa Malan Chart (FAMACHA) test, and udder palpation, conducted by a single veterinarian (JDR); animals had to demonstrate no apparent clinical disease to be enrolled in the study. All does were administered two subcutaneous 40 mg/kg doses of florfenicol (Nuflor® 300 mg/ml, Merck Animal Health, Madison, NJ, USA) 4 days apart. Does were weighed prior to the initial florfenicol administration, and the total dose was administered at two injection sites so that no more than 10 ml was administered at each site, as recommended by the label; injections were administered subcutaneously using an 18 g × 1 inch needle on opposite sides of the body in the region of the abdomen.
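The dose-volume arithmetic behind this protocol (40 mg/kg of a 300 mg/ml formulation, no more than 10 ml per injection site) can be sketched as follows; the body weight in the example is the mean reported above:

```python
import math

# Sketch of the dose-volume arithmetic in the protocol above: a 40 mg/kg dose
# of a 300 mg/ml florfenicol formulation, split so that no more than 10 ml is
# given per injection site.

DOSE_MG_PER_KG = 40.0
CONC_MG_PER_ML = 300.0
MAX_ML_PER_SITE = 10.0

def dose_plan(body_weight_kg: float):
    total_ml = body_weight_kg * DOSE_MG_PER_KG / CONC_MG_PER_ML
    n_sites = max(1, math.ceil(total_ml / MAX_ML_PER_SITE))
    return total_ml, n_sites, total_ml / n_sites

if __name__ == "__main__":
    total, sites, per_site = dose_plan(94.4)  # mean body weight reported above
    print(f"total {total:.1f} ml split over {sites} sites ({per_site:.1f} ml/site)")
```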
Milk collection
Goats were milked by barn staff twice daily, at ~12 h intervals. A 0.5% iodine teat-dipping solution was used for pre- and post-dipping of teats; after at least a 40 s contact time, teats were dried with paper towels. Prior to milk collection, each teat was stripped twice and the fore-milk was examined for abnormal milk. Milk was collected in a clean glass jar and transferred to a clean metal bucket until each milking was complete. Milk was weighed using a Dairy Herd Information Association (DHIA)-certified hanging scale and mixed at least 3 times by pouring the milk between two buckets. Samples were immediately transferred to 3 or 5 ml cryovials, which were kept at ambient temperature until RRDT testing was completed (maximum 30 min). At specified time points, additional 15-30 ml aliquots of milk were collected in 15 or 30 ml Eppendorf tubes for additional sampling or tests. Further details are described in the 'Milk quality and component sampling' and 'Antibiotic residue screening of stored milk samples' sections. For rapid quantification of milk total solids in the individual goat milk samples, a Brix test was completed on each morning milk sample and 12 h after each florfenicol dose. This was done by inverting the Eppendorf tube multiple times and adding one to three drops of milk to a digital refractometer (Palm Abbe™ model PA202X; MISCO; Solon, OH) using a 1 ml plastic pipet. The refractometer was calibrated daily using standard protocols.

Milk quality and component sampling
At four time points (0, 168, 468 h and at the final milking), milk was shipped overnight to two accredited milk testing laboratories and tested for components and quality, including fat, protein, lactose, solids non-fat (SNF) percent, somatic cell count (SCC, cells/ml) and milk urea nitrogen (MUN, mg/dl; Central Counties DHIA, Atwater, CA), and coliform count (CFU/ml) and standard plate count (SPC, CFU/ml; Sierra Dairy Labs, Tulare, CA). Milk was transferred to 30 ml tubes and shipped overnight to each laboratory with ice packs. For coliform count testing, milk was placed in tubes without preservative, while the samples tested for components and quality were placed in tubes with bronopol 18% preservative (Bronolab W-II Liquid, Advanced Instruments, Norwood, MA). Some of the samples were collected and stored in the refrigerator for up to 72 h prior to shipping due to pre-designated collection times.

Antibiotic residue screening of fresh milk samples
Fresh milk samples were tested for residues using a commercial RRDT (Charm® FLT; Charm Sciences Inc., Lawrence, MA). Strips were stored, handled and used according to the manufacturer's instructions, and individual samples were tested in duplicate. Once all samples were placed on the incubator, the lid was closed and the timer started. If the test strip results were ambiguous, images of the strip were sent to an additional sample collector for independent evaluation. If a testing strip indicated an invalid result, the milk sample was tested a second time. Milk samples were aliquoted and stored at −20°C until they could be transferred to a −70°C freezer (within 48 h), where they were maintained prior to ultra-performance liquid chromatography with tandem mass spectrometry (UPLC-MS/MS) analysis. This procedure was completed on morning milk samples starting on day 0, as well as on the first evening milk sample after each dose, and then continued daily on the morning milk samples until the strips, run in duplicate for a single milk sample, were interpreted as negative. If the milk sample run in duplicate was negative, then the stored milk sample from the prior evening milking (which had been stored overnight in a refrigerator after mixing) was tested. Milk samples were tested in duplicate until samples from three consecutive milking events were negative. Results for RRDT screening of fresh milk samples are reported as hours or days post-second dose (PSD).
Antibiotic residue screening of stored milk samples
According to the manufacturer of the RRDT, bovine milk samples can be stored prior to testing in the refrigerator or freezer (<−15°C) for 5 days or 2 months, respectively. To evaluate the potential for storing goat milk prior to testing, ~15 ml of milk was collected at two time points (432 and 600 h; 18 and 25 days, respectively, post-first dose (PFD)). These time points were chosen based on cattle data (4) indicating that florfenicol was detected in milk ~26 days after subcutaneous administration. For testing refrigerated samples, the milk was stored in a standard consumer refrigerator (~0-5°C) and then tested in duplicate 1, 3 and 5 days post-collection, with approximately 1 ml aliquots removed concurrently and frozen at −70°C for later UPLC-MS/MS analysis. For the samples stored frozen, approximately 1.5 ml aliquots were collected and stored in a −20°C freezer for 60 days. On day 60, samples were thawed in cool water for 1 h, shaken, and tested in duplicate using the RRDT strips, with the remaining milk sample being re-frozen in a −20°C freezer and transferred as soon as possible to a −70°C freezer until UPLC-MS/MS analysis could be completed.

Sample analysis/quantification of florfenicol concentrations
Florfenicol and florfenicol amine concentrations in goat milk samples were quantified using UPLC-MS/MS. Our study used the UPLC-MS/MS method for measuring florfenicol and florfenicol amine concentrations in milk and milk products from multiple species, including goat milk, developed by Power et al. (11). The present method was modified to simplify the extraction and reduce the sample volume and solvent usage while maintaining sensitivity. Power et al. (11) showed sample […]; the parameters for the detection of florfenicol are shown in Supplementary Table 3. An eight-point calibration curve made up in blank goat milk was prepared in an identical manner to the samples, using a concentration range of 8-4,000 ppb milk for both florfenicol and florfenicol amine. Using these standards, a linear calibration curve was constructed for both analytes to determine the analyte concentration in samples based on the sample:IS ratio. The limit of detection (LOD) and limit of quantitation (LOQ) were established according to the method described by Shah et al. in 1992 (12). The UPLC-MS/MS method was validated according to the FDA Bioanalytical Method Validation Guidance for Industry (13), with the exception of the selection of the highest quality control concentration (the highest quality control was based on the concentration range for milk samples), and a lower limit of quantitation was not established. Validation included spiking control milk at four concentrations (8, 24, 240 and 2,400 ppb). Five replicates of each concentration were analyzed each day for 3 days. The results from these analyses were used to establish precision, accuracy and recovery.
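The linear calibration described above amounts to a least-squares fit of the sample:IS ratio against spiked concentration, followed by back-calculation of unknowns. In the sketch below, only the eight-point design and the 8-4,000 ppb range follow the text; the intermediate levels and ratio values are hypothetical:

```python
import numpy as np

# Sketch of the linear calibration: fit analyte/internal-standard (sample:IS)
# ratios against spiked concentrations in blank goat milk, then back-calculate
# unknown concentrations from their measured ratios.

conc_ppb = np.array([8, 24, 80, 240, 800, 1600, 2400, 4000], dtype=float)
ratio = np.array([0.004, 0.012, 0.041, 0.119, 0.402, 0.81, 1.19, 2.02])  # hypothetical

slope, intercept = np.polyfit(conc_ppb, ratio, deg=1)  # [slope, intercept]

def back_calculate(sample_is_ratio: float) -> float:
    """Concentration (ppb) of an unknown from its sample:IS ratio."""
    return (sample_is_ratio - intercept) / slope

r = np.corrcoef(conc_ppb, ratio)[0, 1]
print(f"slope={slope:.3e}, intercept={intercept:.3e}, r^2={r**2:.4f}")
print(f"unknown with ratio 0.25 -> {back_calculate(0.25):.0f} ppb")
```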
Statistical analysis
Descriptive analysis was conducted using a commercial spreadsheet program (Microsoft Office Excel, Microsoft Corp., Redmond, WA) and commercial statistical software (JMP Pro 16.0, SAS Institute Inc., Cary, NC). Analysis of variance for florfenicol (ppb) and florfenicol amine (ppb) concentrations in frozen samples over time was conducted in the statistical software. Normality was evaluated using the Shapiro-Wilk test, and because normality was not met, a non-parametric approach was used. The non-parametric Kruskal-Wallis test in the statistical software was used to evaluate whether there was a significant difference in the florfenicol (ppb) and florfenicol amine (ppb) distributions by time in days. A P value < 0.05 for this analysis indicated that a significant difference in florfenicol concentrations was observed between at least one pair of days.
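A Kruskal-Wallis comparison of the kind described above can be run with standard tools; a minimal sketch with hypothetical concentration values:

```python
from scipy import stats

# Sketch of the Kruskal-Wallis test described above: comparing florfenicol
# concentrations (ppb) across storage days. The values below are hypothetical,
# not data from the study.

day_0  = [812, 790, 845, 801, 828]   # ppb, hypothetical
day_30 = [798, 805, 779, 841, 810]
day_60 = [760, 772, 795, 788, 749]

h_stat, p_value = stats.kruskal(day_0, day_30, day_60)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
if p_value < 0.05:
    print("significant difference between at least one pair of days")
else:
    print("no significant difference detected across storage days")
```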
Descriptive data of enrolled animals and milk characteristics
Supplementary Table 4 summarizes the physical characteristic data for the five enrolled does. This information was recorded during the physical examination at enrollment. Does varied from 2 to 5 years of age, and body weights ranged from 77.5 to 113 kg. Body condition scores (BCS) ranged from 3 to 3.75 out of 5, and Faffa Malan Chart (FAMACHA®) scores ranged from 1 to 2. Milk components and quality results for each time point sampled are summarized in Table 3. The daily milk production of each doe enrolled in the study is shown in Figure 2.

Florfenicol and florfenicol amine in milk after treatment
Rapid residue detection testing of fresh milk samples indicated the presence of florfenicol in the milk samples of all does 12 h after each dose. The time to the third consecutive negative RRDT strip ranged from 528 to 792 h (22 to 33 days) PSD. UPLC-MS/MS testing of fresh frozen milk samples showed that florfenicol concentrations became non-detectable earlier than the RRDT strips indicated milk samples were negative, which can be seen in Figure 3. Stored milk samples collected at 600 h (25 days) PFD had multiple instances where the RRDT results of the stored milk samples did not match the RRDT results completed on the fresh milk samples (Table 5). The results for the quantification of florfenicol/florfenicol amine in milk samples stored frozen for 60 days after collection at two time points (432 and 600 h; 18 and 25 days, respectively, PFD) are shown in Table 6.

Discussion
Florfenicol is commonly prescribed in an extra-label manner when treating lactating dairy does, despite minimal milk residue data and goat milk withdrawal interval recommendations extrapolated from cattle. Results from this evaluation indicate that a commercial RRDT validated for commingled cattle milk is suitable for detecting florfenicol residues in fresh milk samples from individual goats, despite the differences in milk composition between goats and cattle. Does treated with florfenicol at a dose of 40 mg/kg subcutaneously twice, 4 days apart, had milk samples that remained positive on RRDT strips longer than florfenicol was detectable by UPLC-MS/MS. The time to the third set of negative RRDT strips ranged from 528 to 792 h (22-33 days) PSD, whereas the time points when UPLC-MS/MS samples crossed below the LOD (3 ppb) ranged from 504 to 720 h (21-30 days) PSD. In addition, our results support that the RRDT manufacturer's instructions for milk samples from cattle can be applied to goat milk samples, which can be stored up to 5 days in the refrigerator and 60 days in the freezer prior to testing. Lastly, our study was not able to statistically evaluate factors that could result in false-positive RRDT samples; however, a trend of minimal drug degradation was observed.

The difference between the RRDT and UPLC-MS/MS detection times can be attributed to the difference in sensitivity between the two methods. The RRDT strips have a 1 ppb validated detection limit in bovine milk, while the UPLC-MS/MS LOD was 3 ppb for both florfenicol and florfenicol amine. In the only published study of subcutaneous florfenicol administration in lactating cattle, florfenicol remained above the LOQ of 5 ppb (LOD not stated) for 432-588 h (18-24.5 days) after a single 40 mg/kg dose, with an associated 60 h (2.5 days) terminal elimination half-life in milk (2). The present study administered a two-dose regimen rather than the single dose administered in the cattle study, which may account for the longer detection time. This two-dose regimen was selected based on common clinical practice, where a second dose is needed for treatment efficacy, as well as on common dosing regimens submitted to FARAD. Two does in our study whose milk samples remained positive on RRDT strips longer also had lower milk production during their lactation and lower milk fat compared with their study counterparts (Figure 2). The authors hypothesize that high-producing animals may have increased elimination of florfenicol compared with lower-producing animals. Since this study used healthy does, the results may not reflect overall milk production or excretion of florfenicol in unhealthy does. This is an important consideration given the known differences in milk excretion of some drugs in mastitic cattle (14), which are attributed to decreased milk production and metabolic changes.

Rapid residue detection testing of stored milk samples mostly reflected the results obtained from testing of the fresh milk samples. However, the authors noted exceptions that occurred with some of the 600 h PFD milk samples. Refrigerated 600 h PFD milk samples from multiple does either had one strip interpreted as negative and one strip interpreted as positive, or visual observation indicated a subjectively weak positive/borderline negative result. Similarly, milk samples from one doe at 600 h PFD stored frozen for 2 months were noted to be subjectively weak positive/borderline negative on visual observation. Although the sample size of stored milk samples was small […].

The use of this commercially available RRDT to detect florfenicol in individual goat milk samples has both advantages and disadvantages as a resource for determining whether milk is acceptable for sale. Since the goal of a RRDT is to determine whether milk is free of drug residues prior to consumption or sale, the main advantage of this RRDT is the simple procedure that provides results within 8 min. Besides the specialized incubator and RRDT strips, the remaining commercial equipment (pipets and a strip-reading machine) is optional, which allows for easy setup and use. Another advantage is the ability to test individual goat milk samples, which could be helpful for testing milk from animals that might be outliers due to illness or low milk production. Despite these advantages, the price of both the required incubator and the RRDT strips requires a monetary commitment. Since the manufacturer's instructions clearly explain how to interpret the RRDT, the optional RRDT-reading machine was not purchased for this study. However, without this RRDT-reading machine, interpretation of test strip results can be subjective when milk concentrations approach the analytical sensitivity. For our study, test strips were evaluated by two individuals due to the subjective nature of interpretation.
Another limitation was the number of RRDT strips that proved unusable (i.e., the packaging was compromised, the testing well was exposed on removal from the canister, the adhesive layer tore inappropriately, rendering the flap unable to close, or invalid results were obtained after incubation). This, combined with the high expense of the equipment, makes RRDT use practical in settings where frequent testing is required rather than in operations that would need only infrequent testing. Although the RRDT can provide quick results, negative RRDT results should not be a determining factor for estimating a WDI following extra-label drug use. In addition, given the lack of currently available RRDTs validated for use with goat milk, RRDTs validated for use with cattle milk are used in an extra-label manner, so scientific validation of each RRDT according to the National Conference on Interstate Milk Shipments guidelines of 90% specificity and 90% sensitivity with 95% confidence intervals would be ideal (8, 15).

Given the small number of milk samples in this study with milk quality and components outside the normal range for goat milk, the potential negative effect of milk components or quality on the accuracy of RRDT results could not be assessed statistically; however, the milk components and quality parameters for the vast majority of milk samples were within the manufacturer's recommended limits. The manufacturer's instructions for the RRDT used in this study indicate that certain milk components or the presence of other amphenicols may cause invalid results and potentially lead to false positives. Specifically, the manufacturer's instructions state that no interferences in detection will result from somatic cells at ≤10⁶ SCC/ml, bacteria at ≤3 × 10⁵ CFU/ml, milk fat below 6.5%, or other amphenicols present at concentrations <100 ppb. Some of the goat milk samples collected in this study were noted to have elevated somatic cell counts and bacterial colony counts, but the impact is unclear due to the limited number of affected samples. With the exception of a single milk sample, all milk samples had fat percentages below 6%. Future studies should aim to elucidate whether these milk components and quality parameters affect RRDT results by using goats with and without milk components and quality parameters in the normal range.

Conclusion
Based on comparison with UPLC-MS/MS, this study supports that the RRDT evaluated here can be used for the detection of florfenicol residues in milk samples from individual goats treated in an extra-label manner with florfenicol. RRDT results indicated that milk samples can remain positive longer than florfenicol was detected using UPLC-MS/MS for nearly all goats studied. These results were likely due to sensitivity differences between the two methods. Furthermore, we also observed minimal degradation of florfenicol after storage in the refrigerator, indicating a potential use of this approach for delayed testing of goat milk for drug residues after storage. Future studies should be completed in a larger and more representative population of goats, including animals that are ill and goats that have milk components and quality parameters outside the normal range.

Ethics statement
The animal study was reviewed and approved by the University of California Institutional Animal Care and Use Committee (IACUC; Protocol Number 21671).
Mooring data from the Crest mooring on Georges Bank from 1994-1995 as part of the U.S. GLOBEC Georges Bank project (GB project)

Acquisition Description
Two deployments were made at this site. The first deployment was between Oct 28, 1994 and Jan 21, 1995. The second deployment was between Apr 2, 1995 and Sep 30, 1995.

Processing Description
During the second deployment the transmissometer record appears to decline at a steady rate during mid-record. This may be due to biofouling, but the trend does not appear in the fluorometer record.

Instruments
The Sea Tech Fluorometer was manufactured by Sea Tech, Inc. (This instrument designation is used when the specific make and model are not known.) The chlorophyll-a fluorometer has internally selectable settings to adjust for different ranges of chlorophyll concentration and is designed to measure chlorophyll-a fluorescence in situ. The instrument is stable with time and temperature and uses specially selected optical filters, enabling accurate measurements of chlorophyll a. It can be deployed in moored or profiling mode.

The temperature sensor is a slow-response, frequency-output sensor manufactured by Sea-Bird Electronics, Inc. (Bellevue, Washington, USA). It has an initial accuracy of +/-0.001 degrees Celsius, a stability of +/-0.002 degrees Celsius per year, and measures seawater temperature in the range of -5.0 to +35 degrees Celsius.

The Sea-Bird SBE-4 conductivity sensor is a modular, self-contained instrument that measures conductivity from 0 to 7 Siemens/meter. The sensors (Version 2; S/N 2000 and higher) have electrically isolated power circuits and optically coupled outputs to eliminate any possibility of noise and corrosion caused by ground loops. The sensing element is a cylindrical, flow-through, borosilicate glass cell with three internal platinum electrodes. Because the outer electrodes are connected together, electric fields are confined inside the cell, making the measured resistance (and instrument calibration) independent of calibration bath size or proximity to protective cages or other objects.

The transmissometer can be deployed in either moored or profiling mode to estimate the concentration of suspended or particulate matter in seawater. It measures the beam attenuation coefficient in the red spectral band (660 nm) of the laser light source over the instrument's path length (e.g., 20 or 25 cm). The Sea Tech Transmissometer was manufactured by Sea Tech, Inc. (Corvallis, OR, USA).
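For readers working with the transmissometer record, the beam attenuation coefficient described above is conventionally derived from the fractional beam transmission Tr over the instrument path length L as c = −ln(Tr)/L. A minimal sketch; the transmission values are illustrative:

```python
import math

# Sketch: converting a transmissometer reading to the beam attenuation
# coefficient, c = -ln(Tr)/L, where Tr is the fractional beam transmission
# over path length L in meters. The 25 cm path matches one of the path
# lengths mentioned above; the transmission values are illustrative.

def beam_attenuation(transmission_fraction: float, path_length_m: float = 0.25) -> float:
    """Beam attenuation coefficient c (1/m) at 660 nm from fractional transmission."""
    return -math.log(transmission_fraction) / path_length_m

if __name__ == "__main__":
    for tr in (0.90, 0.75, 0.50):
        print(f"Tr = {tr:.2f} -> c = {beam_attenuation(tr):.2f} 1/m")
```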
Extracellular electron transfer-dependent anaerobic oxidation of ammonium by anammox bacteria

Anaerobic ammonium oxidation (anammox) bacteria contribute significantly to the global nitrogen cycle and play a major role in sustainable wastewater treatment. Anammox bacteria convert ammonium (NH4+) to dinitrogen gas (N2) using intracellular electron acceptors such as nitrite (NO2−) or nitric oxide (NO). However, it is still unknown whether anammox bacteria have extracellular electron transfer (EET) capability, with transfer of electrons to insoluble extracellular electron acceptors. Here we show that freshwater and marine anammox bacteria couple the oxidation of NH4+ with transfer of electrons to insoluble extracellular electron acceptors such as graphene oxide or electrodes in microbial electrolysis cells. 15N-labeling experiments revealed that NH4+ was oxidized to N2 via hydroxylamine (NH2OH) as intermediate, and comparative transcriptomics analysis revealed an alternative pathway for NH4+ oxidation with the electrode as electron acceptor. Complete NH4+ oxidation to N2 without accumulation of NO2− and NO3− was achieved in EET-dependent anammox. These findings are promising in the context of implementing an EET-dependent anammox process for energy-efficient treatment of nitrogen.

Bacteria capable of anaerobic ammonium oxidation (anammox) produce half of the nitrogen gas in the atmosphere, but much of their physiology is still unknown. Here the authors show that anammox bacteria are capable of a novel mechanism of ammonium oxidation using extracellular electron transfer.

Anaerobic ammonium oxidation (anammox) by anammox bacteria contributes up to 50% of the N2 emitted into Earth's atmosphere from the oceans [1,2]. Anammox bacteria have also been extensively investigated for energy-efficient removal of NH4+ from wastewater [3]. Initially, anammox bacteria were assumed to be restricted to NH4+ as electron donor and NO2− or NO as electron acceptor [4,5]. More than a decade ago, preliminary experiments suggested that Kuenenia stuttgartiensis and Scalindua could couple the oxidation of formate to the reduction of insoluble extracellular electron acceptors such as Fe(III) or Mn(IV) oxides [6,7]. However, extracellular electron transfer (EET) activity and the molecular mechanism of this coupling reaction have remained unexplored to date. Further, these tests with K. stuttgartiensis and Scalindua could not discriminate between Fe(III) oxide reduction for nutritional acquisition (i.e., via siderophores) and respiration through EET [8]. Therefore, these preliminary experiments are not conclusive as to whether anammox bacteria have EET capability. Although preliminary work showed that K. stuttgartiensis could not reduce Mn(IV) or Fe(III) with NH4+ as electron donor [6], the possibility that anammox bacteria oxidize NH4+ coupled to EET to other types of insoluble extracellular electron acceptors cannot be ruled out. In fact, EET (and the set of genes involved in EET) does not apply uniformly to all insoluble extracellular electron acceptors; some electroactive bacteria are not able to transfer electrons to carbon-based insoluble extracellular electron acceptors such as electrodes in bioelectrochemical systems but can reduce metal oxides, and vice versa [9].
It has been known for more than two decades that carbon-based high-molecular-weight organic materials, which are ubiquitous in terrestrial and aquatic environments and are not involved in microbial metabolism (i.e., humic substances), can be used as external electron acceptors for the anaerobic oxidation of compounds [10]. It has also been reported that anaerobic NH4+ oxidation linked to the microbial reduction of natural organic matter fuels nitrogen loss in marine sediments [11]. A literature survey of more than 100 EET-capable species indicated that there are many ecological niches for microorganisms able to perform EET [12]. This resonates with a recent finding that Listeria monocytogenes, a host-associated pathogen and fermentative gram-positive bacterium, was able to respire through a flavin-based EET process and behaved as an electrochemically active microorganism (i.e., able to transfer electrons from an oxidized fuel (substrate) to a working electrode via an EET process) [13]. Further, it has been reported that anammox bacteria seem to have homologs of the Geobacter and Shewanella multi-heme cytochromes that are responsible for EET [14]. These observations stimulated us to investigate whether anammox bacteria can couple NH4+ oxidation with EET to carbon-based insoluble extracellular electron acceptors and can behave as electrochemically active bacteria.

Here we report that, in the absence of NO2−, phylogenetically distant anammox bacteria couple the anaerobic oxidation of NH4+ with the transfer of electrons to carbon-based insoluble extracellular electron acceptors such as graphene oxide (GO) or electrodes poised at a set potential in microbial electrolysis cells (MECs). Our results also revealed that anammox bacteria oxidized NH4+ to N2 with NH2OH as an intermediate of the process. Interestingly, the electrons released from NH4+ oxidation were transferred to the extracellular electron acceptor via a pathway that is analogous to the ones present in metal-reducing organisms such as Geobacter spp. and Shewanella spp. Taken together, our results reveal the potential of anammox bacteria to use solid-state electron acceptors as the terminal electron sink and demonstrate that there is no need for NO2−, NO3− or partial nitritation for anaerobic NH4+ oxidation.

Results and discussion

Ammonium oxidation coupled with EET. To evaluate whether anammox bacteria possess EET capability, we first tested whether enriched cultures of three phylogenetically and physiologically distant anammox species can couple the oxidation of NH4+ with the reduction of an insoluble extracellular electron acceptor. Cultures of Ca. Brocadia (predominantly adapted to freshwater environments) and Ca. Scalindua (predominantly adapted to marine environments) were enriched and grown as planktonic cells in membrane bioreactors (Supplementary Fig. 1a) [15]. Fluorescence in situ hybridization (FISH) showed that the anammox bacteria constituted >95% of the bioreactor community (Supplementary Fig. 1b-g). In addition, a previously enriched K. stuttgartiensis (predominantly adapted to freshwater environments) culture was used [4]. The anammox cells were incubated anoxically for 216 h in the presence of 15NH4+ (4 mM) and GO as a proxy for an insoluble extracellular electron acceptor. No NO2− or NO3− was added to the incubations. GO particles are bigger than bacterial cells and cannot be internalized; thus GO can only be reduced by EET [16].
Indeed, GO was reduced by the anammox bacteria, as shown by the formation of suspended reduced GO (rGO), which is black in color and insoluble (Fig. 1a) [16]. In contrast, abiotic controls did not form insoluble black precipitates. Reduction of GO to rGO by anammox bacteria was further confirmed by Raman spectroscopy, where the formation of the characteristic 2D and D + D′ peaks of rGO [17] was detected in the vials with anammox cells (Fig. 1b), whereas no peaks were detected in the abiotic control. Further, isotope analysis of the produced N2 gas showed that the anammox cells were capable of 30N2 formation (Fig. 1c). In contrast, 29N2 production was not significant in any of the tested anammox species or controls, suggesting that unlabeled NO2− or NO3− was not involved. The production of 30N2 indicated that the anammox cultures use a different mechanism for NH4+ oxidation in the presence of an insoluble extracellular electron acceptor (explained further below). Gas production was not observed in the abiotic control (Fig. 1c). To determine whether anammox bacteria were still dominant after incubation with GO, we extracted and sequenced total DNA from the Brocadia and Scalindua vials at the end of the experiment. Differential coverage showed that the metagenomes were dominated by anammox bacteria (Supplementary Fig. 2a, c). Taken together, these results support that anammox bacteria have EET capability.
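The isotope logic behind these labeling results can be made concrete: classic anammox pairs one N atom from NH4+ with one from NO2−, so 15NH4+ plus 14NO2− yields 29N2, whereas a pathway drawing both N atoms from the same 15NH4+ pool yields 30N2. For pairing within a single pool with 15N atom fraction f, the isotopologue fractions are binomial. A minimal sketch (the labeling fractions are hypothetical):

```python
# Sketch of the isotope-pairing expectations used to interpret the N2 data.

def n2_isotopologues_same_pool(f15: float):
    """Expected (28N2, 29N2, 30N2) fractions when both N atoms of an N2
    molecule are drawn from one pool with 15N atom fraction f15."""
    f14 = 1.0 - f15
    return f14 ** 2, 2.0 * f14 * f15, f15 ** 2

def n2_from_two_pools(f15_nh4: float, f15_no2: float):
    """Expected fractions when one N atom comes from the NH4+ pool and one
    from the NO2- pool, as in classic anammox pairing."""
    a, b = f15_nh4, f15_no2
    return (1 - a) * (1 - b), a * (1 - b) + (1 - a) * b, a * b

# EET-dependent case: both atoms from a highly 15N-labeled NH4+ pool -> mostly 30N2
print(n2_isotopologues_same_pool(0.99))
# Classic anammox: 15NH4+ paired with 14NO2- -> mostly 29N2
print(n2_from_two_pools(0.99, 0.0))
```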
Electroactivity of anammox bacteria. Electrochemical techniques provide a powerful tool to evaluate EET, with electrodes substituting for insoluble minerals as the terminal electron acceptor [13]. Compared with metal oxides, the use of electrodes as the terminal electron acceptor allows us to quantify the number of externalized electrons per mole of NH4+ oxidized. Also, since the electrode is used only for bacterial respiration, EET activity can be assessed better than with metal oxides, for which reduction for nutritional acquisition cannot be distinguished from respiration through EET. Therefore, we tested whether anammox bacteria interact with electrodes via EET and use them as the sole electron acceptor in MECs. One single-chamber MEC operated at eight different set potentials (from −0.1 to 0.6 V vs. the standard hydrogen electrode (SHE)) using multiple working electrodes (Supplementary Fig. 1h) was initially operated under abiotic conditions with the addition of NH4+ only. No current generation or NH4+ removal was observed in any of the abiotic controls. Subsequently, the Ca. Brocadia culture was inoculated into the MEC, which was operated under optimal conditions for anammox (i.e., addition of NH4+ and NO2−). Under this scenario, NH4+ and NO2− were completely removed from the medium without any current generation (Fig. 2a). Stoichiometric ratios of consumed NO2− to consumed NH4+ (ΔNO2−/ΔNH4+) and produced NO3− to consumed NH4+ (ΔNO3−/ΔNH4+) were in the ranges of 1.0-1.3 and 0.12-0.18, respectively, which are close to the theoretical ratios of the anammox reaction [18]. These ratios indicated that anammox bacteria were responsible for NH4+ removal in the MEC. Subsequently, NO2− was gradually decreased to 0 mM, leaving the electrodes as the sole electron acceptor. When the exogenous electron acceptor (i.e., NO2−) was completely removed from the feed, the anammox cells began to form a biofilm on the working electrodes (Supplementary Fig. 1i), and current generation coupled to NH4+ oxidation was observed in the absence of NO2− (Fig. 2a). Further, NO2− and NO3− were below the detection limit at all time points when the working electrode was used as the sole electron acceptor, suggesting that NO2− and NO3− did not play an apparent role in the process. The magnitude of the current generation was proportional to the NH4+ concentration (Fig. 2a), and the maximum current density was observed at the set potential of 0.6 V vs. SHE. There was no visible biofilm growth or current generation at set potentials ≤0.2 V vs. SHE.

To confirm that the electrode-dependent anaerobic oxidation of NH4+ was catalyzed by anammox bacteria, additional control experiments were conducted in chronological order in the MEC. The presence of allylthiourea (ATU), a compound that selectively inhibits aerobic NH3 oxidation by ammonia monooxygenase (AMO) in ammonia-oxidizing bacteria (AOB), ammonia-oxidizing archaea (AOA) and comammox organisms [19], did not have an inhibitory effect on NH4+ removal or current generation (Fig. 2a). NH4+ was not oxidized when the MEC was operated in open-circuit voltage (OCV) mode (i.e., with the electrode not used as an electron acceptor) (Fig. 2b), strongly suggesting electrode-dependent NH4+ oxidation and indicating that trace amounts of O2, if present, were not responsible for NH4+ oxidation. Addition of NO2− resulted in an immediate drop in current density with simultaneous removal of NH4+ and NO2− and formation of NO3−, in the expected stoichiometry [18] (Fig. 2c). Repeated addition of NO2− resulted in the complete abolishment of current generation, indicating that anammox bacteria were solely responsible for current production in the absence of an exogenous electron acceptor. Omission of NH4+ from the feed resulted in no current generation, and current immediately resumed when NH4+ was added back to the feed (Fig. 2d), further supporting the role of anammox bacteria in current generation. These results also indicate that current generation was not catalyzed by electrochemically active heterotrophs, which might utilize organic carbon generated from endogenous decay processes. Autoclaving the MEC immediately stopped current generation and NH4+ removal (Fig. 2d), indicating that current generation was due to a biotic reaction. Similar results were obtained with MECs operated with Ca. Scalindua or K. stuttgartiensis cultures (Supplementary Fig. 3a, b), suggesting that they are also electrochemically active and can oxidize NH4+ using working electrodes as the electron acceptor. Taken together, these results provide strong evidence for electrode-dependent anaerobic oxidation of NH4+ by phylogenetically distant anammox bacteria.

Cyclic voltammetry (CV) was used to correlate current density with biofilm age in the developed biofilms at different time intervals, and to probe cell-free filtrates (filtered reactor solution). The anodes exhibited similar redox peaks with midpoint potentials (E1/2) of −0.01 ± 0.05 V vs. SHE for all three anammox species (Fig. 2e and Supplementary Fig. 3c, d). The midpoint potentials obtained in our CV analyses were in the redox windows of cytochromes involved in external electron transport in Shewanella spp., such as CymA and MtrC [20]. In contrast, our results differ from a previous study that reported the complete anoxic conversion of NH4+ to N2 at oxidative potentials of 0.73 ± 0.06 V vs. SHE in a nitrifying bioelectrochemical system [21]. This difference in redox potentials suggests different pathways of anoxic NH4+ oxidation.
No redox peaks were observed for the cell-free solution, indicating that soluble mediators are not involved in EET. Also, the addition of exogenous riboflavin, a common soluble mediator involved in flavin-based EET in gram-positive and gram-negative bacteria [13,22], did not invoke changes in current density. Thus, the CV analysis corroborated that the electrode biofilms were responsible for current generation through a direct EET mechanism. The moles of electrons transferred to the electrode per mole of NH4+ oxidized to N2 (Supplementary Table 1) were stoichiometrically close to Eq. 1 (i.e., complete oxidation of NH4+ to N2 releasing three electrons per NH4+). Also, electron balance calculations showed that the coulombic efficiency (CE) was 87.8 ± 3.2% for all NH4+ concentrations and anammox cultures tested in the experiments with electrodes as the sole electron acceptor (Supplementary Table 1). To determine whether the cathodic reaction (i.e., the hydrogen evolution reaction) affects electrode-dependent anaerobic NH4+ oxidation, additional experiments with Ca. Brocadia were conducted by operating single- and double-chamber MECs in parallel (at 0.6 V vs. SHE applied potential). However, there was no significant difference in NH4+ oxidation and current production between the different reactor configurations (Supplementary Fig. 4), suggesting no influence of the cathodic reaction (i.e., H2 recycling) on the process. This was further supported by electron balance and CE calculations (Supplementary Table 1). In addition, NH4+ oxidation and current production were not affected by the addition of penicillin G (Supplementary Fig. 4), a compound that has inhibitory effects on some heterotrophs but no observable short-term effects on anammox activity [23,24]. Similar results were obtained with Ca. Scalindua and K. stuttgartiensis (data not shown). As noted above, one limitation of penicillin G is that it does not arrest the activity of all heterotrophs. Despite this limitation, the role of heterotrophs in current production was excluded because of the other experimental controls conducted in this study. Since no exogenous organic carbon was added to the MEC reactors, the only source of organics for heterotrophic organisms was endogenous decay. However, there was no current generation in the absence of NH4+ (Fig. 2d), indicating a lack of involvement of heterotrophic electroactive bacteria. Scanning electron microscopy (SEM) confirmed biofilm formation on the electrode surfaces for the three tested anammox bacteria (Supplementary Fig. 5). The biofilm cell density of MECs inoculated with Ca. Brocadia was higher at 0.6 V vs. SHE (Supplementary Fig. 5e, f) than at other set potentials, and no biofilm was observed at set potentials ≤0.2 V vs. SHE (Supplementary Fig. 5a). These observations correlate well with the current profiles obtained at the different set potentials (Fig. 2a). Cell appendages between cells and the electrode were not observed. Cell appearance was very similar to reported SEM images of anammox cells [23]. FISH with anammox-specific probes (Fig. 2f) and metagenomics of DNA extracted from the biofilm on the working electrodes of MECs showed that anammox bacteria were the most abundant bacteria in the biofilm community (Supplementary Fig. 2b, d). Similarly, AOB were not detected, which further supports the lack of ATU inhibition on NH4+ removal and current generation.
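The coulombic efficiency reported above compares the charge collected at the anode with the charge equivalent of the NH4+ removed. A minimal sketch, assuming 3 mol of electrons per mol of NH4+ (complete oxidation to N2, as implied by Eq. 1) and a hypothetical current trace and removal amount:

```python
import numpy as np

# Sketch of the coulombic efficiency (CE) calculation described above:
# CE = charge collected at the anode / charge equivalent of the NH4+ removed,
# assuming 3 mol e- per mol NH4+ oxidized to N2. The current trace and the
# amount of NH4+ removed below are hypothetical.

F = 96485.0          # Faraday constant, C per mol e-
N_ELECTRONS = 3      # mol e- per mol NH4+ oxidized to N2 (assumed from Eq. 1)

def coulombic_efficiency(t_s, i_a, delta_nh4_mol):
    """CE from a current trace (times in s, currents in A) and NH4+ removed (mol)."""
    charge_collected = np.trapz(i_a, t_s)          # integrate current over time
    charge_theoretical = N_ELECTRONS * F * delta_nh4_mol
    return charge_collected / charge_theoretical

# Example: a 48 h batch at a roughly constant 0.5 mA with 3.4e-4 mol NH4+ removed
t = np.linspace(0, 48 * 3600, 500)
i = np.full_like(t, 0.5e-3)
print(f"CE = {coulombic_efficiency(t, i, 3.4e-4):.1%}")
```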
By differential coverage and sequence composition-based binning [25], it was possible to extract high-quality genomes of Brocadia and Scalindua species from the electrodes (Supplementary Fig. 2b, d). Based on the differences in genome content, an average amino acid identity (AAI) ≤95% compared with anammox genomes reported to date, and the evolutionary divergence in the phylogenomics analysis (Supplementary Fig. 6), we propose a tentative name for the Ca. Brocadia present in our MECs: Candidatus Brocadia electricigens (etymology: L. adj. electricigens, electricity generator).

Molecular mechanism of EET-dependent anammox process. After confirming through bioelectrochemical analyses that anammox bacteria are electrochemically active, isotope labeling experiments were carried out to better understand how NH4+ is converted to N2 by anammox bacteria in the EET-dependent anammox process. Complete oxidation of NH4+ to N2 was demonstrated by incubating the MECs with 15NH4+ (4 mM) and 14NO2− (1 mM). Consistent with expected anammox activity, the anammox bacteria first consumed the 14NO2−, resulting in the accumulation of 29N2 in the headspace of the MECs. Interestingly, after depletion of the available 14NO2−, a steady increase of 30N2 was observed, with slower activity rates compared with the typical anammox process (Fig. 3a, Supplementary Table 2). These results confirm the GO experiments, where 30N2 was detected when the three anammox species were incubated with 15NH4+ (Fig. 1c). Gas production was not observed in the abiotic control incubations.

In the current model of the anammox reaction (Eq. 2) [4], NH4+ is converted to N2 with NO2− as the terminal electron acceptor: first, NO2− is reduced to nitric oxide (NO, Eq. 3), which is subsequently condensed with ammonia (NH3) to produce hydrazine (N2H4, Eq. 4), which is finally oxidized to N2 (Eq. 5):

NH4+ + NO2− → N2 + 2H2O (Eq. 2)
NO2− + 2H+ + e− → NO + H2O (Eq. 3)
NO + NH4+ + 2H+ + 3e− → N2H4 + H2O (Eq. 4)
N2H4 → N2 + 4H+ + 4e− (Eq. 5)

The four low-potential electrons released during N2H4 oxidation fuel the reduction reactions (Eqs. 3 and 4) and are proposed to build up the membrane potential and establish a proton-motive force across the anammoxosome membrane, driving ATP synthesis. In the MEC experiments with Ca. Brocadia using multiple working electrodes as sole electron acceptors, we observed the production of NH2OH followed by a transient accumulation of N2H4 (Supplementary Fig. 7). No inhibitory effect was observed in incubations with 2-phenyl-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (PTIO) (Supplementary Fig. 8), a NO scavenger [4]. Therefore, we hypothesized that NH2OH, and not NO, is an intermediate of the electrode-dependent anammox process. To investigate whether NH2OH could be produced directly from NH4+ in the electrode-dependent anammox process, MECs were incubated with 15NH4+ (4 mM) and 14NH2OH (2 mM). The isotopic composition of the reactors revealed that the unlabeled 14NH2OH was used as a pool substrate, and we detected newly synthesized 15NH2OH from 15NH4+ oxidation (Fig. 3b). Similarly, a previous study showed that NH2OH was the major intermediate of anoxic NH4+ oxidation performed by electroactive nitrifying microorganisms [21]. Even though Vilajeliu et al. [21] observed the same intermediate, the differences in community composition and midpoint redox potentials suggest different pathways of microbially driven anoxic NH4+ oxidation to NH2OH. It is known that NO and NH2OH, the known intermediates of the anammox process, are strong competitive inhibitors of the N2H4 oxidation activity of the hydrazine dehydrogenase (HDH) [26].
Fig. 3: Under these conditions, anammox bacteria will consume first the preferred electron acceptor (i.e., 14NO2−) and form 29N2, and then the remaining 15NH4+ will be oxidized to the final product (30N2) through the electrode-dependent anammox process. NO and N2O were not detected throughout the experiment. Results from triplicate MEC reactors are presented as mean ± SD. b Determination of NH2OH as the intermediate of the electrode-dependent anammox process. The MECs with mature biofilm on the working electrodes operated at 0.6 V vs. SHE were fed with 4 mM 15NH4+ and 2 mM 14NH2OH. Under these conditions, anammox bacteria would preferentially consume the unlabeled pool of hydroxylamine (i.e., 14NH2OH), leading to the accumulation of 15NH2OH due to the oxidation of 15NH4+. Samples were derivatized using acetone, and isotopic ratios were determined by gas chromatography mass spectrometry (GC/MS). Results from triplicate MEC reactors are presented as mean ± SD. c Ion mass chromatograms of hydroxylamine derivatization with acetone. The MECs with mature biofilm (Ca. Brocadia) on the working electrodes operated at 0.6 V vs. standard hydrogen electrode (SHE) were fed with 4 mM 15NH4+ and 10% deuterium oxide (D2O). The mass to charge (m/z) of 73, 74, and 75 corresponds to the derivatization products of 14NH2OH, 15NH2OH, and 15NH2OD, respectively, with acetone, as determined by GC/MS. Twenty microliters of 14NH2OH and 15NH2OH were used as standards. The 73 m/z peak (top) at a retention time of 8.6 min arises from the acetone used for derivatization. The accumulation of 75 m/z (bottom) over the course of the experiment indicates that the oxygen used in the anaerobic oxidation of ammonium originates from OH− of the water molecule.

However, the oxidation of N2H4 (Supplementary Fig. 7) and the detection of 30N2 (Fig. 3a) in our experiments suggest that, even though there might be some inhibition caused by the NH2OH, the HDH is still active. Also, comparative transcriptomics analysis of the electrode biofilm revealed that the HDH was one of the most upregulated genes when the electrode was used as the electron acceptor instead of NO2− (Supplementary discussion). Incubations with 15NH4+ (4 mM) in 10% deuterium oxide (D2O) showed accumulation of 15NH2OD, which suggests that, in order to oxidize NH4+ to NH2OH, the different anammox bacteria use OH− ions generated from water (Fig. 3c). Abiotic incubations did not show any production of NH2OH or NH2OD. Based on these results we propose the following reactions for the EET-dependent anammox process:

NH4+ + OH− → NH2OH + 2H+ + 2e−   (6)

NH2OH + NH4+ → N2H4 + H2O + H+   (7)

N2H4 → N2 + 4H+ + 4e−   (8)

The complete NH4+ oxidation to N2 coupled with reproducible current production can only be explained by electron transfer from the anammoxosome compartment (the energetic center of anammox cells, where NH4+ is oxidized) to the electrode. In order to identify the possible pathways involved in NH4+ oxidation and electron flow through compartments (anammoxosome) and membranes (cytoplasm and periplasm) in the EET-dependent anammox process (electrode poised at 0.6 V vs. SHE as the electron acceptor) vs. the typical anammox process (i.e., nitrite used as the electron acceptor), we conducted a genome-centric comparative transcriptomics analysis (Supplementary discussion).
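A quick programmatic element- and charge-balance check is a useful sanity test for half-reactions such as Eqs. (6)-(8); a minimal sketch:

```python
from collections import Counter

# Each species: (element counts, charge). Electrons carry charge -1, no atoms.
SPECIES = {
    "NH4+":  (Counter(N=1, H=4), +1),
    "OH-":   (Counter(O=1, H=1), -1),
    "NH2OH": (Counter(N=1, H=3, O=1), 0),
    "N2H4":  (Counter(N=2, H=4), 0),
    "H2O":   (Counter(H=2, O=1), 0),
    "H+":    (Counter(H=1), +1),
    "N2":    (Counter(N=2), 0),
    "e-":    (Counter(), -1),
}

def balanced(lhs, rhs):
    """lhs/rhs: dicts mapping species -> stoichiometric coefficient."""
    def totals(side):
        atoms, charge = Counter(), 0
        for sp, coef in side.items():
            a, q = SPECIES[sp]
            atoms += Counter({el: n * coef for el, n in a.items()})
            charge += q * coef
        return atoms, charge
    return totals(lhs) == totals(rhs)

# Eq. (6): NH4+ + OH- -> NH2OH + 2H+ + 2e-
print(balanced({"NH4+": 1, "OH-": 1}, {"NH2OH": 1, "H+": 2, "e-": 2}))
# Eq. (8): N2H4 -> N2 + 4H+ + 4e-
print(balanced({"N2H4": 1}, {"N2": 1, "H+": 4, "e-": 4}))
```

Summing Eqs. (6)-(8) gives the overall 2NH4+ + OH− → N2 + H2O + 7H+ + 6e−, i.e., three electrons per NH4+, consistent with the theoretical value used in the CE calculations below.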
Even though comparative transcriptomics analysis is a useful approach for exploring and detecting genes not previously known to play a role in adaptive responses to environmental changes, the levels of mRNA are not directly proportional to the expression levels of the proteins they encode, and therefore it is difficult to predict protein function and activity from quantitative transcriptome data. Accordingly, the results presented below from the comparative transcriptomics analysis should be read as hypothetical. In the anammoxosome compartment, the genes encoding the ammonium transporter (AmtB), hydroxylamine oxidoreductase, and HDH were the most upregulated in response to the electrode as the electron acceptor (Supplementary Table 8). This observation agrees with the NH4+ removal and oxidation to N2 observed in the MECs and isotope labeling experiments (Figs. 2a and 3a). The genes encoding the NO and NO2− reductases (nir genes) and their redox couples were significantly downregulated when the electrode was used as the electron acceptor (Supplementary Table 8). This is expected, as NO2− was not added in the electrode-dependent anammox process. It also supports the hypothesis that NO is not an intermediate of the electrode-dependent anammox process and explains why PTIO had no effect when NO2− was replaced by the electrode as the electron acceptor (Supplementary Fig. 8). Isotope labeling experiments revealed that NH2OH was the intermediate in the EET-dependent anammox process, and NO was not detected throughout the experiment (Fig. 3b), suggesting that the production of NH2OH was not through NO reduction. This was further supported by the observation that the electron transfer module (ETM) and its redox partner, whose function is to provide electrons to the hydrazine synthase (HZS) for NO reduction to NH2OH, were downregulated (Supplementary Table 6). Interestingly, our analysis revealed that the electrons released from the N2H4 oxidation (Eq. 8) are transferred to the electrode via an EET pathway that is analogous to the ones present in metal-reducing organisms such as Geobacter spp. and Shewanella spp. (Supplementary Fig. 10, Supplementary discussion). Highly expressed cytoplasmic electron carriers such as NADH and ferredoxins can be oxidized at the cytoplasmic membrane by the NADH dehydrogenase (NADH-DH) and/or formate dehydrogenase to directly reduce the menaquinone pool inside the cytoplasmic membrane (Supplementary Table 3). An upregulated protein similar to CymA (a tetraheme c-type cytochrome) in Shewanella would then oxidize the reduced menaquinones, delivering electrons to highly upregulated periplasmic cytochrome shuttles and to a porin-cytochrome complex that spans the outer membrane (Supplementary Fig. 10, Supplementary Table 3). From this complex, electrons could be directly accepted by the insoluble extracellular electron acceptor. Taken together, the results from the comparative transcriptomics analysis suggest an alternative pathway for NH4+ oxidation coupled to EET when the working electrode is used as the electron acceptor instead of NO2−. In conclusion, our study provides the first experimental evidence that phylogenetically and physiologically distant anammox bacteria have EET capability and can couple the oxidation of NH4+ with the transfer of electrons to carbon-based insoluble extracellular electron acceptors.
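Up- and downregulation calls like those described above ultimately reduce to fold-change comparisons between the two conditions. A toy illustration with hypothetical normalized counts for a few of the genes discussed (gene names and numbers are placeholders; a real analysis would use DESeq2 with replicate-aware statistics, see 'Comparative transcriptomics analysis' below):

```python
import numpy as np
import pandas as pd

# Hypothetical normalized counts (mean of replicates) under nitrite-fed vs
# electrode-fed (0.6 V vs. SHE) conditions.
counts = pd.DataFrame(
    {"nitrite":   [850, 120, 4000, 300],
     "electrode": [95, 1100, 450, 2600]},
    index=["nirS", "hdh", "etm", "amtB"],
)

pseudo = 1.0  # pseudocount to avoid division by / log of zero
log2fc = np.log2((counts["electrode"] + pseudo) / (counts["nitrite"] + pseudo))
calls = log2fc.apply(lambda x: "up" if x > 1 else ("down" if x < -1 else "ns"))
print(pd.DataFrame({"log2FC": log2fc.round(2), "call": calls}))
```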
The prevalence of EET-based respiration has been demonstrated using bioelectrochemical systems for both gram-positive and gram-negative bacteria 13,27. However, compared with the reported EET-capable bacteria, anammox bacteria have to overcome an additional electron transfer barrier to externalize electrons: the anammoxosome compartment. Electrochemically active bacteria are typically found in environments devoid of oxygen or other soluble electron acceptors 27. Our results show a novel process of anaerobic ammonium oxidation coupled to EET-based respiration of a carbon-based insoluble extracellular electron acceptor by both freshwater and marine anammox bacteria, and suggest that this process may also occur in natural anoxic environments where soluble electron acceptors are not available. In environments such as anoxic sediments, microbial metabolism is limited by the diffusive supply of electron acceptors 28. Nitrogen loss by anammox and denitrification is expected to be limited by the diffusive flux of NO2− or NO3−, and/or by the diffusion of O2 that can be used by aerobic NH4+ oxidizers. However, 30N2 production has been observed in 15NH4+ incubations of sediments at depths below the penetration depth of NO2−, NO3−, and O2 in marine 11,28-30 and freshwater 31 environments. These observations cannot be explained by the conventional denitrification or anammox processes. Interestingly, this phenomenon was observed in sediments rich in metal oxides or natural organic matter such as humic substances 11,31. EET-dependent anaerobic ammonium oxidation may play a role in such environments devoid of soluble electron acceptors such as NO2−, NO3−, and O2. In natural environments, the NH4+ used by anammox bacteria is derived from heterotrophic pathways of degradation of organic matter 32, and therefore anammox cannot operate independently from the mineralization of organic matter 33. Ubiquitous high-molecular-weight organic compounds such as humic substances are known to act as terminal electron acceptors in anaerobic microbial respiration 10,34. Also, humic substances may serve as redox mediators between electron donors and poorly soluble metal oxide minerals in soils and sediments, mediating dissimilatory metal oxide reduction 34. Therefore, humic substances present in natural organic matter may be good candidate carbon-based electron acceptors for EET-based anaerobic ammonium oxidation in natural environments, or may act as redox mediators between anammox bacteria and iron oxides. These results offer a new perspective on a key player involved in the biogeochemical nitrogen cycle, which previously was believed to rely strictly on soluble electron acceptors for NH4+ oxidation. The fact that anammox bacteria can perform NH4+ oxidation coupled with EET suggests that this process may have implications in the global nitrogen cycle by contributing to nitrogen loss in environments where soluble electron acceptors are unavailable. Therefore, a better understanding of EET processes contributes to our understanding of the cycles that occur on our planet 27. Also, compared with the conventional anammox process, the EET-dependent anammox process achieved complete removal of NH4+ (at low and high concentrations) to nitrogen gas with no accumulation of NO2− or NO3− or production of the greenhouse gas N2O. In the conventional anammox process (i.e., when NO2− is used as the electron acceptor), NO3− is generated as a result of the oxidation of NO2− by anammox bacteria.
Consequently, the effluent from the conventional anammox process for wastewater treatment requires further polishing by nitrate-reducing organisms before discharge. In contrast, as shown by our results, since NO2− was not added in the EET-dependent anammox process, no production of NO3− was detected. Given that the EET-dependent anammox process can occur at very low applied potentials (0.3-0.6 V vs. SHE), the process can be powered by renewable energy sources such as wind or solar 35. Also, the energy released from NH4+ oxidation can be captured in the form of hydrogen gas. These findings have important implications for energy-efficient treatment of N-rich wastewater using bioelectrochemical systems. Future studies should focus on evaluating the EET-dependent process using anode materials with more conductive surface area, such as porous micro-channeled electrodes, in order to improve the NH4+ removal rates.

Materials and methods

Enrichment and cultivation of anammox bacteria. Biomass from upflow column reactors (XK 50/60 Column, GE Healthcare, UK) with Ca. Brocadia and Ca. Scalindua was harvested and used as inoculum. Ca. Brocadia and Ca. Scalindua planktonic cells were enriched in two bioreactors (BioFlo® 115, New Brunswick, USA) equipped with a microfiltration (average pore size 0.1 µm) hollow fiber membrane module (Zena-membrane, Czech Republic) (Supplementary Fig. 1a). Operating conditions of the membrane bioreactors (MBRs) were described previously 15. The MBRs were operated at pH 7.5-8.0 and at 35 ± 1 °C for Brocadia and at room temperature (20-25 °C) for Scalindua. The culture liquid in the MBRs was continuously mixed with a metal propeller at a stirring speed of 150 rpm and purged with 95% Ar-5% CO2 at a flow rate of 10 mL min−1 to maintain anaerobic conditions. The inorganic synthetic medium was fed continuously to the reactors at a rate of ~5 L d−1 and the hydraulic retention time was maintained at one day. The synthetic medium was prepared by adding the following constituents: NH4+ (2.5-10 mM), NO2− (2.5-12 mM), CaCl2 (100 mg L−1), MgSO4 (300 mg L−1), KH2PO4 (30 mg L−1), KHCO3 (500 mg L−1), and trace element solutions 36. In the case of the Ca. Scalindua culture, the synthetic medium was prepared using non-sterilized Red Sea water. Samples for microbial community characterization were taken from the MBRs for FISH and metagenomics analysis (see the 'Fluorescence in situ hybridization (FISH)' and 'Metagenomics sequencing and analysis' sections below). A previously enriched K. stuttgartiensis culture was also used for the experiments 4.

Incubation of anammox bacteria with NH4+ and graphene oxide. To test whether anammox bacteria have EET capability, the three enriched anammox cultures were incubated in serum vials for 216 h with 15NH4+ and graphene oxide (GO) as a proxy for an insoluble extracellular electron acceptor. Standard anaerobic techniques were employed in the batch incubation experiments. All the procedures were performed in the anaerobic chamber (Coy Laboratory Products; Grass Lake Charter Township, MI, USA). Anoxic buffers and solutions were prepared by repeatedly vacuuming and purging with helium gas (>99.99%) before the experiments. Biomass from the MBRs was centrifuged, washed twice, and suspended in inorganic medium containing 2 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES, pH 7.8) prior to inoculation into the vials. The same composition of the inorganic medium used in the MBRs was supplied to the vials.
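As a consistency check on the feeding regime described above, a ~1 day hydraulic retention time at the stated ~5 L d−1 feed rate implies a working volume of about 5 L; the 5 L working volume is an assumption for illustration, since only the feed rate and HRT are stated in the text:

```python
# Back-of-the-envelope check of the MBR feeding regime.
medium_mg_per_L = {  # fixed constituents of the synthetic medium
    "CaCl2": 100, "MgSO4": 300, "KH2PO4": 30, "KHCO3": 500,
}

def hydraulic_retention_time(volume_L, feed_L_per_day):
    return volume_L / feed_L_per_day  # days

print(hydraulic_retention_time(5.0, 5.0))  # -> 1.0 day, as stated

# Daily mass of each fixed constituent delivered by the feed:
for salt, conc in medium_mg_per_L.items():
    print(salt, conc * 5.0, "mg per day")
```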
The cell suspension was dispensed into 100 mL glass serum vials, which were sealed with butyl rubber stoppers and aluminum caps. The biomass concentration in the vials ranged from 0.1 to 0.9 mg-protein mL−1. The headspace of the serum vials was replaced by repeatedly vacuuming and purging with pure (>99.99%) helium gas. Positive pressure (50-75 kPa) was applied to the headspace to prevent unintentional contamination with ambient air during the incubation and gas sampling. Prior to the addition of 15NH4+, the vials were pre-incubated overnight at room temperature (~25 °C) to remove any trace amounts of substrates and oxygen. The activity test was initiated by adding 4 mM of 15NH4Cl (Cambridge Isotope Laboratories) and GO to a final concentration of 200 mg L−1 using a gas-tight syringe (VICI; Baton Rouge, LA, USA). No NO2− or NO3− was added to the incubations. The vials were incubated in triplicate at 30 °C for the Ca. Brocadia and K. stuttgartiensis cultures and at room temperature (~25 °C) for the vials with Ca. Scalindua. Vials without biomass were also prepared as abiotic controls. The concentrations of 28N2, 29N2, and 30N2 gas were determined by gas chromatography mass spectrometry (GC/MS) analysis 37. Fifty microliters of headspace gas was collected using a gas-tight syringe (VICI; Baton Rouge, LA, USA) and immediately injected into a GC (Agilent 7890A system equipped with a CP-7348 PoraBond Q column) combined with a 5975C quadrupole inert MS (Agilent Technologies; Santa Clara, CA, USA), and mass to charge (m/z) = 28, 29, and 30 was monitored. A standard calibration curve of N2 gas was prepared with 30N2 standard gas (>98% purity) (Cambridge Isotope Laboratories; Tewksbury, MA, USA). At the end of the batch incubations, DNA was extracted and sequenced for metagenomics analysis (see the 'Metagenomics sequencing and analysis' section below). To confirm the reduction of the GO, the samples were centrifuged and subjected to dehydration with absolute ethanol. Samples were kept in a desiccator until Raman spectroscopy analysis. Raman spectroscopy (StellarNet Inc.) was performed with the following settings: laser 473 nm, acquisition time 20 s, 5 accumulations, and a 50× objective.

Bioelectrochemical analyses. To evaluate whether anammox bacteria (Ca. Brocadia and Ca. Scalindua) are electrochemically active, single-chamber multiple-working-electrode glass reactors with a 500 mL working volume were operated in microbial electrolysis cell (MEC) mode. The working electrodes (anodes) were graphite rods of 8 cm length (7.5 cm inside the reactor) and 0.5 cm diameter. A platinum mesh was used as the counter electrode (cathode) and Ag/AgCl as the reference electrode (Bioanalytical Systems, Inc.). A schematic representation of the multiple-working-electrode MEC is presented in Supplementary Fig. 1h. The multiple working electrodes were operated at set potentials of −0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 V vs. SHE. The amperometric current was monitored continuously using a VMP3 potentiostat (BioLogic Science Instruments, USA), with measurements every 60 s, and analyzed using EC-Lab V 10.02 software. To evaluate whether K. stuttgartiensis is electrochemically active, experiments were conducted in single-chamber MECs (300 mL working volume) with a carbon cloth working electrode (0.6 V vs. SHE). The reactors and the working and counter electrodes were sterilized by autoclaving prior to the start of the experiments.
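Converting the m/z = 28, 29, 30 peak areas into headspace amounts follows directly from the 30N2 calibration. A minimal sketch with hypothetical peak areas and a one-point calibration, assuming equal MS response for the three isotopologues (real work would use the multi-point calibration curve described above):

```python
# Hypothetical peak areas from a 50 uL headspace injection, and a one-point
# 30N2 calibration factor (area per nmol) from the standard gas.
area = {"28N2": 1.2e6, "29N2": 3.4e6, "30N2": 0.8e6}
AREA_PER_NMOL = 2.0e5  # from the 30N2 standard gas (>98% purity)

def headspace_nmol(peak_area, response=AREA_PER_NMOL):
    # Assumes the three isotopologues have the same MS response factor.
    return peak_area / response

for species, a in area.items():
    print(species, round(headspace_nmol(a), 2), "nmol per 50 uL injection")
```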
The reference electrodes were sterilized by soaking in 3 M NaCl overnight and rinsing with sterile medium. After the reactors were assembled, epoxy glue was used to seal every opening in the reactor to avoid leakage. Gas bags (0.1 L Cali-5-Bond; Calibrated Instruments, Inc.) were connected to the MECs to collect any gas generated. The gas composition in the gas bags was analyzed using a gas chromatograph (SRI 8610C gas chromatograph, SRI Instruments). The composition of the inorganic medium in the MECs was the same as that supplied to the MBRs (see the 'Enrichment and cultivation of anammox bacteria' section above), with variations in the NH4+ and/or NO2− concentration. After preparation, the inorganic medium was boiled, sparged with a N2:CO2 (80:20) gas mix for 30 min to remove any dissolved oxygen, and finally autoclaved. The autoclaved medium was cooled down to room temperature inside the anaerobic chamber (Coy Laboratory, USA). Prior to the experiments, KHCO3 was weighed in the anaerobic chamber and dissolved in the medium. The reactors were operated in fed-batch mode at 30 °C for the Ca. Brocadia and K. stuttgartiensis cultures and at room temperature (~25 °C) for Ca. Scalindua. The medium in the MECs was gently mixed with a magnetic stirrer throughout the course of the experiments. The pH of the MECs was not controlled but was at all times between 7.0 and 7.5. To exclude the effect of abiotic (i.e., non-Faradaic) current, initial operation of the reactors was done without any biomass addition. After biomass inoculation, the MECs were operated with set potentials and optimal conditions for the anammox reaction (i.e., addition of NH4+ and NO2−). Afterward, NO2− was gradually decreased to 0 mM, leaving the working electrodes as the sole electron acceptor. To confirm that the electrode-dependent anaerobic oxidation of NH4+ was catalyzed by anammox bacteria, additional control experiments were conducted in chronological order, including addition of allylthiourea (ATU), operation in open circuit voltage mode (i.e., anodes not connected to the potentiostat, so that the electrode is not used as an electron acceptor), addition of nitrite, operation without addition of NH4+ and then with addition of NH4+, and autoclaving. ATU was added to a final concentration of 100 μM to evaluate the contribution of nitrifiers to the process 19. Biomass from a nitrifying reactor was incubated in triplicate vials with 100 μM of ATU and used as a positive control for the inhibitory effect of ATU. Throughout the reactor operation, the concentrations of NH4+, NO2−, and NO3− were determined as described below (see the 'Analytical methods' section). All experiments were done in triplicate MECs, unless mentioned otherwise. CV at a scan rate of 1 mV s−1 was performed on the anodic biofilms at different time intervals following the initial inoculation to determine their redox behavior. Scans ranged from −0.6 to 0.6 V vs. SHE at pH 7.0 and 25 °C. Current was normalized to the geometric anode surface area. To determine the presence of extracellular redox mediators secreted by the anodic communities, CVs were performed with cell-free filtrates (filtered using a 0.2 μm pore diameter filter) collected from the reactors and placed in separate sterile electrochemical cells. Also, experiments were conducted to evaluate the effect of adding riboflavin, which is a common soluble mediator involved in EET in gram-positive and gram-negative bacteria 13,22. Riboflavin was added to the mature anammox biofilm to a final concentration of 250 nM 22.
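Two small conversions recur throughout these measurements: referencing the Ag/AgCl-controlled potentials to SHE, and normalizing current to the geometric anode area of the graphite rods. A sketch (the +0.197 V offset assumes a saturated-KCl Ag/AgCl reference at 25 °C, which is an assumption rather than a detail stated in the text):

```python
import math

E_AGCL_VS_SHE = 0.197  # V; Ag/AgCl (sat. KCl) offset at 25 C (assumed)

def to_she(e_vs_agcl):
    return e_vs_agcl + E_AGCL_VS_SHE

# Geometric area of one graphite-rod anode (7.5 cm immersed, 0.5 cm diameter):
d_cm, l_cm = 0.5, 7.5
area_cm2 = math.pi * d_cm * l_cm + math.pi * (d_cm / 2) ** 2  # side + end face

def current_density_uA_cm2(i_amps):
    return 1e6 * i_amps / area_cm2

print(round(to_she(0.403), 3))                 # -> 0.6 V vs. SHE setpoint
print(round(area_cm2, 2), "cm2")               # ~11.98 cm2
print(round(current_density_uA_cm2(5e-4), 1))  # 0.5 mA -> ~41.7 uA cm-2
```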
To test whether the cathodic reaction (i.e., the hydrogen evolution reaction) has an effect on electrode-dependent anaerobic ammonium oxidation, experiments were also conducted in double-chamber MECs (Supplementary Fig. 1k) with a single carbon cloth working electrode (0.6 V vs. SHE). The anode and cathode chambers in the double-chamber MECs were separated by a proton-exchange Nafion membrane. Also, to exclude the effect of heterotrophic activity on the current generation, 500 mg L−1 of penicillin G (Sigma-Aldrich, St. Louis, MO) 8 was added in the last batch cycle to inhibit heterotrophs. To determine the role of NO in the electrode-dependent anammox metabolism, single-chamber MECs were incubated with 4 mM NH4+ and 100 μM of 2-phenyl-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (PTIO), a NO scavenger. MECs with 4 mM NH4+ and without PTIO addition were run in parallel as the negative control. PTIO inhibits K. stuttgartiensis activity when NO is an intermediate of the anammox reaction 4; therefore, vials with K. stuttgartiensis were used as a positive control for the effect of PTIO. Liquid samples were taken every day, filtered using a 0.2 μm filter, and subjected to determination of the NH4+ concentration as described below (see the 'Analytical methods' section). For the isotopic and comparative transcriptomics experiments, single-chamber MECs (Adams & Chittenden Scientific Glass, USA) with a single carbon cloth working electrode (0.6 V vs. SHE) and a 300 mL working volume were used (Supplementary Fig. 1k).

15N tracer batch experiments in MECs. To elucidate the molecular mechanism of electrode-dependent anaerobic ammonium oxidation by different anammox bacteria, isotopic labeling experiments were conducted in single-chamber MECs operated at a set potential of 0.6 V vs. SHE. All batch incubation experiments were performed in triplicate MECs. MEC incubations without biomass were also prepared for the 15N tracer batch experiments to exclude any possibility of an abiotic reaction. Standard anaerobic techniques were employed in the batch incubation experiments. All the procedures were performed in the anaerobic chamber (Coy Laboratory Products; Grass Lake Charter Township, MI, USA). Anoxic buffers and solutions were prepared by repeatedly vacuuming and purging with helium gas (>99.99%) before the experiments. The purity of the 15N-labeled compounds was greater than 99%. The headspace of the MECs was replaced by repeatedly vacuuming and purging with pure (>99.99%) helium gas. Positive pressure (50-75 kPa) was applied to the headspace to prevent unintentional contamination with ambient air during the incubation and gas sampling. Oxidation of NH4+ to N2 was demonstrated by incubating the MECs with 15NH4Cl (4 mM, Cambridge Isotope Laboratories) and 14NO2− (1 mM); the concentrations of 28N2, 29N2, and 30N2 in the headspace were determined by GC/MS analysis 37. Fifty microliters of headspace gas was collected using a gas-tight syringe (VICI; Baton Rouge, LA, USA) and immediately injected into a GC (Agilent 7890A system equipped with a CP-7348 PoraBond Q column) combined with a 5975C quadrupole inert MS (Agilent Technologies; Santa Clara, CA, USA). A standard calibration curve of N2 gas was prepared with 30N2 standard gas (>98% purity) (Cambridge Isotope Laboratories; Tewksbury, MA, USA). To investigate whether hydroxylamine (NH2OH) could be produced directly from NH4+ in electrode-dependent anaerobic ammonium oxidation by anammox bacteria, single-chamber MECs were incubated with 15NH4Cl (4 mM, Cambridge Isotope Laboratories) and an unlabeled pool of 14NH2OH (2 mM) for 144 h.
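A statement such as "no significant difference between the reactor configurations" can be supported by a simple two-sample test on the triplicate removal rates; a sketch with hypothetical numbers (the statistical procedure actually used is not specified in this excerpt):

```python
from scipy import stats

# Hypothetical triplicate NH4+ removal rates (mM per day) for the single- vs
# double-chamber MECs; a Welch t-test is one way to support a
# "no significant difference" statement.
single = [0.92, 0.88, 0.95]
double = [0.90, 0.93, 0.87]

t, p = stats.ttest_ind(single, double, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")  # p >> 0.05 -> no significant difference
```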
Liquid samples were taken every day, filtered using a 0.2 μm filter, and subjected to determination of 15NH2OH and 14NH2OH. NH2OH was determined by GC/MS analysis after derivatization using acetone 38. Briefly, 100 µl of liquid sample was mixed with 4 µl of acetone, and 2 µl of the derivatized sample was injected into a GC (Agilent 7890A system equipped with a CP-7348 PoraBond Q column) combined with a 5975C quadrupole inert MS (Agilent Technologies; Santa Clara, CA, USA) in splitless mode. NH2OH was derivatized to acetoxime (C3H7NO), and the molecular ion peaks were detected at mass to charge (m/z) = 73 and 74 for 14NH2OH and 15NH2OH, respectively. Twenty-five micromolar 14NH2OH and 15NH2OH solutions were used as standards. To determine the source of the oxygen used in the electrode-dependent NH4+ oxidation to NH2OH, MECs were incubated with 15NH4Cl (4 mM, Cambridge Isotope Laboratories) in the presence of 10% D2O for 144 h. Stable isotopes of NH2OH were determined by GC/MS analysis after derivatization using acetone as described above.

Activity and electron balance calculations. Activities of specific anammox (29N2 production), with nitrite as the preferred electron acceptor, and of electrode-dependent anammox (30N2 production), with the working electrode (0.6 V vs. SHE) as the sole electron acceptor, were calculated based on the changes in gas concentrations in the single-chamber MEC batch incubations. The activity was normalized against the protein content of the biofilm on the electrodes. Protein content was measured as described below (see the 'Analytical methods' section). The moles of electrons recovered as current per mole of NH4+ oxidized were calculated using

n_CE(NH4+) = ∫ I dt / (F · Δn(NH4+)),

where I is the current (A) obtained from the chronoamperometry, dt (s) is the time interval over which data were collected, Δn(NH4+) is the moles of NH4+ consumed during the experiment, and F = 96,485 C mol−1 is Faraday's constant. CE was calculated using

CE (%) = 100 · n_CE(NH4+) / n_CE^Theo(NH4+),

where n_CE^Theo(NH4+) is the theoretical number of moles of electrons (in our case, three moles of electrons) recovered as current per mole of NH4+ oxidized.

Analytical methods. All samples were filtered through 0.2 µm pore-size syringe filters (Pall Corporation) prior to chemical analysis. NH4+ concentration was determined photometrically using the indophenol method 39 (lower detection limit = 5 μM). Absorbance at a wavelength of 600 nm was determined using multilabel plate readers (SpectraMax Plus 384; Molecular Devices, CA, USA). NO2− concentration was determined by the naphthylethylenediamine method 39 (lower detection limit = 5 μM). Samples were mixed with 4.9 mM naphthylethylenediamine solution, and the absorbance was measured at a wavelength of 540 nm. NO3− concentration was measured by HACH kits (HACH, CO, USA; lower detection limit = 0.01 mg L−1 NO3−-N). The user's guide was followed for these kits, and concentrations were measured by spectrophotometer (DR5000, HACH, CO, USA). Concentrations of NH2OH and hydrazine (N2H4) were determined colorimetrically as previously described 40. For NH2OH, liquid samples were mixed with 8-quinolinol solution (0.48% (w/v) trichloroacetic acid, 0.2% (w/v) 8-hydroxyquinoline, and 0.2 M Na2CO3) and heated at 100 °C for 1 min. After cooling down for 15 min, the absorbance was measured at 705 nm 41. N2H4 was derivatized with 2% (w/v) p-dimethylaminobenzaldehyde, and the absorbance at 460 nm was measured 42.
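The colorimetric assays above all reduce to a linear calibration that is inverted for unknown samples; a minimal sketch for the indophenol NH4+ measurement with hypothetical standards:

```python
import numpy as np

# Linear calibration for a colorimetric assay such as the indophenol NH4+
# method: absorbance at 600 nm vs. standard concentrations (values
# hypothetical), then inversion of the fit for unknown samples.
std_conc_uM = np.array([0, 25, 50, 100, 200])
std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.78])

slope, intercept = np.polyfit(std_conc_uM, std_abs, 1)

def absorbance_to_uM(a600):
    return (a600 - intercept) / slope

print(round(absorbance_to_uM(0.31), 1), "uM NH4+")
```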
The concentration of biomass on the working electrodes was determined as protein concentration using the DC Protein Assay Kit (Bio-Rad, Tokyo, Japan) according to the manufacturer's instructions. Bovine serum albumin was used as the protein standard.

Fluorescence in situ hybridization (FISH). The microbial community in the MBRs and the spatial distribution of anammox cells on the surface of the graphite rod electrodes were examined by FISH after 30 days of reactor operation. The graphite rod electrodes were cut in the anaerobic chamber with a sterilized tube cutter (Chemglass Life Sciences, US). The electrode samples were fixed with 4% (v/v) paraformaldehyde (PFA), followed by 10 µm cryosectioning at −30 °C (Leica CM3050 S Cryostat). FISH with rRNA-targeted oligonucleotide probes was performed as described elsewhere 43 using the EUB338 probe mix composed of equimolar EUB338 I, EUB338 II, and EUB338 III 44,45 for the detection of bacteria, and probes AMX820 or SCA1309 for anammox 46,47. Cells were counterstained with 1 μg mL−1 DAPI (4′,6-diamidino-2-phenylindole) solution. Fluorescence micrographs were recorded using a Leica SP7 confocal laser scanning microscope. To determine the relative abundance of anammox bacteria by quantitative FISH, 20 confocal images of FISH probe signals were taken at random locations in each well and analyzed using the digital image analysis software DAIME as described elsewhere 48.

Scanning electron microscopy. The graphite rod electrodes were cut in the anaerobic chamber with a sterilized tube cutter (Chemglass Life Sciences, US). The electrode samples were soaked in 2% glutaraldehyde solution containing phosphate buffer (50 mM, pH 7.0) and stored at 4 °C. Sample processing and scanning electron microscopy (SEM) were performed as described elsewhere 49; samples were sputter-coated with gold-palladium before imaging in a JEOL JSM-6335F SEM operating at 3 kV.

Metagenomics sequencing and analysis. Biomass from the vials of the GO experiment was harvested by centrifugation (4000 × g, 4 °C) at the end of the batch incubations. Biofilm samples from the electrodes were collected after 30 days of reactor operation with the working electrode as the sole electron acceptor. The biomass pellet and the electrode samples were suspended in sodium phosphate buffer in Lysing Matrix E 2 mL tubes (MP Biomedicals, Tokyo, Japan). After 2 min of physical disruption by bead beating (Mini-Beadbeater™, BioSpec Products), the DNA was extracted using the FastDNA Spin Kit for Soil (MP Biomedicals, Tokyo, Japan) according to the manufacturer's instructions. The DNA was quantified using Qubit (Thermo Fisher Scientific, USA) and fragmented to ~550 bp using a Covaris M220 with microTUBE AFA Fiber screw tubes and the settings: duty factor 20%, peak/displayed power 50 W, cycles/burst 200, duration 45 s, and temperature 20 °C. The fragmented DNA was used for metagenome library preparation using the NEBNext Ultra II DNA library preparation kit. The DNA library was paired-end sequenced (2 × 301 bp) on a HiSeq 2500 system (Illumina, USA). Raw reads obtained in the FASTQ format were processed for quality filtering using the Cutadapt package v. 1.10 51 with a minimum Phred score of 20 and a minimum length of 150 bp. The trimmed reads were assembled using SPAdes v. 3.7.1 52. The reads were mapped back to the assembly using minimap2 53 (v. 2.5) to generate coverage files for metagenomic binning. These files were converted to the sequence alignment/map (SAM) format using samtools 54.
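The quality-filtering, assembly and mapping steps named above can be scripted; the sketch below wires up cutadapt, SPAdes and minimap2 with the stated Q20/150 bp parameters, using placeholder file names and commonly used options rather than the authors' exact command lines:

```python
import subprocess

# Quality trimming: Q20 cutoff and 150 bp minimum length, as stated above.
subprocess.run(["cutadapt", "-q", "20", "--minimum-length", "150",
                "-o", "r1.trim.fq.gz", "-p", "r2.trim.fq.gz",
                "R1.fastq.gz", "R2.fastq.gz"], check=True)

# Assembly of the trimmed paired-end reads with SPAdes.
subprocess.run(["spades.py", "-1", "r1.trim.fq.gz", "-2", "r2.trim.fq.gz",
                "-o", "assembly"], check=True)

# Map reads back to the assembly (short-read preset) for coverage-based binning.
with open("mapped.sam", "w") as sam:
    subprocess.run(["minimap2", "-ax", "sr", "assembly/scaffolds.fasta",
                    "r1.trim.fq.gz", "r2.trim.fq.gz"], stdout=sam, check=True)
```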
Open reading frames (ORFs) were predicted in the assembled scaffolds using Prodigal 55. A set of 117 hidden Markov models (HMMs) of essential single-copy genes was searched against the ORFs using HMMER3 (http://hmmer.janelia.org/) with default settings, except that the option --cut_tc was used 56. Identified proteins were taxonomically classified using BLASTP against the RefSeq protein database with a maximum e-value cut-off of 10−5. MEGAN was used to extract class-level taxonomic assignments from the BLAST output 57. The script network.pl (http://madsalbertsen.github.io/mmgenome/) was used to obtain paired-end read connections between scaffolds. 16S rRNA genes were identified using BLAST 58 (v. 2.2.28+), and the 16S rRNA fragments were classified using SINA 59 (v. 1.2.11) with default settings, except that the minimum identity was adjusted to 0.80. Additional supporting data for binning were generated according to the description in the mmgenome package 60 (v. 0.7.1). Genome binning was carried out in R 61 (v. 3.3.4) using the RStudio environment. Individual genome bins were extracted using the multi-metagenome principles 25 implemented in the mmgenome R package 61 (v. 0.7.1). Completeness and contamination of the bins were assessed using coverage plots through the mmgenome R package and by the use of CheckM 62, based on the occurrence of a set of single-copy marker genes 63. Genome bins were refined manually as described in the mmgenome package, and the final bins were annotated using PROKKA 64 (v. 1.12-beta).

Phylogenomics analysis. The extracted bins and reported anammox genomes were used for phylogenetic analysis. Reported anammox genomes were downloaded from the NCBI GenBank. HMM profiles for 139 single-copy core genes 63 were concatenated using the anvi'o platform 65. Phylogenetic trees with estimated branch support values were constructed from these concatenated alignments using MEGA7 66 with the neighbor-joining, maximum-likelihood, and UPGMA methods.

Comparative transcriptomics analysis. Comparative transcriptomics analysis was conducted to compare the metabolic pathways of NH4+ oxidation and electron flow when the working electrode is used as the electron acceptor vs. NO2− as the electron acceptor. Samples for comparative transcriptomics analysis were taken from the mature electrode biofilms of duplicate single-chamber MECs with NO2− as the sole electron acceptor and after switching to set-potential growth (0.6 V vs. SHE, electrode as the electron acceptor). Biofilm samples were collected from the carbon cloth electrodes with sterilized scissors in the anaerobic chamber. Samples were stored in RNAlater™ Stabilization Solution (Invitrogen™) until further processing. Total RNA was extracted from the samples using the PowerBiofilm RNA Isolation Kit (QIAGEN) according to the manufacturer's instructions. The RNA concentration of all samples was measured in duplicate using the Qubit BR RNA assay. The RNA quality and integrity were confirmed for selected samples using a TapeStation with RNA ScreenTape (Agilent Technologies). The samples were depleted of rRNA using the Ribo-Zero Magnetic Kit (Illumina Inc.) according to the manufacturer's instructions. Any potential residual DNA was removed using the DNase MAX kit (MoBio Laboratories Inc.) according to the manufacturer's instructions. After rRNA depletion and DNase treatment, the samples were cleaned and concentrated using the RNeasy MinElute Cleanup Kit (QIAGEN), and successful rRNA removal was confirmed using TapeStation HS RNA ScreenTapes (Agilent Technologies).
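The CheckM-style completeness and contamination estimates mentioned above boil down to counting which single-copy markers are present once, missing, or duplicated in a bin; a toy sketch with hypothetical marker hits:

```python
# `hits` maps each of the 117 marker HMMs to the number of times it was found
# in one genome bin (hypothetical numbers: most single-copy, one duplicated,
# one missing).
hits = {f"marker_{i:03d}": 1 for i in range(110)}
hits.update({"marker_110": 2, "marker_111": 0})

TOTAL_MARKERS = 117
found = sum(1 for n in hits.values() if n >= 1)
duplicated = sum(n - 1 for n in hits.values() if n > 1)

completeness = 100 * found / TOTAL_MARKERS
contamination = 100 * duplicated / TOTAL_MARKERS
print(f"completeness ~{completeness:.1f}%, contamination ~{contamination:.1f}%")
```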
The samples were prepared for sequencing using the TruSeq Stranded Total RNA kit (Illumina Inc.) according to the manufacturer's instructions. Library concentrations were measured using the Qubit HS DNA assay, and library size was estimated using TapeStation D1000 ScreenTapes (Agilent Technologies). The samples were pooled in equimolar concentrations and sequenced on an Illumina HiSeq 2500 using a 1 × 50 bp Rapid Run (Illumina Inc.). Raw sequence reads in FASTQ format were trimmed using USEARCH 67 (v10.0.2132) -fastq_filter with the settings -fastq_minlen 45 and -fastq_truncqual 20. The trimmed transcriptome reads were also depleted of rRNA using BBDuk 68 with the SILVA database as the reference database 69. The reads were mapped to the predicted protein-coding genes generated from Prokka 64 (v1.12) using minimap2 53 (v2.8-r672), both for the total metagenome and for each extracted genome bin. Reads with a sequence identity below 0.98 were discarded from the analysis. The count table was imported into R 61, processed and normalized using the DESeq2 workflow 70, and then visualized using ggplot2. Analyses of overall sample similarity were done using normalized counts (log transformed) through the vegan 71 and DESeq2 70 packages (Supplementary Fig. 9). Differentially expressed genes were evaluated for the presence of N-terminal signal sequences, transmembrane-spanning helices (TMH), and subcellular localization using SignalP 5.0 72, the TMHMM 2.0 software, and PSORTb 3.0.2 73, respectively. Differentially expressed genes annotated as 'hypothetical' were reconsidered for a putative function employing BLAST searches (i.e., BLASTP, CD-search, SmartBLAST), MOTIF search, the COG and PFAM databases, as well as by applying the HHpred homology detection and structure prediction program (MPI Bioinformatics Toolkit).

Statistics and reproducibility. The number of replicates is detailed in the subsections for each specific experiment and was mostly determined by the amount of biomass available for the different cultures. In all experiments, three biological replicates were used, unless mentioned otherwise. No statistical methods were used to predetermine the sample size. The experiments were not randomized, and the investigators were not blinded to allocation during the experiments and outcome assessment. Statistical analyses were carried out in R 61 (v. 3.3.4) using the RStudio environment.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability. Raw sequencing reads of the Illumina HiSeq metagenomics and metatranscriptomics data associated with this project can be found at the NCBI under BioProject PRJNA517785. Annotated GenBank files for the anammox genomes extracted in this study can be found under the accession numbers SHMS00000000 and SHMT00000000. The genome binning and the comparative transcriptomics analysis are entirely reproducible using the R files available at https://github.com/DarioRShaw/Electro-anammox. Also, the complete datasets generated in the differential expression analysis are available in the online version of the paper.
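The DESeq2 workflow referenced above normalizes counts with median-of-ratios size factors before any differential testing; a minimal Python re-implementation of that normalization step on a hypothetical genes × samples matrix:

```python
import numpy as np

# Median-of-ratios size factors (the DESeq2 normalization) for a
# genes x samples count matrix with hypothetical counts.
counts = np.array([
    [100, 120, 300, 330],
    [ 50,  55, 140, 160],
    [ 10,  12,  33,  30],
], dtype=float)

log_counts = np.log(counts)                       # requires nonzero counts
log_geo_means = log_counts.mean(axis=1)           # per-gene geometric mean (log)
ratios = log_counts - log_geo_means[:, None]      # log ratio to the reference
size_factors = np.exp(np.median(ratios, axis=0))  # per-sample size factor
normalized = counts / size_factors

print(np.round(size_factors, 2))
print(np.round(normalized, 1))
```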
2023-02-08T15:44:56.927Z
2020-04-28T00:00:00.000
{ "year": 2020, "sha1": "dd1b16fd84f4ca1d4d050b191d480a538346b7ee", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-16016-y.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "dd1b16fd84f4ca1d4d050b191d480a538346b7ee", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [] }
55903037
pes2o/s2orc
v3-fos-license
Hopping conductivity and insulator-metal transition in films of touching semiconductor nanocrystals

This paper is focused on the variable-range hopping of electrons in semiconductor nanocrystal (NC) films below the critical doping concentration $n_c$ at which the film becomes metallic. The hopping conductivity is described by the Efros-Shklovskii law, which depends on the localization length of electrons. We study how the localization length grows with the doping concentration $n$ in the film of touching NCs. For that we calculate the electron transfer matrix element $t(n)$ between neighboring NCs for two models, in which NCs touch by small facets or by just one point. We study two sources of disorder: variations of NC diameters and random Coulomb potentials originating from random numbers of donors in NCs. We use the ratio of $t(n)$ to the disorder-induced NC level dispersion to find the localization length of electrons due to the multi-step elastic co-tunneling process. We found three different phases at $n<n_c$, depending on the strength of disorder, the material, and the sizes of NCs and their facets: 1) an "insulator", where the localization length of electrons increases monotonically with $n$; 2) an "oscillating insulator", where the localization length (and the conductivity) oscillates with $n$ from the insulator base; and 3) a "blinking metal", where the localization length periodically diverges. The first two phases were seen experimentally, and we discuss how one can see the more exotic third one. In all three the localization length diverges at $n=n_c$. This allows us to find $n_c$.

I. INTRODUCTION

Semiconductor nanocrystals (NCs) have a great potential for optoelectronic applications such as solar cells 1, light-emitting diodes 2 and field-effect transistors 3,4. Their advantages are size-tunable optical and electrical properties 5 and low-cost solution-based processing techniques 6,7. These applications require conducting NC films, and several ways of introducing carriers via doping are being explored 3,8-15. At a given concentration of carriers one tries to improve the mobility by moving NCs closer to each other and reducing their contact resistance. In many studies 8,9,11,15 the low-temperature conductivity of doped films was found to obey the Efros-Shklovskii (ES) variable-range hopping law 16:

σ = σ_0 exp[−(T_ES/T)^{1/2}].   (1)

Here σ_0 is a conductivity prefactor, T is the temperature, and

T_ES = Ce²/k_B ε_f ξ   (2)

in Gaussian units. Here e is the electron charge, ξ is the localization length, ε_f is the effective dielectric constant of the film, k_B is the Boltzmann constant, and C ≈ 9.6 17. Typically, ξ grows with the concentration of electrons n in a NC and with the improvement of contacts between NCs. Therefore, T_ES becomes smaller and the film becomes more conducting 15. In this paper we concentrate on doping of NC films by chemical donors or acceptors 18, which was recently achieved in InAs 12, CdSe 13, HgS 14 and Si 15 NCs. While many experimental studies have been directed towards increasing the conductivity of NC films with increased n, it was not clear when ξ diverges and T_ES vanishes so that the NC film becomes metallic 19-21. In other words, what is the critical concentration n_c of electrons (or donors) in a NC necessary for the insulator-metal transition (IMT)? Recently 15, n_c was estimated for the case favorable for the IMT, where close-to-spherical NCs touch each other by small facets of radius ρ ≪ d without any ligands that impede the conduction by creating a barrier between NCs (see Fig. 1a).
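Eqs. (1) and (2) are easy to evaluate numerically; the sketch below uses the SI-equivalent form T_ES = Ce²/(4πε_0 ε_f k_B ξ) with illustrative parameter values:

```python
import numpy as np

C = 9.6
e = 1.602e-19        # C
k_B = 1.381e-23      # J/K
eps0 = 8.854e-12     # F/m

def t_es(eps_f, xi_nm):
    # Eq. (2) rewritten in SI units.
    return C * e**2 / (4 * np.pi * eps0 * eps_f * k_B * xi_nm * 1e-9)

def sigma(T, sigma0, T_ES):
    return sigma0 * np.exp(-np.sqrt(T_ES / T))  # Eq. (1)

T_ES = t_es(eps_f=3.0, xi_nm=5.0)   # xi comparable to a NC diameter
print(round(T_ES), "K")             # ~1.1e4 K
print(sigma(100.0, 1.0, T_ES))      # strong suppression at T = 100 K
```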
The result is very simple:

n_c ≈ 0.3/ρ³.   (3)

The IMT is illustrated in Fig. 1, where we show how an electron wave packet of the minimum available size for a given n quasiclassically passes between two touching NCs at n > n_c, but has to tunnel at n < n_c and, therefore, becomes more vulnerable to disorder. Contacts between NCs may have different origins. For example, a close-to-spherical NC has small facets due to the discreteness of the crystal lattice. Their radius can be estimated as ρ_a = (da/2)^{1/2}, where a is the lattice constant and d is the NC diameter. For CdSe NCs with a = 0.6 nm and d = 5 nm, ρ_a ∼ 1.2 nm and Eq. (3) gives n_c = 2 × 10^20 cm^{-3}. For the case in which NCs shown in Fig. 1a touch each other away from these facets, a finite tunneling distance b ∼ 0.1 nm in the medium between NCs should be taken into account. This leads to Eq. (3) with ρ = ρ_b = (db/2)^{1/2} ≪ ρ_a, the radius of an effective "b-contact", and the critical concentration n_c is much larger. On the other hand, at very light doping, when the average number of electrons per NC, N = πnd³/6, is less than unity, one should see the nearest-neighbor hopping between NCs with the activation energy equal to the charging energy of a NC 17,22.

FIG. 1. Here a is the lattice constant, d is the NC diameter. The blue cloud depicts the smallest available electron wave packet with the size k_F^{-1} ∼ n^{-1/3}, where k_F is the Fermi wavenumber and n is the doping concentration of electrons in each NC. (a) Electron transport at n > n_c. The smallest electron packet fits in the touching facets and moves through the contact. (b) At n < n_c, the smallest wave packet gets stuck near the contact and the electron tunneling between NCs is depleted so much that it cannot overcome the disorder to delocalize electrons.

Thus, the ES hopping should be observed in a large range of concentrations 1/d³ < n < n_c. To calculate T_ES given by Eq. (2), we need to know how the localization length ξ(n) grows in this range, before reaching the NC diameter d and diverging in a critical vicinity of n_c. The localization length of electrons is determined by the competition of the disorder energy δE and the tunneling matrix element t between neighboring NCs. We study two main sources of disorder: the dispersion of NC diameters, which changes the quantization kinetic energy, and the variation of the number of donors in a NC, which leads to charging of NCs and random Coulomb potentials shifting electron levels. We also calculate t(n, ρ) for the two models of small-ρ contacts mentioned above. We arrive at the conclusion that typically the combination of both sources of disorder is so strong that one needs N ≫ 1 electrons per NC to make t large enough in order to get an appreciable ξ and approach the IMT. In this paper we deal with the generic case for small semiconductor NCs, when electron energy shells of the spherical NCs are weakly split and separated by the quantization gap ∆. We show that when the disorder energy γ becomes larger than ∆ and NCs touch by contact facets of small radius ρ, the localization length is

ξ = d/ln(2/nρ³).   (4)

Both Eq. (4) and Eq. (3) do not depend on the disorder strength. This happens because when γ exceeds ∆ the energy difference between neighboring NCs δE saturates at ∆ due to the periodicity of the quantized spectrum (see Fig. 4). Remarkably, both Eq. (4) and Eq. (3) continue to play an important role when γ becomes smaller than ∆ and well-defined peaks of the density of states appear (see Fig. 2).
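The numbers quoted above are easy to reproduce: with ρ_a = (da/2)^{1/2} for the CdSe parameters, Eq. (3) indeed gives n_c of order 2 × 10^20 cm^{-3}:

```python
import numpy as np

# Facet radius from lattice discreteness and Eq. (3), for CdSe parameters
# quoted above (a = 0.6 nm, d = 5 nm).
a_nm, d_nm = 0.6, 5.0
rho_a_nm = np.sqrt(d_nm * a_nm / 2)   # ~1.22 nm
rho_a_cm = rho_a_nm * 1e-7

n_c = 0.3 / rho_a_cm**3               # Eq. (3)
print(round(rho_a_nm, 2), "nm")
print(f"{n_c:.1e} cm^-3")             # ~2e20 cm^-3, as quoted in the text
```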
With decreasing width of these peaks, the localization length starts to oscillate at small N while keeping its minima close to the baseline of Eq. (4) (see Fig. 3; we call this phase the oscillating insulator (OI)). By further decreasing the disorder, the oscillations can take over the whole range of concentrations n < n_c and eventually ξ diverges at a series of maximum points adjacent to n_c (see Fig. 5; we call this phase the blinking metal (BM)). The new phases OI and BM are shown, together with the large-disorder phase called here the usual insulator (I), where the localization length obeys Eq. (4), and the metallic phase (M), in two phase diagrams in the plane (N, ∆/E_c) (see Figs. 7 and 8). Note that the border of M is always given by Eq. (3). We used our phase diagrams to address the situation in several widely used semiconductor NCs, i.e., CdSe, InAs and ZnO, with d = 5 nm, ρ = ρ_a = 1.2 nm and 7% dispersion of NC diameters. We show that in CdSe and InAs the Coulomb interaction can be ignored (marginally in CdSe) and with growing N one can see only two phases, OI and M. A substantially broad BM phase, which, of course, would improve the NC film conductivity at smaller N, requires an even smaller dispersion of diameters, say 3%. However, for ρ ≤ ρ_a the highly desirable metallic state at N ∼ 1, when only the 1S shell is filled, can be achieved only with an unrealistic less-than-1% dispersion of NC diameters. Of course, one can always increase ρ to achieve BM and extend it all the way to N = 1 (on the way to the bulk semiconductor). In ZnO, Coulomb disorder effects play an important role, leading to the expansion of the I phase between the OI and M ones. The paper is organized as follows. In Sec. II, we dwell upon the main energies of a single NC, i.e., the quantization energy gap ∆ separating consecutive degenerate shells of the electron spectrum and the charging energy E_c of a NC. In Sec. III we start from very large ratios ∆/E_c, where the dispersion of NC diameters dominates over the Coulomb disorder. We use values of t(n, ρ) calculated later in Sec. VI to find ξ(n) and n_c. For the case of relatively large diameter dispersion and very small ρ at large n, the localization length ξ(n) follows Eq. (4) and n_c is given by Eq. (3). We also study the case of a very weak diameter dispersion and arrive at the BM. In Sec. IV we study the charging of NCs and the resulting Coulomb disorder and get ξ(n) for any ∆/E_c. We show that at ∆/E_c < 5 the Coulomb disorder eliminates the BM phase and extends the range of validity of Eq. (4). In Sec. V we discuss examples of widely used semiconductor CdSe, InAs and ZnO NCs. In Sec. VI we calculate the tunneling matrix element t(n, ρ) for NCs touching by contact facets (see Fig. 1b). In Sec. VII, we study the case when NCs touch each other away from prominent facets or are separated by short ligands and derive the corresponding expressions for ξ. In Sec. VIII we deal with large NCs, where the random electric field of donors splits and mixes degenerate shell levels, so that semiconductor NCs acquire random spectra similar to those of metallic granules. We conclude in Sec. IX.

II. NC ELECTRONIC SPECTRUM AND CHARGING ENERGY

We assume that close-to-spherical NCs have diameter d and touch each other by facets with radius ρ. At small enough ρ, electrons are localized inside NCs. We suppose that the electron wave function is close to zero at the NC surface, due to the large confining potential barrier created by the insulator matrix surrounding the NC.
Under these conditions, electrons occupy states with different radial and angular momentum quantum numbers, i.e., (n, l)-shells, each being degenerate with respect to the azimuthal quantum number m = −l, . . . , l, where the polar axis (z axis) is defined along the direction of electron tunneling connecting the centers of two neighboring NCs (we say more about this in Sec. VI). As we explained in the Introduction, we are interested in NCs with the average electron number N ≫ 1. Therefore several (n, l)-shells are occupied. The quantum energy gap between two consecutive (n, l)-shells is typically

∆ ≈ 2π²ℏ²/m*d²,   (5)

where m* is the effective electron mass inside NCs. Also, when the quantum numbers are large, Bohr's correspondence principle allows us to consider quasiclassically the average density of states of electrons and introduce the Fermi wave number k_F:

k_F = (3π²n)^{1/3}.   (6)

Here n = 6N/πd³ is the density of electrons in a NC. Below, k_F serves as a measure of the concentration n. The kinetic energy of electrons is only a part of the total energy of the NC. One should add to it the total Coulomb interaction energy of all electrons and donors. In general, calculating the total Coulomb energy (self-energy) of the system is a difficult problem because of the random positions of donors. For our case, however, a significant simplification is available because the semiconductor dielectric constant ε is typically much larger than the dielectric constant ε_m of the medium in which the NC is embedded. This allows us to ignore in the first approximation the energy dependence on the positions of donors and electrons and instead concentrate only on the dependence on the total charge Qe of the NC. The energy of a NC with charge Qe surrounded by neutral NCs (the self-energy) is equal to Q²E_c, where the charging energy is

E_c = e²/ε_f d.   (7)

For non-touching NCs, where the volume fraction of semiconductor NCs is f ≤ 0.52, one can use the Maxwell-Garnett formula 23

(ε_f − ε_m)/(ε_f + 2ε_m) = f(ε − ε_m)/(ε + 2ε_m)   (8)

to calculate the effective dielectric constant ε_f. This gives ε_f ≈ 3 at f = 0.52, corresponding to the very moment of NC touching (we take ε_m = 1, ε = 10 as in the case of CdSe NCs). For these ε_m and ε, the effective dielectric constant ε_f was calculated numerically for the whole range of f 24, including f > 0.52 obtained for faceted NCs touching by facets. One can check 25 that Eq. (8) works well even for f as large as 0.7. This means that for NCs touching by small facets or separated by short ligands, ε/ε_f ≈ 3 is a good estimate for CdSe and other semiconductors with ε ≲ 15 that we are dealing with in this paper. For semiconductors with much larger ε one may use the results of Ref. 25. The ratio ∆/E_c is an important parameter of our theory. In the n-type semiconductors we address here, for NCs with d = 5 nm, ∆/E_c = 2, 3, 5, 27 for Si, ZnO, CdSe, and InAs, respectively.

III. LOCALIZATION LENGTH AND IMT DETERMINED BY DISPERSION OF NC DIAMETERS

There are two important sources of disorder for electrons in a NC film. The first one is the variation of NC diameters. Since the energy gap ∆ ∝ 1/d², each energy level gets a shift α∆, where α = 2δd/d and δd is the variation of the diameter d (experimentally, δd/d is as large as 5−15% 15,26, so α = 0.1−0.3). The second source of disorder is the fluctuation of the donor number in a NC, which results in the charging of NCs and subsequent random potentials. We study this phenomenon in Sec. IV. In this section we deal with the case of large enough ∆/E_c, when charging can be ignored.
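A quick evaluation of Eqs. (5) and (7) for a d = 5 nm CdSe NC reproduces the quoted ∆/E_c ≈ 5; the material parameters (m* = 0.13 m_e, ε_f = 3) are typical literature values assumed for illustration:

```python
import numpy as np

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # kg
e = 1.602e-19      # C
eps0 = 8.854e-12   # F/m

d = 5e-9           # NC diameter, m
m_star = 0.13 * m_e
eps_f = 3.0

delta = 2 * np.pi**2 * hbar**2 / (m_star * d**2)   # Eq. (5)
E_c = e**2 / (4 * np.pi * eps0 * eps_f * d)        # Eq. (7) in SI form
print(round(delta / e, 3), "eV")     # ~0.46 eV
print(round(E_c / e, 3), "eV")       # ~0.10 eV -> Delta/E_c ~ 5, as in the text
```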
The dispersion of NC diameters creates the energy shift of the electron levels close to the Fermi level

γ_1 = αN^{2/3}∆,   (9)

where N^{2/3} gives the number of filled shells. When N is small, γ_1 ≪ ∆. The energy levels of NCs are then quite well aligned and the density of states has periodically alternating maxima and minima (see Fig. 2). In this case, the transport mechanism depends on the position of the Fermi level or, in other words, on the average electron number N 11,17,27. When the Fermi level is in the middle of a degenerate shell, i.e., the local density of states is very large, Coulomb correlations "dig" the Coulomb gap in this density of states 16, which in turn leads to the low-temperature ES conductivity law Eq. (1). When the Fermi level is close to the middle of the gap ∆, where a small density of states may be present due to overlapping tails of neighboring shells, the Coulomb effects are not important, since the density of states is already very small. Such a constant density of states leads to the Mott variable-range hopping 11,27. However, for both ES and Mott variable-range hopping, one should use the concept of the localization length, which determines the exponential decay of the electron wave function with the distance x from the NC in which the localized electron resides. The localization length ξ is determined by the co-tunneling between two distant NCs with energies close to the Fermi level 27-30. In the co-tunneling process, an electron tunnels between neighboring NCs of the chain of M intermediate NCs connecting the initial and final NCs. If after the tunneling all intermediate NCs remain in the ground state, the co-tunneling process is called elastic. Alternatively, an intermediate NC can acquire an electron-hole excitation. Such a process is called inelastic. At low temperatures the elastic process dominates. We show in Sec. VI that in a chain of NCs extended along the z direction, inside each intermediate NC only the m = 0 state of the highly degenerate (n, l)-shell contributes to the tunneling process, with a dominant matrix element t. Thus, along the chain of co-tunneling, there is only one possible series of intermediate energy states closest to the energy of the tunneling electron, and no summation over different states of a given shell is needed for calculating the total amplitude. We can say that we deal with non-degenerate levels (shown in red in Figs. 2 and 4), so that the localization length is

ξ = d/ln(δE/t).   (10)

Here δE is the energy difference between the tunneling electron and the state in the intermediate NC. Eq. (10) is valid when ln(δE/t) > 1, or ξ < d, and the film is far from the critical vicinity of the IMT. So once the matrix element t is known, we can get the localization length. For different types of contacts between NCs, the value of t is different. The largest ξ is obtained in the case when NCs touch by facets of finite radius ρ. The corresponding tunneling matrix element is derived in Sec. VI as

t ≈ ℏ²k_F³ρ³/3m*d².   (11)

The energy difference δE oscillates with the density of states, which is followed by the oscillation of the localization length (see Fig. 3). At small N, when the Fermi level is inside a degenerate shell where the density of states is large, one arrives at the ES law (1) and gets δE = γ_1 ≪ ∆. The localization length reaches a maximum at such N:

ξ = d/ln(αd²/n^{1/3}ρ³).   (12)

When the Fermi level resides in the middle of the gap ∆ between shells, where the Mott variable-range hopping takes over, the energy difference is δE ≈ ∆. Therefore, the localization length reaches its periodic minima, which are given by Eq. (4).
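The envelope of the oscillating ξ(N) can be sketched numerically from Eqs. (12) and (4); since the O(1) prefactors in Eqs. (4), (11) and (12) are order-of-magnitude estimates, the output should be read as qualitative:

```python
import numpy as np

# Envelope of the oscillating localization length: maxima from Eq. (12),
# minima from Eq. (4). Parameters as in Sec. V: d = 5 nm, rho = 1.2 nm,
# alpha = 0.15; all lengths in nm, n in nm^-3.
d, rho, alpha = 5.0, 1.2, 0.15

def xi_max(n):   # Eq. (12); valid while the log argument exceeds e
    return d / np.log(alpha * d**2 / (n**(1/3) * rho**3))

def xi_min(n):   # Eq. (4)
    return d / np.log(2.0 / (n * rho**3))

for N in [2, 5, 9]:                      # N below N_c ~ 11 for these parameters
    n = 6 * N / (np.pi * d**3)           # electrons per nm^3
    print(N, round(xi_max(n), 2), round(xi_min(n), 2), "nm")
```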
The local period is ∼ N^{1/3} and slowly changes with N. Eqs. (12) and (4) together give the envelope of the oscillating localization length, as shown in Fig. 3 by the dotted line and the dashed line, respectively. We denote this phase as the "oscillating insulator" (OI). Periodic oscillations of the hopping conductivity with N were observed in CdSe 11. According to Eq. (9), γ_1 grows with N and reaches ∆ at N = α^{−3/2}. At larger N the energy difference δE saturates at ∆ because of the spectrum periodicity. The corresponding system of electron energy levels with a smooth density of states is shown in Fig. 4. Thus, the oscillations of ξ(N) stop at N = α^{−3/2}. We arrive at the usual insulator (I), where ξ obeys Eq. (4), which follows from δE = ∆ and Eqs. (5), (10) and (11). This gives Eq. (3) for the critical concentration n_c. This sequence of changes of ξ(N) is shown schematically in Fig. 3. Apparently it requires that

αd²/ρ² ≳ 1.   (13)

In this section, we focus on the case of relatively small ρ, when the inequality (13) holds. In this case, every maximum of ξ is finite, because the argument of the logarithmic function in Eq. (12) stays larger than unity. The case opposite to the inequality (13), corresponding to a smaller α and a larger ρ, is studied in the next section. One should note that our result for ξ is obtained away from the critical vicinity of n_c, so our estimate of n_c obtained from the condition ξ = d needs a correction. Indeed, we estimated the probability of the electron hopping between two distant NCs via the elastic co-tunneling along a single typical chain of M NCs. Near the IMT one should add the probability amplitudes of many such chains. Then the sum of all amplitudes gives a total probability ∝ (tK/∆)^M. Here K is the connective constant of the NC lattice. According to Anderson 31, the IMT happens when tK/∆ = 1. Using Eqs. (5) and (11), for the simple cubic lattice (where, according to Ref. 32, K = 4.7) we arrive at the estimate n_c ≈ 0.5ρ^{−3}, while for the face-centered cubic lattice (where K = 10, as given in Ref. 33) we get n_c ≈ 0.2ρ^{−3}. This result, found from the insulating side of the IMT, is reasonably close to Eq. (3).

FIG. 4. Each NC has a ladder of (2l + 1)-degenerate (n, l)-shells with the gap ∆ between them. Due to variations of diameters, the whole ladder of energy levels is shifted up and down by an energy larger than ∆. Here we show only two shells closest to the Fermi level. Only one level (red) of each shell contributes to the tunneling with the matrix element t.

So far, we have studied the case where the inequality (13) holds. Now we turn to the opposite situation, αd²/ρ² ≪ 1, of relatively large ρ and small α. In this case Eq. (12) indicates that the localization length periodically diverges at α³d⁹/ρ⁹ < N < d³/ρ³. This means that electrons whose energy levels are in the middle of a shell are delocalized, while those located in the tails of the density of states are still localized and have a localization length described by Eq. (4). We call this phase the "blinking metal" (BM), since its metallicity occurs only at certain positions of the Fermi level (a good example of such a metal is the quantum Hall effect). However, at N = d³/ρ³ the system enters the usual metal (M) phase, where electrons are delocalized regardless of the Fermi level position. The corresponding behavior of the localization length at N ≤ d³/ρ³ is shown in Fig. 5. In this case, γ_1 ≪ ∆ at the IMT point, since there γ_1 = α(d³/ρ³)^{2/3}∆ = (αd²/ρ²)∆ ≪ ∆.
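The Anderson criterion tK/∆ = 1 combined with Eqs. (5) and (11) (so that t/∆ ≈ nρ³/2, an order-of-magnitude estimate) reproduces the two lattice estimates quoted above:

```python
# Anderson criterion t*K/Delta = 1 with t/Delta ~ n*rho^3 / 2, giving
# n_c ~ 2/(K*rho^3); the prefactor is indicative only.
for lattice, K in [("simple cubic", 4.7), ("fcc", 10.0)]:
    nc_rho3 = 2.0 / K
    print(f"{lattice}: n_c ~ {nc_rho3:.2f} / rho^3")
# -> ~0.43/rho^3 (sc) and 0.20/rho^3 (fcc), bracketing Eq. (3)'s 0.3/rho^3
```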
IV. ROLE OF NC CHARGING DUE TO DONOR NUMBER FLUCTUATIONS

Let us now discuss another type of disorder present in the film, i.e., the fluctuations of the donor number δN around the average number N from NC to NC. At large N, δN is Gaussian-distributed, i.e., δN ∼ √N. If each NC were neutral, δN would lead to substantial fluctuations δE_F = E_F/√N ∼ N^{1/6}∆ of the Fermi energy E_F from one NC to another. To establish a unique chemical potential of the electrons (the Fermi level), electrons move from NCs with a larger-than-average donor number to ones with a smaller-than-average number, and the NCs get charged, creating a Coulomb potential in space. Below we argue that the typical number of charges Q in a NC depends on the ratio ∆/E_c, as shown in Fig. 6. When E_c is very small, the final chemical potential is established when the NCs have almost the same number of electrons. Accordingly, most NCs acquire a net charge Qe, where Q ∼ √N. However, at larger E_c, when ∆/E_c ≪ N^{1/3}, the price of charging gets so large that the number of transferred electrons, Q ∼ N^{1/6}(∆/E_c), is much smaller than √N (see Fig. 6). One arrives at this result by equating the initial fluctuation of the Fermi energy δE_F to the Coulomb potential QE_c acquired by a charged NC. At ∆/E_c = N^{−1/6}, charging becomes so costly that the charge number is Q = 1. Beyond this point, all NCs are neutral (see Fig. 6).

One can understand the importance of the parameter ∆/N^{1/3}E_c by calculating the electronic screening radius of the film. Since the screening radius can be estimated as r₀ = [ε_f/4πe²g(E)]^{1/2}, where g(E) ≃ N^{1/3}/∆d³ is the average density of states, one gets

r₀ ≃ d(∆/4πE_c N^{1/3})^{1/2}.

We see that, in agreement with Fig. 6, when ∆/N^{1/3}E_c ≫ 1 and r₀ ≫ d, electrons do not screen the donor charges, while in the opposite case ∆/N^{1/3}E_c ≪ 1 the electron screening becomes important.

Due to the charging of NCs, each NC finds itself in an environment of charged neighbors and gets a random potential energy shift up or down. The energy shift created by a single NC at a distance r, where d ≪ r ≪ r₀, is QE_c d/r, and the typical shift created collectively by all NCs in the sphere of radius r₀ is

γ₂ ∼ E_c^{3/4}∆^{1/4}N^{5/12}.   (14)

Note that γ₂ does not depend on d. This is not surprising, because one can arrive at the same result for the potential energy fluctuations by thinking of our film as a bulk heavily doped semiconductor with a concentration n ≃ N/d³ of randomly positioned donors screened by the degenerate electron gas 34.

Let us find what happens when the charging effect outweighs the diameter variation. We start from the case αd²/ρ² ≫ 1 and use it for Fig. 7, which shows, in the (N, ∆/E_c) plane, the phases with different behaviors of the localization length. The upper part of Fig. 7 summarizes the results obtained for diameter variations in Sec. III. We see how, with growing N, the film goes through an oscillating insulator (OI), an insulator (I) and a metal (M). Coulomb effects become important when γ₂ > γ₁, or, according to Eqs. (9) and (14), when ∆/E_c < 1/N^{1/3}α^{4/3}. At the upper OI-I border, N = α^{−3/2}, where γ₁ = ∆, this happens at ∆/E_c = α^{−5/6}. Let us now explore what happens at ∆/E_c < α^{−5/6}, where the energy difference is δE = γ₂. When N is small, γ₂ is small too, so that the density of states has periodic peaks and the localization length oscillates. The system is again an oscillating insulator (OI) (see the narrower part of the blue domain in Fig. 7). At larger N, when ∆/E_c < N^{5/9}, the energy shift γ₂ exceeds ∆, so that away from the left blue domain shown in Fig. 7 we arrive at the spatial distribution of levels shown in Fig. 4.
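A minimal numerical sketch of the charging estimates above (the behavior plotted in Fig. 6), with all prefactors set to one; the crossover values are therefore order-of-magnitude only.

```python
import numpy as np

# Typical NC charge Q vs Delta/E_c at fixed N, following the estimates above:
# Q ~ sqrt(N) at very small E_c, Q ~ N**(1/6)*(Delta/E_c) once
# Delta/E_c << N**(1/3), and Q = 0 (neutral NCs) below Delta/E_c = N**(-1/6).
def typical_charge(N, ratio):                 # ratio = Delta/E_c
    if ratio < N ** (-1 / 6):                 # charging too costly: neutral NCs
        return 0.0
    return min(np.sqrt(N), N ** (1 / 6) * ratio)

N = 100
for ratio in [0.1, 0.5, 2.0, 10.0]:
    print(f"Delta/E_c = {ratio:5.1f}: Q ~ {typical_charge(N, ratio):.1f}")
```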
δE then saturates at ∆, and one again obtains the result (4) for the localization length ξ. The system becomes a usual insulator (I). Thus, in the case of relatively small ρ and large α, when αd²/ρ² ≫ 1, the localization length first stops oscillating and then diverges (the system enters first the domain I and then the domain M, as shown in Fig. 7). In the opposite case of relatively large ρ and very small α, where αd²/ρ² ≪ 1, we analyze the role of Coulomb effects in Fig. 8. The upper part of this phase diagram, ∆/E_c > α^{−7/3}(d/ρ)^{−3}, is again dominated by diameter variations. As shown in Sec. III, in this case with growing N the film goes through an oscillating insulator (OI), a blinking metal (BM) and a metal (M). When we include Coulomb effects, the vertical OI-BM border, marked as line 2) in Fig. 8, […].

V. EXPERIMENTAL IMPLICATIONS FOR CdSe, InAs AND ZnO NC FILMS

In the previous sections, we studied theoretically the possible situations for NC films, as shown by the phase diagrams in Figs. 7 and 8. Now we focus on several commonly used semiconductor NCs and try to place them on these diagrams. We choose the same geometrical parameters α = 0.15, d = 5 nm, ρ = ρ_a = 1.2 nm for all of them. Then we get α^{−3/2} < d³/ρ³ and use the phase diagram of Fig. 7. The upper part of the diagram, where the NC diameter variation is the major source of disorder, is separated from the lower one, where Coulomb disorder dominates, by ∆/E_c = α^{−5/6} ≈ 5. For CdSe NCs, since ∆/E_c = 5, the Coulomb effects are marginal, so that we can consider the NC diameter variation only. When N increases, the film moves from OI to M, as depicted by the upper part of the phase diagram in Fig. 7, with the intermediate region I being narrow and neglected, since α^{−3/2} and d³/ρ³ are quite close. For InAs, since ∆/E_c = 27 ≫ 5, the Coulomb effects are completely negligible and the system again experiences the OI-I-M phase changes. For ZnO, however, the ratio ∆/E_c = 3 < 5, and the random Coulomb potential is the leading disorder in the film. One should then use the lower part of Fig. 7 for the phase changes with increasing N. In this case we have the same sequence of phases, OI-I-M, but now with the phase I appreciably expanded by the random Coulomb potential. Si NC films are similar to ZnO films, since their ∆/E_c = 2 is also very small.

One should note that there is no BM phase for the chosen parameters. To get this phase, one has to tune α down by making the NCs more monodisperse. For d = 5 nm and ρ = ρ_a = 1.2 nm, one needs α < 0.06 to open the BM phase. This is probably the state-of-the-art monodispersity. To go even further, one may wonder whether the BM phase can be expanded all the way to N = 1. The inequality αd³/ρ³ ≤ 1 guarantees that line 2) of Fig. 8 reaches the N = 1 line, and simultaneously the condition ∆/E_c ≥ (d/ρ)⁴ is required for the film to be above the point where line 3) crosses N = 1. If both inequalities are satisfied, one can expect the desirable 20,35,36 band-like transport of electrons when they populate only the 1S level. However, for NCs with d = 5 nm and ρ = ρ_a ≈ 1.2 nm, the necessary α = ρ³/d³ ≈ 0.01 is unrealistically small, while the necessary ∆/E_c ≥ (d/ρ)⁴ ∼ 200 is too large. Even increasing ρ to 2ρ_a brings us only to the criteria α ≤ 0.1 and ∆/E_c ≥ 16. Of course, our estimates are good only for ρ ≪ d/2, so these numbers should not be taken too seriously. For ZnO (or Si), since ∆/E_c < (d/ρ)^{5/3} even at ρ = 2 nm, the system can never reach a BM phase, due to the large Coulomb disorder, as shown in Fig. 8.
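The parameter estimates of this section are easy to check directly. The snippet below reproduces the quoted numbers; the ∆/E_c ratios are those stated in the text, and CdSe, which the text calls marginal, falls right at the border.

```python
alpha, d, rho = 0.15, 5.0, 1.2
border = alpha ** (-5 / 6)        # Delta/E_c separating the two kinds of disorder
print(f"alpha^(-3/2) = {alpha**-1.5:.1f}, (d/rho)^3 = {(d/rho)**3:.1f}, "
      f"border Delta/E_c = {border:.1f}")
for name, ratio in {"CdSe": 5, "InAs": 27, "ZnO": 3, "Si": 2}.items():
    regime = "diameter variations" if ratio >= border else "Coulomb disorder"
    print(f"{name}: Delta/E_c = {ratio} -> dominated by {regime}")
```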
There is an important case where these additional Coulomb fluctuations may be ignored. We are talking about NC films gated by an ionic liquid or an electrolyte 10,11. Anions which enter the spaces between NCs and attract electrons play, in this case, the role of the chemical donors we studied above. However, contrary to immobile dopants inside a NC, anions remain mobile during the adjustment of the gate voltage and tend to screen the electron charges 17. Thus, in this case the disorder effects due to fluctuations of NC diameters discussed in Sec. III should dominate. ZnO (or Si) NC films then become similar to CdSe or InAs ones. At αd²/ρ² ≫ 1 the OI domain expands while the I domain shrinks, and at αd²/ρ² ≪ 1 the OI and I regions are consumed by the BM phase.

VI. TUNNELING MATRIX ELEMENT FOR NANOCRYSTALS TOUCHING BY FACETS

Beyond the surface of a single NC, in the surrounding medium, the wave function of an electron at the Fermi level decays with the distance s from the surface as ∝ e^{−s/b}, where b = ℏ/√(2mU₀). Here m is the electron mass in the medium and U₀ is the work function of the NCs. For U₀ ≈ 4 eV and m = m_e, where m_e is the electron mass in vacuum, one gets b ≈ 1 Å, which is smaller than the lattice constant. So, approximately, the electron wave function vanishes on the surface of the NCs. When two NCs touch by contact facets, the electron wave function of the left NC is strongly modified inside the dashed sphere of radius ρ containing the facets. Namely, due to the right NC, the wave function acquires a tail leaking into the right NC (see Fig. 9a). The wave function inside the right NC is deformed in the same way. The overall wave function is split into the symmetric and asymmetric combinations

Ψ_{s,a} = [ψ(r − r_L) ± ψ(r − r_R)]/√2   (15)

of the modified wave function ψ inside each NC (see Fig. 9b). The origin is set at the center of the contact, and the polar axis points towards the center of the right NC. The coordinates of the centers of the left and right NCs are r_L and r_R, respectively; ψ(r − r_L) refers to the wave function in the left NC and ψ(r − r_R) to that of the right one. Below we use the left wave function for the discussion and simply denote it as ψ. The tunneling matrix element t between two NCs can be estimated by calculating the energy splitting between the symmetric and asymmetric wave functions Ψ_{s,a} of Eq. (15). As in the problem of the electron terms of the molecular ion H₂⁺ in §81 of Ref. 37, the energy splitting can be calculated as a surface integral of ψ ∂ψ/∂z over the contact boundary plane z = 0 (Eq. (16); see the vertical dashed line in Fig. 9b). In the case we are discussing now, the contact is made of the touching facets. In this contact plane, ψ vanishes at (x, y) outside the facets, in the surrounding medium.

Let us deal with E_F belonging to a degenerate (n, l)-shell. Then the unperturbed wave function of the left NC is ψ₀(r − r_L) ∝ j_l(k_n r′)Y_l^m(θ′, φ′), where j_l is the spherical Bessel function, the Y_l^m are spherical harmonics, k_n d/2 is the nth zero of the Bessel function with k_n ≈ 2πn/d ∼ k_F, and (r′, θ′, φ′) are the coordinates of r′ = r − r_L in the spherical coordinate system. Y_l^m(θ′, φ′) → 0 at θ′ → 0 for all m ≠ 0, while for m = 0, Y_l^0(0, φ′) = √[(2l + 1)/4π] > 0. Thus, among the 2l + 1 degenerate levels of the (n, l)-shell, only the one state (m = 0) oriented along the z axis contributes to the tunneling between neighboring NCs (marked red in Fig. 4). So we just need to calculate the tunneling matrix element t of the m = 0 state.
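The shell structure invoked here is easy to generate explicitly. The sketch below computes the (n, l) ladder of a hard-wall spherical NC from the zeros x_{nl} of the spherical Bessel functions (E ∝ x_{nl}², in units of ℏ²/2m*(d/2)²), together with the 2(2l + 1)-fold degeneracies; it is a minimal illustration under the hard-wall assumption, not the paper's numerics.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

# (n, l)-shells of a hard-wall sphere: the radial wave function j_l(2*x*r/d)
# must vanish at r = d/2, so the levels are E = x_{nl}^2 in units of
# hbar^2 / (2 m* (d/2)^2), where x_{nl} is the nth zero of j_l.
def jl_zeros(l, count, x_max=40.0):
    xs = np.linspace(0.1, x_max, 4000)
    vals = spherical_jn(l, xs)
    return [brentq(lambda x: spherical_jn(l, x), a, b)
            for a, b, va, vb in zip(xs, xs[1:], vals, vals[1:]) if va * vb < 0][:count]

levels = [(x * x, n + 1, l, 2 * (2 * l + 1))
          for l in range(8) for n, x in enumerate(jl_zeros(l, 4))]
for E, n, l, deg in sorted(levels)[:10]:
    print(f"E = {E:7.2f}  (n={n}, l={l})  degeneracy {deg}")
```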
When the number N of electrons inside each NC is large, for the (n, l)-shell at the Fermi level we have n ∼ l ∼ N^{1/3} ∼ k_F d, since the radial and angular kinetic energies should be of the same order. So for the m = 0 state, the wave function is highly concentrated near the z axis, spreading mainly within the polar angle 1/√(k_F d). More accurately, since each (n, l)-shell has a (2l + 1)-fold degeneracy, we get

N = 2 Σ_{n,l} (2l + 1) ≃ k_F³d³/18π,

where the factor 2 comes from the spin degeneracy. The radial distribution is described by j_l(k_n r′) ≈ sin[k_n(r′ − d/2)]/k_n r′ at large r′. Therefore, we get approximately the normalized unperturbed wave function at large distances from the center of the left NC. Near the facet, the original unperturbed wave function ψ₀ can thus locally be regarded as an incident plane wave superposed with its completely reflected wave from the surface. As a result, the problem of an electron tunneling through a facet is analogous to that of a plane wave with wavenumber k_F diffracting on a circular aperture of radius ρ in a screen in the z = 0 plane. In the regime k_F ρ ≪ 1, Bethe 38 solved this diffraction problem for microwaves, while Levine and Schwinger 39 and Bouwkamp 40 solved it for a scalar plane wave. Here we use the simple solution in the first-order approximation in k_F ρ ≪ 1 given by Rayleigh 41.

One can write the Schrödinger equation for the function ψ as

(∇² + k_F²)ψ = 0.   (19)

The boundary conditions on the z = 0 plane are that ψ = 0 on the screen and that dψ/dz is continuous at the aperture. We write the solution ψ as the sum of ψ₀ and δψ, where δψ is the correction due to the aperture opening and the unperturbed wave function ψ₀ of the left NC is zero on the right side of the boundary plane (z > 0). We denote by δψ_L and δψ_R the left (z < 0) and right (z > 0) parts of the correction function δψ, respectively. So in the z = 0 boundary plane δψ_L = δψ_R, and outside the aperture δψ_L = δψ_R = 0. The continuity of the derivative dψ/dz leads to a jump of d(δψ)/dz, i.e., dψ₀/dz + d(δψ_L)/dz = d(δψ_R)/dz. The symmetry between δψ_L and δψ_R gives d(δψ_L)/dz = −d(δψ_R)/dz in the aperture (the proof can be found in Refs. 38 and 41; for a possible interpretation of this result, see footnote 42), and therefore d(δψ_R)/dz = (dψ₀/dz)/2 ≈ √(2l/π) k_n/d^{3/2}. Now one can rewrite the integral for t in terms of the correction to the wave function on the right side (Eq. (20)), where δψ_R satisfies the Schrödinger equation (19). At the aperture, ∇²(δψ_R) ∼ δψ_R/ρ² ≫ k_F²δψ_R, because k_F ρ ≪ 1. In the first approximation we can neglect the k_F² term and thus deal with the Laplace equation with the boundary conditions δψ_R = 0 on the screen and d(δψ_R)/dz ≈ √(2l/π) k_n/d^{3/2} at the aperture.

Mathematically, an identical problem was solved exactly in hydrodynamics (see §108 of Ref. 43). Indeed, the Laplace equation ∇²ϕ = 0 can be used to describe the motion of a rigid disk of radius ρ moving with velocity u along its axis (defined as the z axis, with the origin at the disk center) through an unlimited incompressible liquid, if ϕ denotes the velocity potential. The boundary conditions for ϕ are that ∂ϕ/∂z = u on the disk and ϕ = 0 at z = 0 outside the disk. The kinetic energy of the liquid in the z > 0 half-space is

K = (g/2)∫_{z>0} (∇ϕ)² dV,

where g is the density of the liquid. Using Green's theorem and the Laplace equation for the right half-space (z > 0), we get

K = (g/2)∫ ϕ (∂ϕ/∂z) dx dy,

where the integral is taken over the whole z = 0 plane. The potential ϕ is zero outside the disk. Therefore, the integration is over the disk only, as in Eq. (20).
Knowing the exact solution for ϕ, one can arrive at K = (2/3)gρ³u² (see §108 of Ref. 43). Thus

∫ ϕ (∂ϕ/∂z) dx dy = (4/3)ρ³u².

In our diffraction problem, δψ_R plays the role of ϕ, and d(δψ_R)/dz ≈ √(2l/π) k_n/d^{3/2} plays the role of u. Therefore, using Eq. (6), we get the tunneling matrix element t of Eq. (20); up to a numerical coefficient,

t ∼ ℏ²k_F³ρ³/m*d².   (25)

At k_F d ∼ 1, one gets the tunneling matrix element for the 1s band,

t ∼ ℏ²ρ³/m*d⁵.   (26)

In Ref. 44, a solution-based oriented-attachment method was used to prepare fused dimers of two semiconductor NCs. These dimers can be seen as two NCs touching by their facets. Eq. (26) for t can then be used to calculate the splitting of the first exciton absorption line in the dimer spectrum. One should note that Eq. (26) is obtained here in the limit of an infinitesimal tunneling distance b (which is further explained in Sec. VII). In the same limit, the method used in Ref. 44 leads to a smaller t ∼ ℏ²ρ⁴/m*d⁶. The reason for this difference is that on the facet plane our wave function has a larger magnitude than the one conjectured in Ref. 44.

One can interpret the result for the tunneling matrix element, Eq. (25), as follows. Originally the wave function ψ₀ is zero on the boundary plane, and its derivative along the z axis is ∼ k_F^{3/2}/d on the contact facet. Due to the existence of the facet, the electron wave function is modified into ψ, which leaks into the right NC and is nonzero on the facet, while the derivative is hardly changed by the small perturbation. Because the wave function changes substantially over a distance ρ, we can say that ψ ≈ (dψ/dz)ρ inside the contact facet in the z = 0 plane for the m = 0 state, which is highly oriented along the z axis. So we get the result (25) for t. From this t we arrive at Eq. (4) for the localization length and Eq. (3) for the critical concentration n_c. A schematic plot of n_c as a function of the facet radius ρ is presented in Fig. 10. The critical concentration scales as 0.3/ρ³ at all ρ ≪ d/2. In the vicinity of ρ = d/2, electrons are no longer confined inside each NC and the film becomes a bulk semiconductor. In this case n_c a_B³ ≈ 0.02: we return to the Mott criterion for the IMT and get a drastic drop of the critical concentration from ∼2/d³ to 0.02/a_B³.

VII. NANOCRYSTALS TOUCHING AWAY FROM FACETS

When NCs touch each other away from prominent facets, over an area of atomic size a ≪ ρ, the electrons tunnel mainly via an effective "b-contact" of radius ρ_b = √(db/2) ≫ a (see Fig. 11). For electrons tunneling between NCs outside this contact, the tunneling distance is larger than b and the probability is negligible, due to the exponentially decaying wave function. Since the electrons have to tunnel through the medium, where they have mass m, the mass m should be used when calculating the integral of Eq. (16). In this case we can use the LCAO approximation in the way it was done for the ground state in Ref. 20. We calculate ψ = ψ₀ as the wave function of a single spherical NC embedded in the infinite surrounding medium with the finite decay length b. For simplicity, in this section and below we do only a scaling analysis, ignoring numerical coefficients. Using the continuity of the wave function on the NC surface, we match the interior solution to a decaying exterior tail built from h_l^{(1)}, the first-kind spherical Hankel function; only Y_l^0(θ′, φ′) is nonzero at θ′ = 0, corresponding to the state participating in the tunneling. The origin is set at the touching point of the NCs, with the z axis pointed towards the center of the right NC, so that the boundary plane is at z = 0 (see the vertical dashed line in Fig. 11).
As before, (r′, θ′, φ′) are the coordinates of r′ = r − r_L in the spherical coordinate system, and r_L is the coordinate of the center of the left NC. For the finite potential barrier U₀, the derivative of the wave function divided by the mass is continuous across the surface; the matching involves the large parameters d/b ≫ 1 and m/m* ≫ 1, with k_F d ≫ 1 at high doping concentration and k_F d ∼ 1 for the ground state. At 1/k_F b ≫ m/m*, the cotangent function diverges, which means cos(k_F d/2 + ϕ_l) ≈ 1 and 1/sin(k_F d/2 + ϕ_l) ≈ −m*/mk_F b. So on the boundary plane inside the b-contact (r′ − d/2 = 0⁺), we obtain the boundary value of the wave function and, from it, the tunneling matrix element of Eq. (33). At 1/k_F b ≪ m/m*, the cotangent function either vanishes or is finite, depending on whether d/b ≪ m/m* or k_F d ≫ 1 is satisfied. This means the sine function is always finite and of order 1. So inside the b-contact we get the corresponding boundary value of the wave function and the tunneling matrix element of Eq. (35). One can check that when we put k_F⁻¹ ∼ d into Eqs. (33) and (35), we get the same tunneling matrix elements for the ground state as derived in Ref. 20 for NCs touching at one point. According to Eqs. (33) and (35), the localization length then follows from Eq. (10). This leads to the critical concentration, which has its minimum value n_c ≈ 1/(db)^{3/2} = 1/ρ_b³ at m/m* ≈ d/b. Even this minimum value is much larger than 1/ρ_a³, since ρ_a ≫ ρ_b. Thus, when NCs touch away from prominent facets, the critical concentration is pushed much higher. In fact, for CdSe NC films, using b = 0.1 nm, d = 5 nm, m = m_e and m* = 0.13m_e 45, where m_e is the free electron mass, we get n_c ≈ 3 × 10²¹ cm⁻³, which is difficult to achieve.

When NCs are separated by short ligands 46 at a small distance s, the overlapping wave functions decay as ∝ e^{−s/b} between neighboring NCs. Following a procedure similar to the above derivations, we can get the tunneling matrix element t of Eq. (39). At large s we can ignore the logarithmic terms originating from the prefactor of t, but for small s, near the IMT, the role of these terms becomes important. One should note that even when NCs touch by short ligands, the localization length of the electrons can be enhanced by increasing the doping concentration n inside each NC. The critical concentration n_c, however, can easily become unrealistically large.

VIII. RANDOM-SPECTRUM NC

In the previous sections, we studied the highly degenerate case, assuming that the splitting of the (n, l)-shells is much smaller than the energy gap ∆. In this section, we discuss the limits of applicability of this assumption and find the localization length for strongly split and mixed (n, l)-shells, which form a random spectrum similar to the case of metal granules 28-30,47,48. We show below that this happens at relatively small ∆/E_c < N^{1/3}, i.e., in the domain below the dashed lines in Figs. 7 and 8. The theory of this section is thus applicable to large enough NCs made from Si or ZnO. Besides shifting the whole ladder of degenerate levels up and down, as discussed above, the random electric field created by the neighboring charged NCs can split the degenerate shells of each NC via the Stark effect. This field, determined by the nearest-neighbor NCs, is E ∼ e√N/ε_f d². Electrons in the NCs respond to the internal field, which is smaller than E by the factor 3/(2 + ε/ε_f). As we said in Sec. II, ε/ε_f ≈ 3, so this factor is 3/5 and we will ignore it. To calculate the Stark splitting, we first note that the matrix element of the electric field potential is nonvanishing only between shells with l values differing by unity, and is then of the order of eEd.
The typical energy difference between such shells in the spherical well with N electrons is N^{1/3}∆. Therefore, the typical Stark energy shift, or the width W of the split shell, emerges in second-order perturbation theory and is

W ∼ (eEd)²/N^{1/3}∆ ∼ E_c²N^{2/3}/∆.   (41)

(The Stark splitting can also come from the random positions of the N donors inside each NC and is comparable to Eq. (41). This disorder creates an internal dipole moment ∼ √N ed and an electric field oriented in a random direction.) Comparing Eq. (41) with the energy gap ∆ between consecutive shells, we see that at

∆/E_c < N^{1/3}   (42)

the levels become random, with the spacing δ = ∆/N^{1/3} as the only characteristic energy 49. The line ∆/E_c = N^{1/3} is shown in Figs. 7 and 8 by the dashed lines separating the I and I′ phases. When the inequality (42) holds, the degeneracy is broken and different (n, l)-shells mix with each other. Thus, inside each NC the states close to the Fermi level, and therefore involved in the electron tunneling, typically have different l numbers, so that they have different parity and their tunneling matrix elements t have random signs. The electron wave functions of different m hybridize and become chaotic, instead of being confined within certain polar angles. So the typical magnitude of the wave function on the contact facet is √(k_F d) times smaller than that of the "red" m = 0 state of the degenerate case. These changes lead to the random-matrix-spectrum case, which has been studied in previous work for larger dots 28-30,47,48. In this case the typical tunneling matrix element, with k_F given by Eq. (6), is

t ∼ ℏ²k_F²ρ³/m*d³.   (44)

At the same time, the energy gap between consecutive non-degenerate levels is also reduced, to δ ≃ ∆/(2l + 1) ≃ ℏ²/m*d³k_F. Then, according to Refs. 28 and 30, the localization length is given by Eq. (46), where a_b = ε_fℏ²/m*e² and ε_f is the effective dielectric constant of the film. According to Eq. (46), at t ≃ δ the localization length is still much smaller than the NC diameter d, which seems to indicate a criterion for the IMT different from t ≃ δ. However, one should notice that as t → δ, the charge discreteness is no longer well preserved and the charging energy vanishes 50,51, so δ takes the place of E_c and changes the expression for ξ to d/ln(δ/t). Using Eq. (44) and δ ≃ ℏ²/m*d³k_F, we get t ≃ δ at k_F ρ ∼ 1. The localization length ξ becomes comparable to d at this point. This again leads to our criterion Eq. (3), the same as for the degenerate case. Since this elimination of the charging energy occurs in the vicinity of the IMT, we should see a steep growth of the localization length there, which is a major feature distinguishing this case from the degenerate one. According to Eq. (3), n_c ≫ n_M at ρ ≪ a_B. The critical concentration decreases with ρ and saturates at n_M when ρ ∼ a_B.

IX. CONCLUSION

In this paper we have studied theoretically what happens to the variable range hopping conductivity of semiconductor NC films when the NCs are doped by donors with concentration n. Experiments show that the localization length of electrons ξ(n) grows with n and at some n = n_c becomes larger than the diameter d of the NCs, which signals that the film is approaching the insulator-metal transition (IMT). We provide theoretical estimates of ξ(n) and n_c. The localization length is determined by the competition between the disorder and the transfer matrix element t(n) between neighboring NCs. We concentrated on the case of small spherical NCs, in which the electron spectrum consists of degenerate energy shells separated by the quantization gap ∆.
In such films the energy levels of the NCs vary due to the dispersion of NC diameters and due to variations of the number of donors from NC to NC, which result in random Coulomb potentials. We showed that for the standard diameter dispersion the former mechanism is the important one at ∆/E_c > 5, where E_c is the charging energy, while the Coulomb disorder dominates in the opposite case ∆/E_c < 5. The matrix element t(n) grows with n and depends on the geometry of the contacts between NCs. We calculated t(n) for different types of contacts. We showed that for a finite separation between NCs, or even when NCs touch each other at one point, the IMT may require an unrealistically large n. This is why we focused on the case when close-to-spherical NCs touch by small facets. We found ξ(n) in this case, and our results are in qualitative agreement with the experimental data for ξ(n) obtained in Ref. 15. For these facets n_c is still relatively high, and for d = 5 nm CdSe NCs it corresponds to N ∼ 20 electrons per NC, which justifies our large-N approach. To make n_c smaller, one should deal with small NCs with ∆/E_c > 5 and use NCs touching by larger facets. Another route is to make the dispersion of diameters much smaller, but this route does not look realistic.
Metasequoia glyptostroboides potentiates anticancer effect against cervical cancer via intrinsic apoptosis pathway

This study was undertaken to investigate the anticancer effects of organic extracts derived from the floral cones of Metasequoia glyptostroboides. Dried powder of M. glyptostroboides floral cones was subjected to methanol extraction, and the resulting extract was further partitioned by liquid-liquid extraction using the organic solvents n-hexane, dichloromethane (DME), chloroform, and ethyl acetate, in addition to deionized water. HeLa cervical cancer cells and COS-7 cells were used as a cancer cell model and a normal cell control, respectively. The anticancer effect was evaluated using the Cell Counting Kit-8 assay. The viability of COS-7 cells was found to be 12-fold higher than that of HeLa cells under administration of 50 µg/ml of the DME extract. Further, the sub-G1 population was determined by FACS analysis. The number of cells in the sub-G1 phase, which indicates apoptotic cells, increased approximately fourfold upon treatment with the DME and CE extracts compared with the negative control. Furthermore, RT-qPCR and western blotting were used to quantitate the relative RNA and protein levels of the cell death pathway components, respectively. Our results suggest that the extracts of M. glyptostroboides floral cones, especially the DME extract, which possesses several anticancer components as determined by GC-MS analysis, could be a potential natural anticancer agent.

Cervical cancer affects many women in developing countries and has a high mortality rate. Although cervical cancer screening programs have reduced the incidence and mortality of cervical cancer, the incidence among young women remains a public health problem 1. In addition, the administration of chemotherapeutic agents and multidrug resistance are accompanied by severe side effects that cause negative gynecological and obstetric outcomes 2,3. Tumor cells in the ovary can metastasize through the lymphatic and circulatory systems if not blocked by the anatomical barriers 4. Surgical resection and systemic chemotherapy can affect drug efficacy. Consequently, high chemotherapeutic doses that are harmful to human health are the only options to treat the mucosal and epithelial membranes of the affected tissues. Doxorubicin (DOX) is an efficient chemotherapeutic agent that is widely used for the treatment of various cancers 5. However, adverse side effects, poor biodistribution, and high toxicity to non-cancerous cells limit the use of DOX 6,7. Targeting apoptosis avoidance, a major feature of cancers, is the most effective nonsurgical strategy for the treatment of cancers, because it is not specific to the cause or type of cancer 8. Apoptosis is mediated by an intrinsic or extrinsic pathway, depending on the origin of the apoptotic stimulus. These pathways are also known as the mitochondrial and death receptor pathways, respectively 8. The intrinsic apoptosis pathway is activated inside the cell by the Bcl-2 protein family and is independent of receptor signal transduction 8,9. The extrinsic pathway is primarily mediated by signaling through membrane-bound receptors belonging to the tumor necrosis factor (TNF) superfamily, which contain the so-called "death domain" 9. Metasequoia glyptostroboides Miki ex Hu (M. glyptostroboides) is a deciduous conifer of the redwood family Cupressaceae and is distributed in Europe as well as in many parts of East Asia and North America 10.
Herbal plant-based extracts have shown enormous potential in the treatment of various diseases, including cervical cancer. Although M. glyptostroboides-derived extracts and secondary metabolites have been found to exhibit numerous biological and pharmacological activities 11-13, the effect of M. glyptostroboides floral cone extracts on cervical cancer has not been addressed to date. Therefore, in this study, M. glyptostroboides floral cone extracts prepared with various organic solvents were assessed for their anticancer effects on the cervical cancer cell line HeLa versus its normal counterpart, COS-7 cells. In addition, chemical component analysis of the DME extract was performed by GC-MS.

Methods

Materials. The organic solvents n-hexane, dichloromethane, chloroform, and ethyl acetate were purchased from Daejung (Korea). Dulbecco's Modified Eagle's Medium (DMEM), penicillin/streptomycin, fetal bovine serum (FBS), phosphate-buffered saline (PBS), and trypsin were purchased from Gibco (Carlsbad, CA). The Cell Counting Kit-8 (CCK-8) was purchased from Dojindo Co. Ltd. (Beijing, China).

Sample preparation. The floral cone powder of M. glyptostroboides was subjected to methanol extraction at a ratio of 1:10 (w/v) for 1 week and then filtered through a 0.45-μm Whatman No. 1 filter paper. The supernatant was dried using a rotary evaporator (N-1110S-W, Eyela, Tokyo, Japan), resulting in the crude methanol extract (ME) of M. glyptostroboides floral cones with a yield of 9.40%. To prepare the different organic extracts, ME was further dissolved in deionized (DI) water and successively partitioned using the n-hexane (HE), dichloromethane (DME), chloroform (CE), and ethyl acetate (EAE) solvents. Each solvent layer was separated and dried using a rotary evaporator, resulting in the crude HE, DME, CE, and EAE extracts of M. glyptostroboides floral cones. To prepare the test samples, each extract was dissolved in dimethyl sulfoxide (DMSO). A detailed extraction procedure is provided in Fig. 1.

Cell cultures. HeLa (human cervical carcinoma) and COS-7 (African green monkey kidney) cells were cultured in DMEM medium with 10% FBS and 1% penicillin/streptomycin. Cells were incubated at 37 ℃ in humidified air with 5% CO₂.

Gas chromatography-mass spectrometry (GC-MS) analysis. GC-MS analysis of the DME extract was performed using a GC-Thermo Jeol JMS700 apparatus following our previously reported method, and components were identified using the GC-MS-based NIST and Wiley libraries 14.

Cytotoxicity assay. For the cytotoxicity assay, COS-7 and HeLa cells were incubated with various concentrations of the HE, DME, CE, EAE, and DE extracts of M. glyptostroboides floral cones. After 24 h of incubation, cells were treated with the CCK-8 reagent, and viability was determined from the absorbance at 450 nm.

Quantification of sub-G1 phase by flow cytometry analysis (FACS). FACS analysis was carried out to determine the cytotoxic effect of the HE, DME, CE, EAE, and DE extracts of M. glyptostroboides. The sub-G1 population was quantified using propidium iodide (PI; Sigma-Aldrich) staining and a cellometer. HeLa cells were seeded in 6-well plates at a density of 1 × 10⁶ cells per well, cultured for 24 h, and then treated with 50 μg/ml of the HE, DME, CE, EAE, or DE extract for 24 h. Cells were harvested by trypsinization and fixed with 80% cold ethanol in PBS for 30 min. Cells were then washed twice with cold PBS and centrifuged at 2000 rpm. The pellet was resuspended in PBS and stained with 50 μg/ml PI in PBS containing 100 μg/ml RNase A. Cells were then incubated at 37 ℃ for 40 min.
Afterward, their DNA content was analyzed using the cellometer instrument.

Western blot analysis. HeLa cells were pretreated with 50 µg/ml of the HE, DME, CE, EAE, or DE extract of M. glyptostroboides floral cones. Total protein was obtained by lysing the cells with RIPA buffer containing protease and phosphatase inhibitors (ThermoFisher Scientific, UK). Protein samples were subjected to SDS-PAGE and electro-transferred onto polyvinylidene difluoride membranes (Millipore, Burlington, MA). The membranes were blocked using 5% skim milk in Tris-buffered saline-Tween 20 (TBST), incubated overnight at 4 ℃ with the primary antibodies, and then washed with TBST. The membranes were subsequently treated with the appropriate secondary antibodies for 4 h. β-actin was used as an internal standard. ImageJ software was used to analyze the band intensities.

RT-qPCR analysis. The mRNA levels of apoptosis markers in HeLa cells were assessed using RT-qPCR. Total RNA was collected using the TRIzol reagent (Life Technologies, Carlsbad, CA) and reverse-transcribed using the PrimeScript RT reagent Kit (Takara Bio Inc., Kusatsu, Japan). The CFX96 system (Bio-Rad Laboratories, Hercules, CA) and iQ SYBR Green Supermix (Bio-Rad Laboratories) were used for qPCR. β-actin mRNA levels were used to normalize the p53, B-cell lymphoma 2 (Bcl-2), and Bcl-2-associated X (BAX) mRNA levels. The 2^(−ΔΔCT) method was used to determine the relative mRNA levels. The primer sequences used in this study are provided in Supplementary Table S1.

Detection of reactive oxygen species (ROS) generation. ROS generation was detected according to previous studies, with some modifications 16. To detect ROS at the chemical level, a 500 µl sample of 1 mM 2′,7′-dichlorodihydrofluorescein diacetate (DCF-DA; 287810, Merck Millipore, Germany) was reacted with 2 ml of 10 mM NaOH for 30 min in a darkroom to achieve complete deacetylation. The mixture was then neutralized with 10 ml of PBS. Each sample in PBS was mixed with the DCF-DA solution and horseradish peroxidase (HRP; P8375, Sigma, USA) (2.2 units per ml) at a ratio of 1:1:1 and reacted in a darkroom for 30 min. Centrifugation proceeded at 13,000 rpm and 4 ℃ for 10 min. The supernatant was transferred to a 96-well black plate, and the fluorescence intensity of DCF was observed at excitation and emission wavelengths of 485 nm and 535 nm, respectively. The standard curve was obtained from an H₂O₂ solution. Further, the generation of ROS at the cellular level was evaluated using DCF-DA (Cellular ROS assay kit, Abcam). In the presence of HRP and H₂O₂, DCF-DA is converted to the highly fluorescent 2′,7′-dichlorofluorescein (DCF). The ROS assay was performed according to the supplier's instructions. Confluent cells, incubated with 50 µg/ml of each extract for 6 h in 12-well cell plates, were treated with 1 mM H₂O₂ for 30 min. The cells were washed twice with PBS and incubated with 10 mM DCF-DA for 40 min at 37 ℃ in the dark. The cells were then washed twice with PBS and analyzed by a microplate reader at excitation and emission wavelengths of 485 nm and 530 nm, respectively.

Statistical analysis. Data are presented as the mean ± standard deviation of three independent experiments and were analyzed by Student's t-test, with p < 0.01 considered significant.
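A minimal sketch of the 2^(−ΔΔCT) calculation used for the RT-qPCR data, with β-actin as the reference gene as in the paper; the Ct values below are hypothetical and serve only to illustrate the arithmetic.

```python
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^(-ddCt) method (reference gene: beta-actin)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Hypothetical Ct values for a target gene in treated vs. control cells.
fold = rel_expression(ct_target=24.1, ct_ref=17.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=17.1)
print(f"relative expression: {fold:.2f}-fold")   # ~3.5-fold with these numbers
```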
Results and discussion

GC-MS analysis of DME. GC-MS analysis of DME resulted in the identification of 45 different chemical compounds (Supplementary Table S2). The majority of the compounds belonged to the organic acids, terpenes, and phenolic compounds, especially terpenes and quinones such as ferruginol 17 and taxodione 18, which contributed significant amounts of the total chemical composition of DME, 14.58% and 2.21%, respectively. The anticancer activities of these compounds are mediated by the presence of hydroxyl groups 17,18; other biologically active components, such as estradiol 19, have also been reported to be anticancerous and/or antitumorous in nature. The results of the GC-MS analysis thus indicate that the reported anticancer activity of DME could be mediated by these bioactive components present in the extract.

In brief, dried powder of the floral cones of M. glyptostroboides was successively extracted with methanol and partitioned using the organic solvents n-hexane, dichloromethane, chloroform, and ethyl acetate, in addition to DI water (Fig. 1). The cytotoxic effects of the extracts on HeLa cervical cancer cells were assessed alongside COS-7 cells as the non-cancer cell control, using CCK-8. Cells were treated with the HE, DME, CE, EAE, and DE extracts for 24 h at concentrations of 6.25, 12.5, 25, and 50 µg/ml per extract. Figure 2a,b shows that the HE, DME, and CE extracts had considerable anticancer effects on HeLa cells in a dose-dependent manner. The viability of COS-7 cells was 2-, 12-, and 1.3-fold higher than that of HeLa cells under administration of 50 µg/ml of the HE, DME, and CE extracts, respectively. We also confirmed a lower cytotoxicity of our test samples toward normal cervical epithelial cells than toward HeLa cells (Supplementary Fig. S1).

Quantification of sub-G1 phase and nuclear disruption in HeLa cells. To assess the anticancer effects of the various organic extracts of M. glyptostroboides on HeLa cells, we performed Hoechst 33342 staining and a cell cycle assay. Figure 3 shows the results of the Hoechst 33342 staining of HeLa cells treated with the M. glyptostroboides-derived extracts. Cells treated with the DME and CE extracts displayed brightly colored, condensed, fragmented nuclei, indicating nuclear disintegration and suggesting apoptotic cell death 22. In contrast, the HE, EAE, and DE extracts did not produce apoptotic nuclei. Further, the staining results were quantified as an apoptotic index, indicating 36.67 ± 4.93% and 28.66 ± 3.79% apoptotic nuclei in the cells treated with DME and CE, respectively. Moreover, the phenotypic characteristics of irregular cell morphology and membrane blebbing, especially in cells treated with the DME and CE extracts, indicated an apoptotic morphology (Fig. 3). These results are consistent with the results of the cell viability assay shown in Fig. 2, suggesting that the extracts induced apoptosis in HeLa cells.

To assess the anticancer effects of the various organic extracts of M. glyptostroboides on HeLa cells, the sub-G1 phase of the cell cycle was determined using PI staining (Fig. 4 and Supplementary Fig. S2). PI can permeate the damaged membranes of apoptotic cells and stain the nuclei 23,24, and apoptotic cells accumulate in the sub-G1 phase. The proportion of cells in the sub-G1 phase upon treatment with the DE extract was 10.2 ± 1.6%, similar to that in the negative control (9.4 ± 2.3%), HE (10.3 ± 3.6%), and EAE (10.03 ± 2.1%).
The maximum sub-G1 population was observed with DME (42.76 ± 1.7%), followed by CE (39.7 ± 0.81%); these were significantly higher, by 4.5- and 4.2-fold, than the negative control (Fig. 4). Collectively, the results of the cell cycle analysis, cell viability assay (Fig. 2b), and Hoechst 33342 staining showed that the DME and CE extracts induced apoptosis, suggesting the anticancer behavior of these extracts. These findings are in strong accordance with a recent study in which a plant-derived secondary metabolite was shown to induce cell cycle arrest and consequent apoptosis in HeLa cells 20.

Analysis of the apoptotic pathway in HeLa cervical cancer cells. Apoptosis is an important cellular process, and it is divided into two main pathways: extrinsic and intrinsic. To identify the apoptotic pathway induced in HeLa cells by the M. glyptostroboides extracts, we performed western blotting to evaluate the changes in the protein levels of the extrinsic apoptosis pathway components BID, cleaved caspase (Cl Cas)-3, and Cl Cas-8. Cas-8 is the initiator of the extrinsic apoptosis pathway. Stimulation of the death receptor activates Cas-8, which then cleaves BID 25. Cleaved BID, also known as truncated BID (tBID), induces the intrinsic pathway. Figure 5b shows the quantification of the data from the western blot analysis of Fig. 5a. The Cl Cas-8 level increased approximately 2.4- and 2.1-fold in HeLa cells treated with the DME and CE extracts, respectively, relative to the level in the negative control. The upregulated Cl Cas-8 levels increased the cleavage of BID into tBID. As shown in Fig. 5b, the tBID levels in HeLa cells treated with the DME and CE extracts were 2.3- and 1.7-fold higher, respectively, significantly different from the negative control. The level of Cl Cas-3, a member of the cysteine-aspartic acid protease family, was also significantly increased in the cells treated with the DME and CE extracts relative to the level in the negative control. These results strongly support the conclusion that the DME and CE extracts activate caspase-8 to induce apoptosis in HeLa cells via the extrinsic pathway. In addition, a significantly higher amount of cleaved PARP, a well-known marker of DNA damage and apoptosis 26, was noticed in the cells treated with DME and CE (Fig. 6), suggesting the apoptotic potential of these extracts.

The production of tBID in the extrinsic pathway may affect the intrinsic pathway. Therefore, we evaluated the mRNA levels of BAX, p53, and Bcl-2, which are involved in the intrinsic pathway. Figure 7 shows the temporal mRNA levels of Bcl-2, BAX, and p53 in HeLa cells as assessed by qRT-PCR. Interestingly, the Bcl-2 mRNA level increased with time in cells treated with the HE, DME, or CE extract; after 10 h, increases of 2.5-, 4.3-, and 4.5-fold were observed, respectively, relative to the level in the negative control group. After 10 h of treatment with the HE, CE, or DME extract, the BAX mRNA level was upregulated by 1.3-, 1.9-, and 3.4-fold, respectively. Additionally, the p53 mRNA level increased approximately 1.5-fold upon 10 h of treatment with the HE, DME, or CE extract (Fig. 7a). However, no significant changes were observed with the EAE and DE extracts. The BAX mRNA level was increased by the DME and CE extracts in a time-dependent manner, but the Bcl-2 mRNA level also increased (Fig. 7b,c). Both BAX and Bcl-2 are involved in the intrinsic apoptosis pathway. BAX is a pro-apoptotic protein regulated by the tumor suppressor protein p53.
Conversely, Bcl-2 is an anti-apoptotic protein that binds to BAX. When HeLa cells were treated with the HE, DME, and CE extracts, the intrinsic pathway was counteracted by the upregulated Bcl-2 levels while being promoted by the upregulated p53 and BAX levels. In addition, reactive oxygen species, one of the major factors in the intrinsic apoptosis pathway, differed between samples at the chemical level but showed no difference at the cellular level (Supplementary Fig. S3).

Taken together, the M. glyptostroboides-derived organic extracts, particularly the DME extract, which may contain anticancer terpenoid compounds 13, induced Cl Cas-8, resulting in increased cleavage of BID into tBID, which probably promotes the intrinsic pathway. However, Bcl-2, which increases simultaneously with the BAX and p53 levels, is thought to interfere with the intrinsic pathway (Fig. 8).

Conclusions

The DME organic extract derived from the floral cones of M. glyptostroboides, containing anticancerous quinone, terpene, and steroid components, showed high cytotoxicity toward cervical cancer cells (HeLa) and low toxicity toward normal cells (COS-7). Furthermore, the Hoechst 33342 staining, cell cycle assay, western blotting, and RT-PCR results showed that the cytotoxicity of the extract resulted from the induction of the extrinsic apoptosis pathway. In particular, cleaved Cas-8, which plays an important role in extrinsic apoptosis, was upregulated by 27.17-fold relative to the level in the negative control, indicating the anticancer potential of the DME extract. However, further research is needed to unravel the death receptor and the corresponding extrinsic pathway, to elucidate the exact apoptosis mechanism induced by the DME extract and its biomedical potential.

Figure 8. Working model for the mechanism underlying the induction of apoptosis in HeLa cells by the DME extract. The dotted arrows indicate that more evidence is needed to establish a correlation. The solid arrows denote activation. "−" indicates inhibition.
Coprecipitation Methodology Synthesis of Cobalt-Oxide Nanomaterials Influenced by pH Conditions: Opportunities in Optoelectronic Applications

The cobalt oxide (Co₃O₄) nanomaterials were prepared by the coprecipitation synthesis technique, maintaining the pH of the mother solution at 7, 8, and 9. The prepared nanomaterials were subjected to structural and optical characterizations, and the results were examined. The optical absorption spectra reveal two absorption bands, indicating ligand-metal coordination. The photoluminescence spectra contain emission peaks at 488 and 745 nm, related to the size and shape of the synthesized materials. The magnetic nature of the samples was identified from the hysteresis loops traced by vibrating sample magnetometry (VSM). The Fourier transform infrared (FT-IR) spectrum of the Co₃O₄ nanomaterials reveals two sharp absorption bands at 584 and 666 cm⁻¹, ascribed to the Co-O and O-Co-O stretching, respectively. As the pH of the solution varied from 7 to 9, the SEM images authenticate the transformation of the Co₃O₄ nanomaterial morphology from spherical to cubic to agglomerated shapes. From the UV-Vis spectra, two absorption bands, around 473 nm and 762 nm, are observed for the materials prepared at pH 7 and 8, but at pH 9 these two peaks shift towards higher wavelengths, 515 nm and 777 nm. The observed ferromagnetic nature of the Co₃O₄ nanomaterials clearly shows the role of surface spins and surface morphology in the magnetic properties of Co₃O₄ nanomaterials. The cyclic voltammetry (CV) curves show a rectangular type of voltammogram, which is an indication of good charge propagation at the electrodes. The Nyquist plots of the Co₃O₄ nanomaterials have a semicircle in the high-frequency region and a vertical line in the low-frequency region. The results suggest that Co₃O₄ is a promising material for the fabrication of light-emitting diodes, solar cells, and optoelectronic devices.

Introduction

Nanosized semiconducting compounds have been attracting researchers' interest due to their structural, chemical, physical, optical, and magnetic properties, which differ from those of their bulk counterparts [1]. p-type semiconductors such as cobalt oxide (Co₃O₄) nanomaterials have been attracting attention due to their excellent properties [2,3]. Cobalt oxide has been widely investigated for various applications, such as sensors, electrochromic devices, catalysts, and lithium batteries [4,5]. Cobalt oxide shows interesting optical, magnetic, and electrochemical properties when it is reduced to the nanometer scale [6]. It exhibits a wide range of morphological structures, such as nanorods, core shells, nanowires, helixes, nanobelts, nanoplatelets, and nanotubes, with a high surface-to-volume ratio [7]. Various synthesis methods have been employed to synthesize Co₃O₄ nanomaterials, including physical deposition, wet-chemical, and thermal routes [8]. In the synthesis of metal oxide nanoparticles via the hydrothermal method, the pH of the solution significantly affects the purity, morphology, and structure of the nanoparticles. For instance, Bi₂O₃ nanomaterials synthesised at a pH of 13 show a pure monoclinic phase with a higher degree of crystallinity compared with the nanomaterials prepared at pH 11 and 12 [9].
In the case of ZrO₂-based nanomaterials, reaction solutions at pH 7, 8, and 9 provide ZrO₂ of good crystalline nature [10]. The ZrO₂ crystal sizes were 12, 14, and 16 nm for pH 7, 8, and 9, respectively [10]. Moreover, the ZrO₂ prepared at pH 7 exhibited a ferromagnetic nature, against the diamagnetic nature exhibited by the materials prepared at pH 8 and 9 [10]. Similarly, in the case of SnO₂ nanomaterials synthesized by a microwave-assisted method, the morphology changes (from a spherical shape to a flower shape) when the pH is varied from 7 to 9 [11]. Moreover, the magnetic behaviour changes from ferromagnetic to diamagnetic [11]. Many parameters, including physical, chemical, magnetic, and optical properties (energy gap, crystal structure, and luminescence), exhibit favourable changes when the pH is varied. Hence, many works consider the effect of pH during the preparation and characterisation of nanomaterials using chemical methods [12]. The present work is therefore focused on the synthesis of Co₃O₄ nanomaterials by the coprecipitation method and on studying the effect of pH on the structural, morphological, and physical properties of the Co₃O₄ nanomaterials. Finally, the chromaticity parameters of the Co₃O₄ nanomaterials were evaluated.

Experimental

2.1. Materials

2.1.1. Cobalt Oxide Nanomaterial Preparation. In general, a cobalt chloride solution was mixed with various concentrations of ammonia solution, and the resulting product was calcined. In this work, three samples were prepared by maintaining the pH of the reaction solution at 7, 8, and 9. Typically, a 0.1 mol cobalt chloride (Sigma-Aldrich, ≥98%) solution was prepared by adding 50 ml of demineralised water (solution A). Then, 20 ml of ammonium hydroxide (~28% NH₃ base, Sigma-Aldrich) solution was added to 30 ml of demineralised water (solution B). Solutions A and B were mixed and stirred for 30 minutes. In this reaction, the ammonia (NH₄OH) solution was added to maintain the required pH. The final solution was stirred for 6 h. The obtained precipitate was washed successively with ethanol and deionised water. The washed material was then dried at 80 °C for 12 h in a hot-air oven and calcined at 500 °C for 5 h. The ammonium hydroxide concentration was adjusted according to the pH value required in solution B.

Materials Characterisation. The powder XRD patterns of the prepared samples were recorded on an X'Pert (High Score) instrument over the 2θ range of 20°-80° in steps of 0.02°. From these patterns, information regarding particle size and defects was obtained. The FT-IR spectrum was recorded using an Avatar 330 in the wavenumber range 400-4000 cm⁻¹, with the KBr pellet technique; the spectrum can be used to identify and differentiate the molecules present in the sample. The synthesized powder was analysed with an FEI Quanta FEG 200 high-resolution scanning electron microscope, employed to study the crystal structure and morphology of the samples. The UV-Vis absorption spectra of the synthesized materials were recorded from 190 nm to 1100 nm using a Shimadzu spectrophotometer, employed for the quantitative analysis of the samples. The photoluminescence responses of the samples were examined using a Jobin Yvon FLUOROLOG-FL3-11 spectrophotometer under an excitation wavelength of 330 nm; photoluminescence arises when the emitting excited state is generated by the absorption of a photon.
The Lake Shore vibrating sample magnetometer (model 7404) was used to study the magnetic properties of the synthesized nanomaterials. The colour coordinates of the prepared samples were investigated within the framework of CIE 1931. The electrochemical performance of the Co₃O₄ nanomaterials was evaluated using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) in a three-electrode system. The working electrode was made of the active material Co₃O₄ (80 wt.%), active carbon black (10 wt.%), and polyvinylidene fluoride (10 wt.%). The slurry was coated onto stainless steel, which served as the current collector. Scan rates of 10, 20, 30, 50, and 100 mV s⁻¹ and a potential window of −0.2 to 1 V were used for the measurements. In this experiment, a 1 M KOH solution was used as the electrolyte.

Results and Discussion

3.1. XRD Analysis. The X-ray powder diffraction (XRD) patterns of the synthesised Co₃O₄ nanomaterials, together with the JCPDS data used for comparison, are depicted in Figure 1. Figure 1(a) represents the XRD patterns of the prepared samples. All the peaks coincide with JCPDS file no. 76-1802, which suggests a cubic spinel structure with space group Fd3m [13]. The peak intensity of the prepared Co₃O₄ nanomaterials increases with increasing pH, which suggests that the prepared materials are of good crystalline nature. This was further confirmed by the variation in crystallite size, calculated from the Scherrer equation

D = 0.9λ/(β cos θ),

where D is the crystallite size, λ is the X-ray wavelength (CuKα, λ = 1.5406 Å), β is the FWHM, and θ is the diffraction angle. The calculated crystal sizes decrease from 14 to 10 nm as the pH changes from 7 to 9. The diffraction peaks shift slightly from right to left with increasing pH of the solution (Figure 1(b)). This is because the different pH of the reaction solution provides energy for the molecules to occupy their proper equilibrium sites, resulting in an improvement of the crystalline properties and degree of orientation of the Co₃O₄ nanomaterials.

Fourier Transform Infrared Spectroscopy. FTIR spectroscopic analysis was done to determine the functional groups and the purity of the synthesized metal oxide nanoparticles. In Figure 2, two major sharp bands observed at 584 cm⁻¹ and 666 cm⁻¹ are ascribed to the symmetric stretches of Co-O [14] and O-Co-O [15], respectively. The broad band at 3300 cm⁻¹ is attributed to O-H stretching. The weak IR bands at 1652 and 1619 cm⁻¹ correspond to the symmetric and asymmetric stretching of H-O-H due to the adsorption of moisture. These OH and H-O-H moisture bands may be observed because the sample pellets were exposed to ambient air.

Figure 3 shows the scanning electron microscope (SEM) images of the synthesized nanomaterials. The particles are of uniform size and well-defined spherical shape. Moreover, when the pH of the solution was increased, the particle shapes grew in certain directions, from spherical and cubic (Figure 3(a)) to the regular cubic structure (Figure 3(b)). Figure 3(c) shows agglomerated, spherical-shaped particles. Here, the pH of the solution plays a major role in the formation of the nucleation sites that determine the shape of the particles. Another reason for such a morphological change might be the presence of OH⁻ ions in the precursor solution, which creates noncovalent bonding interactions.
This intermolecular interaction during the reaction is often referred to as the steric effect, which raises the repulsive forces between overlapping electron clouds. This supports the observed transformation from spherical to cubic to agglomerated shapes.

UV-Vis Spectra and Optical Bandgap. The UV-Vis spectra of the solutions were measured using a UV-Vis spectrophotometer (Lambda 25, Perkin Elmer Inc., Shelton, CT, USA) from 200 nm to 1200 nm in 10 mm quartz cuvettes at room temperature; ε values are given in M⁻¹cm⁻¹. Figure 4 shows the optical absorption spectra of the synthesized materials. Two absorption bands, around 473 nm and 762 nm, are observed for the materials prepared at pH 7 and 8. At pH 9, these two peaks are shifted towards higher wavelengths, 515 nm and 777 nm, respectively. These two absorption bands are associated with the O²⁻→Co²⁺ and O²⁻→Co³⁺ charge-transfer processes in the Co₃O₄ nanomaterials. The peak at 276 nm fits the bonding-antibonding nature of the (π-π*) electronic transition between cobalt and oxygen [16]. The band gaps were calculated as 2.5, 2.4, and 2.2 eV for the materials prepared at pH 7, 8, and 9, respectively (Figure 5).

3.5. Photoluminescence Spectra. The photoluminescence spectra of the Co₃O₄ nanomaterials are shown in Figure 6. It is observed that the intensity of the peaks increases with the alkalinity of the samples (increase in pH). The respective emission peaks observed at 488.85 nm, 745 nm, and 488 nm in the spectra are referred to as deep-level emission, as they assert the radiative transition from donors to acceptors [21]. These results indicate that the radiative transition from donors (cobalt interstitials, oxygen vacancies) to acceptors (cobalt vacancies, oxygen interstitials) is active only at certain pH, concentration, etc., of the mother solution. The emission band at 488 nm represents the interband transition between the occupied states at the Fermi level and the unoccupied conduction band; it is observed here due to the different particle sizes [22] and surface morphologies [23]. Radiative recombination occurs in the surface lattice of Co₃O₄ as holes recombine with electrons trapped in singly ionized oxygen vacancies. The luminescence spectra of Co₃O₄ are therefore attributed to the presence of oxygen vacancies in the Co₃O₄ nanomaterials [24].

The magnetization curves of the prepared samples are shown in Figure 7. From the curves, it is seen that the prepared particles exhibit weak ferromagnetic behaviour, with the coercivity values shown in Table 1. In general, ferromagnetic ordering is obtained when excess surface spins are present in the nanomaterials, which may contribute to weak ferromagnetism in antiferromagnetic materials. In the present work, the prepared samples deliver a weak ferromagnetic behaviour; this may be ascribed to uncompensated surface spins [25] or to a partly inverted spinel structure, possibly caused by local electron hopping [26]. Further, the ferromagnetic behaviour of the materials relies on the defects, oxygen vacancies, and surface morphology of the nanomaterials. In addition, the fine size of the Co₃O₄ particles provides a large surface-to-volume ratio, which favours a magnetic moment from uncompensated surface Co ions. Hence, the observed ferromagnetic nature of the Co₃O₄ nanomaterials clearly shows the role of surface spins and surface morphology in the magnetic properties of Co₃O₄ nanomaterials.
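For reference, the reported absorption and emission wavelengths convert to photon energies via E(eV) = 1239.84/λ(nm); the short sketch below shows that the first absorption bands are consistent with the 2.2-2.5 eV band gaps quoted above.

```python
# Photon energies of the reported absorption/emission bands, E = 1239.84/lambda.
for nm in [473, 515, 488, 745, 762, 777]:
    print(f"{nm} nm -> {1239.84 / nm:.2f} eV")
# 473 nm -> 2.62 eV and 515 nm -> 2.41 eV bracket the 2.2-2.5 eV band gaps.
```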
3.5. Photoluminescence Spectra. The photoluminescence spectra of the Co3O4 nanomaterials are shown in Figure 6, where the peak intensity is observed to increase with the alkalinity of the samples (increasing pH). The emission peaks observed at 488.85 nm, 745 nm, and 488 nm are referred to as deep-level emission, as they reflect the radiative transition of donors to acceptors [21]. These results indicate that the radiative transition from donors (cobalt interstitials, oxygen vacancies) to acceptors (cobalt vacancies, oxygen interstitials) is activated differently depending on the pH, concentration, and other conditions of the mother solution. The emission band at 488 nm represents the interband transition between the occupied states at the Fermi level and the unoccupied conduction band; it is observed here because of differences in particle size [22] and surface morphology [23]. Radiative recombination occurs in the surface lattice of Co3O4 as holes recombine with electrons trapped in singly ionized oxygen vacancies. The luminescence of Co3O4 is therefore attributed to the presence of oxygen vacancies in the nanomaterials [24].

3.6. Magnetic Properties. The magnetization curves of the prepared samples are shown in Figure 7. From the curves, the prepared particles exhibit nearly paramagnetic behaviour; the corresponding coercivity values are listed in Table 1. In general, ferromagnetic ordering is obtained when excess surface spins are present in nanomaterials, which can contribute weak ferromagnetism to otherwise antiferromagnetic materials. In the present work, the prepared samples show weak ferromagnetic behaviour, which may be ascribed to uncompensated surface spins [25] or to a partly inverted spinel structure, possibly caused by local electron hopping [26]. Furthermore, the ferromagnetic behaviour of the materials depends on defects, oxygen vacancies, and the surface morphology of the nanomaterials. In addition, the fine size of the Co3O4 particles provides a large surface-to-volume ratio, which favours a magnetic moment arising from uncompensated surface Co ions. Hence, the observed ferromagnetic nature of the Co3O4 nanomaterials clearly shows the role of surface spins and surface morphology in their magnetic properties.

3.7. International Commission on Illumination (CIE) Chromaticity. The chromaticity coordinates x and y are calculated from the tristimulus values X, Y, and Z as x = X/(X + Y + Z) and y = Y/(X + Y + Z) [27]. Figure 8 shows the CIE 1931 colour coordinate diagram of Co3O4 for pH 7 to 9. The CIE coordinates (x, y) are found to be (0.196, 0.374), (0.589, 0.373), and (0.018, 0.47) for the materials synthesized at pH 7, 8, and 9, respectively; the calculated colour coordinates are listed in Table 2. The samples prepared at pH 7 and pH 9 appear in the green region, whereas the pH 8 sample appears in the red region, so the emission colour can be tuned through the synthesis pH. To achieve white emission, further optimization of the pH between 7 and 8 is required. Red oxide phosphor-based devices suffer from low colour purity [28]. The CIE coordinates (x, y) for Co3O4 prepared at pH 8 are (0.589, 0.373), compared with the National Television System Committee (NTSC) red of (0.67, 0.33); this phosphor would therefore need further tuning to improve its colour purity. As shown, Co3O4 phosphors prepared at pH 7 to 9 could be used as potential emission materials in visible-light display devices and light-emitting diode applications.

3.8. Electrochemical Behaviour of Co3O4. Figure 9 shows the cyclic voltammetry (CV) curves of the prepared cobalt oxide nanomaterials. The curves show a roughly rectangular voltammogram, which is an indication of notable charge propagation at the electrodes. The shape of the CV does not change with increasing scan rate; only the area of the loop increases, indicating good charge-discharge characteristics with an excellent reverse process [29]. This electrochemical study confirms the pseudocapacitive behaviour of the Co3O4 nanomaterials. The specific capacitance was calculated from the CV curves according to the relation

Cs = (1 / (m v (Vc - Va))) ∫ I dV,

where Cs is the specific capacitance (F/g), v is the scan rate (V/s), (Vc - Va) is the potential window, I is the current, and m is the mass (g) of active material; a short numerical sketch of this calculation is given below. Figure 10 shows the specific capacitance of the prepared materials. A maximum specific capacitance of 350 F/g was observed at 10 mV s-1 for the sample prepared at pH 8, while values of about 300 F/g and 275 F/g were found for the samples prepared at pH 9 and pH 7, respectively. The material prepared at pH 8 shows the highest specific capacitance because of its well-defined cubic shape [14]. Figure 11 shows the electrochemical impedance spectra of the prepared materials.
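As a minimal sketch of the CV-based capacitance relation above: integrating the current over the swept potential and dividing by the electrode mass, scan rate, and potential window gives Cs. The current trace, capacitance, and electrode mass below are invented placeholders standing in for a digitized branch of Figure 9.

```python
import numpy as np

def specific_capacitance(potential_V, current_A, mass_g, scan_rate_V_s):
    """Cs = (1 / (m * v * (Vc - Va))) * integral of |I| dV over one sweep branch.

    potential_V must be a single monotonic branch of the CV curve.
    """
    window = abs(potential_V[-1] - potential_V[0])
    # Trapezoidal integration of |I| over the potential sweep (units: A*V).
    integral = np.sum(0.5 * (np.abs(current_A[1:]) + np.abs(current_A[:-1]))
                      * np.abs(np.diff(potential_V)))
    return integral / (mass_g * scan_rate_V_s * window)

# Synthetic anodic branch: an ideal capacitor swept at 10 mV/s over the
# -0.2 to 1.0 V window used in the experiments; the 0.7 F capacitance and
# 2 mg electrode mass are invented so the result lands near 350 F/g.
v = 0.010                                  # scan rate, V/s
V = np.linspace(-0.2, 1.0, 200)            # potential window from the paper
I = np.full_like(V, 0.7 * v)               # ideal capacitor: I = C*v = 7 mA
print(f"Cs = {specific_capacitance(V, I, 0.002, v):.0f} F/g")  # -> 350 F/g
```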
4. Conclusions

Co3O4 nanomaterials were prepared by coprecipitation with the pH of the reaction solution maintained at 7, 8, and 9. The XRD patterns showed a cubic spinel structure, and the calculated crystallite size decreases from 14 to 10 nm as the pH changes from 7 to 9. The variation in morphology of the Co3O4 nanomaterials was investigated by SEM. The CV curves show a rectangular-type voltammogram, signifying good charge propagation at the electrodes. The optical band gaps of the Co3O4 nanomaterials were estimated as 2.5, 2.4, and 2.2 eV for pH 7, 8, and 9, respectively. The emission band at 488 nm is attributed to interband transitions, consistent with the size and morphology effects observed by XRD and SEM. The VSM studies show that the synthesized nanomaterials exhibit weak ferromagnetism within an otherwise antiferromagnetic matrix. This work concludes that tunable properties of Co3O4 nanomaterials, such as particle size and band gap, can be achieved under different pH conditions, and these preliminary studies indicate that the prepared Co3O4 nanomaterials are promising for photoluminescence applications.
Stylo-Jugular Venous Compression Syndrome: Lessons Based on a Case Report

Patient: Female, 36-year-old
Final Diagnosis: Stylo-jugular venous compression syndrome • left cerebral venous sinus thrombosis
Symptoms: Drug-resistant headache • memory disturbances
Medication: —
Clinical Procedure: —
Specialty: Radiology

Objective: Unusual clinical course

Background: Eagle syndrome is a vascular compression syndrome caused by a very elongated styloid process and/or calcification of the stylohyoid ligament compressing the vascular or nerve structures of the neck, resulting in vascular congestion, thrombosis, or neurological symptoms (eg, dysphagia, neck pain, ear pain). Stylo-jugular venous compression syndrome is a subtype of Eagle syndrome caused by compression of the internal jugular vein. Treatment varies according to the symptoms and the severity of the compression; it can be pharmacological or surgical, with vascular stenting and/or removal of the styloid process. We describe a rare case of left cerebral venous sinus thrombosis and ipsilateral internal jugular vein stenosis sustained by an excessively long left styloid process.

Case Report: A 36-year-old woman presented with recurrent episodes of drug-resistant headache and recent memory disturbances. She underwent cerebral and neck multidetector computed tomography-angiography and Doppler ultrasound of the epiaortic vessels, which respectively revealed thrombosis of the left cerebral venous sinus and left internal jugular vein stenosis due to a very long styloid process. The patient was treated with anticoagulant drugs and experienced gradual remission of symptoms.

Conclusions: Compression of the jugular vein by the styloid process is a rare entity, and it often goes undiagnosed when it is asymptomatic. Doppler ultrasound is a sensitive method for identifying jugular vein stenosis and can provide an estimated degree of stenosis, which is useful for treatment planning. Doppler ultrasound should be combined with multidetector computed tomography-angiography to rule out compression of other vascular structures and other causes of compression. Failure to treat these patients could have serious health consequences.

Background

Eagle syndrome [1] is a vascular compression syndrome caused by a very elongated styloid process (SP) and/or calcification of the stylohyoid ligament compressing arterial structures (carotid arteries) [2], venous structures (internal jugular vein) [3], or nerve structures [4]. Eagle syndrome is uncommon and has a higher incidence in women than in men, with a ratio of 3:1 [5]. The asymptomatic presence of 1 or 2 elongated SPs occurs in 4% of the population [6]. Eagle syndrome can be unilateral or bilateral, and typical symptoms occur in only 4% of patients with an elongated SP [7]. The symptoms most commonly associated with this pathology are recurrent episodes of pain in the face and throat, dysphagia, foreign body sensation in the pharynx, neck pain, and ear pain. The differential diagnosis should therefore include all other pathologies that can cause these symptoms. Discovery very often occurs during routine multidetector computed tomography (MDCT) or Doppler ultrasound (DUS) examination of the epiaortic vessels. A recent study suggested that SP elongation or ossification may be explained by developmental abnormalities and/or altered bone homeostasis [8].
Eagle syndrome has been classified into subtypes according to the structures that are compressed: the classic syndrome, stylo-carotid syndrome, and stylo-jugular venous compression syndrome (SJVCS). The classic syndrome involves the glossopharyngeal nerve but may also involve cranial nerves V-VII and X. The range of possible symptoms includes tinnitus, otalgia, pharyngeal pain, dysphagia, foreign body sensation, pain on extending the tongue, change in voice, and a sensation of hypersalivation, among others [9]. Stylo-carotid syndrome is caused by impingement of the internal carotid arteries and leads to transient ischemic attack and stroke; neurological symptoms such as hemiparesis and speech disturbance have also been reported [10]. SJVCS (Figure 1) is little known, and only a few reports mention it. The first study was published in 2012 [11], and a proposal was subsequently made to consider it an Eagle syndrome subtype and rename it Eagle-like jugular syndrome [12]. An elongated SP occurs in 4-28% of the population and is more common in women, with a mean age of 50±15 years [13]. Jayaraman et al [14] found that both the right and left sides can be affected, with involvement of the left side more common (right 24.1% vs left 30.6%). Compression occurs because the jugular vein wall, which is thinner than that of the artery and devoid of smooth muscle and elastic fibers, is more likely to be compressed by extrinsic structures in the upper neck, such as the transverse process of C1 and the SP. The SP induces jugular vein compression and causes venous flow congestion, which predisposes to thrombosis [15]. The J3 segment of the jugular vein has been found to be the most frequently involved [16]. SJVCS has received more attention in recent years, and several comorbidities have been found, including intracranial hypertension with a high-pressure gradient across the stenosis [17], thrombosis of the transverse sigmoid sinus, and perimesencephalic subarachnoid hemorrhage. Three-dimensional computed tomography (CT) is the best method for evaluating the anatomical relationship between the SP and surrounding structures, such as nerves and blood vessels, and for measuring the length and angle of the SP. CT also enables the exclusion of other causes of compression, and it provides details for the surgeon performing any extra- or intraoral excision. Therapy varies according to the symptoms and the vascular compression. It can be conservative, with steroid, anticoagulant, or anesthetic drugs [18], or surgical, with SP removal [19] or endovascular stenting, which has been shown to be an effective treatment [20]. In this study, we describe a case of hemodynamically significant unilateral compression of the left internal jugular vein (LIJV) caused by a very long and anteriorly angled SP that resulted in ipsilateral transverse cerebral venous sinus thrombosis.

Case Report

A 36-year-old woman presented at our hospital because of increasingly frequent episodes of drug-resistant headache and recurrent memory disturbances that had worsened over the previous 2 weeks. The patient reported no past episodes of deep vein thrombosis or anemia, no recent infections, and no kidney or pulmonary disorders. In laboratory tests, the D-dimer value was above normal (1500 mg/L). The patient underwent MDCT-angiography of the brain and epiaortic vessels, DUS of the epiaortic vessels, and conventional X-ray examination of the head. The DUS study was performed with a MyLab 9 XG device (Esaote) using a 5- to 15-MHz linear probe.
MDCT was performed with an Optima 64-slice device (GE Healthcare). Ultrasound scans were performed in the laterocervical region, and measurements were made at 3 levels of the jugular veins in a cranio-caudal direction: J3, below the jugular foramen at the passage of the internal jugular vein (IJV); J2, at the level of the thyroid gland; and J1, in the segment where the IJV joins the subclavian vein to form the brachiocephalic vein. DUS was used to assess the caliber of the jugular veins and carotid arteries and the peak systolic velocities (PSVs) of the common, internal, and external carotid arteries and of the right and left IJVs (Figures 2, 3); the measurements are reported in Table 1. On MDCT examination, the right SP had a length of 14.8 mm (Figure 4A). The left SP was longer than normal at 54.8 mm (Figure 4B) and was angled anteriorly, compressing the ipsilateral IJV against the transverse process of C1 (Figure 4C). CT acquisitions with contrast medium showed thrombosis of the left cerebral venous sinus, which could be seen in the reconstructions in the axial (Figure 4D) and coronal (Figure 4E) planes. The patient declined surgical resection of the SP and therefore received only conservative drug treatment with low-molecular-weight heparin (100 U/kg subcutaneously twice daily) for 2 weeks, followed by long-term oral anticoagulants (6 months). After 1 month, the patient had gradual and significant remission of the headache episodes and restoration of normal D-dimer values (<500 mg/L). We therefore advised the patient to continue the drug treatment and to undergo cerebral MDCT-angiography after 3 months.

Discussion

Given the clinical and laboratory data and the patient's clinical history, particularly the recurrent episodes of headache, memory loss, and high D-dimer values, we suspected a cerebral problem (eg, embolism, ischemia, venous thrombosis) and decided to perform MDCT-angiography. Subsequently, the findings from DUS and the severity of the stenosis (>70%) highlighted vascular hyperflow in the contralateral jugular vein, demonstrating circulatory compensation. The discovery of an elongated SP often happens by chance during routine CT or conventional radiographic examinations, as it does not always cause compression of the vascular or nerve structures. Moreover, symptoms can also be caused by an SP of regular length but with an anomalous tip deviation [15]. Among the various radiological diagnostic criteria described in the literature, most authors define an SP length greater than 2.5 cm as abnormal [13], while others suggest 4.0 cm [16]. The average length of the SP found in SJVCS is 3.5-3.7 cm, and the distance between the SP and the transverse process of C1 is 3.9 mm. Different imaging modalities can be used to study Eagle syndrome, including conventional head radiographs in lateral projection (Figure 5). However, radiographs do not always allow correct visualization and measurement of the SP because of the superimposition of structures. CT with contrast medium and 3-dimensional CT are the reference examinations, allowing correct measurement of the length and angulation of the SP and evaluation of its relationships with the vascular and laterocervical nerve structures. In our case, the severe stenosis of the LIJV and the consequent venous hypertension probably induced the ipsilateral cerebral venous thrombosis. The LIJV stenosis was revealed by DUS, with a significant reduction or absence of flow in the stenotic tract, predominantly J3, and an increase in the contralateral flow (>30%).
In the normal contralateral jugular vein, the maximum flow velocity was between 24 and 36 cm/s. Contralateral jugular vein dilation or a tortuous vertebral venous plexus can be considered an expression of compensatory flow and may be a sign of SJVCS, but this occurs in only 55.6% of severe stenoses. Important data emerged in our study from the flow ratio measurement of the LIJV: the ratio was 3.2, corresponding to a stenosis of 72%. These values indicate a very high risk of thrombosis, and the therapy of choice should be surgical removal of the SP. Endovascular therapy [17] could be a valid alternative but still requires more extensive case series and long-term data. In our case, the patient declined a styloidectomy, and we were not confident that there were sufficient guarantees for endovascular stenting, in accordance with previous reports [18]. Long-term anticoagulant drug therapy should be reserved only for symptomatic cases with a flow ratio of less than 2.5 (stenosis <50%).

Conclusions

IJV compression by the SP is a rare entity that can be asymptomatic; most of the time it goes undiagnosed and treatment is delayed. DUS is a sensitive method for identifying IJV stenosis and can provide an estimate of the degree of stenosis, which is very useful for treatment planning. DUS examination should be combined with MDCT-angiography of the neck to rule out compression of other vascular structures and other causes of compression. In cases with mild compression of the IJV and in patients who decline surgical treatment, we recommend follow-up with DUS at 6 or 12 months. Failure to treat these patients, as with other vascular compression syndromes [19-21], could have very serious consequences for their health.
Liver transplantation during global COVID-19 pandemic

The coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 has significantly impacted health care systems globally. Liver transplantation (LT) has faced an unequivocal challenge during this unprecedented time. This targeted review aims to cover most of the clinical issues, challenges, and concerns about LT during the COVID-19 pandemic and to discuss the most updated literature on this rapidly emerging subject.

INTRODUCTION

Coronavirus disease 2019 (COVID-19) has significantly impacted the health care system globally. The outbreak is known to have started in Wuhan, China, in 2019 and has now affected over 20 million people in the United States alone, with over 350,000 deaths [1]. The causative agent, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), produces a range of clinical manifestations, from asymptomatic infection to viral pneumonia, acute respiratory distress syndrome (ARDS), acute kidney injury, and an immune hypersensitivity response leading to cytokine storm and vasodilatory shock with multi-organ system failure and death [2-5].

Liver transplantation (LT) is the gold-standard treatment for patients with end-stage liver disease (ESLD). The cardinal challenge of any solid organ transplant is to successfully prevent rejection of the transplanted graft with effective and well-tolerated immunosuppressive therapy. Induction therapy is initially started using anti-T-lymphocyte antibodies, which can be either polyclonal or monoclonal. Maintenance therapy commonly includes glucocorticoids, calcineurin inhibitors, and anti-proliferative agents. The goal of immunomodulating therapy is to balance minimizing the risk of rejection on the one hand against the risk of infections and malignancies on the other [6]. For recipients on immunosuppressive medications, it is unclear whether these drugs reduce the risk of cytokine storm or lead to more severe events during COVID-19 infection.

CHALLENGES, FACTS, AND VIEWS

Liver transplantation is the only treatment option available for patients with acute fulminant hepatic failure or decompensated cirrhosis and, under certain select circumstances, for hepatocellular malignancies [7]. The liver is the second most commonly transplanted organ worldwide, after the kidney [8]. Unlike in end-stage kidney disease, where renal dialysis can sustain life, there is no effective medical technology to replace liver function, rendering LT a truly lifesaving procedure. Although various viral infections have been associated with acute liver failure [9], apart from the component of multi-organ system failure, COVID-19 infection has so far been associated only with elevations in liver enzymes and mild elevation of bilirubin in critically ill patients [10]. As a complex surgery requiring extensive pre- and post-procedural arrangements, monitoring, and follow-up, LT came under a significant stress burden when COVID-19 emerged. One of the major arguments against continuing liver transplant procedures was the 20.5% mortality rate in patients who underwent elective surgery during the incubation period of COVID-19 infection [11]. Adding to the challenge, being on immunosuppressive medications increases the recipient's risk of worsening comorbidities and potentially lethal infection [12].
Nonetheless, case series have shown favorable clinical outcomes in COVID-19 patients on immunosuppressive medications, possibly because of an abated cytokine release syndrome [13,14]. Given the uncertainty of COVID-19 outcomes in solid-organ recipients and the risk-benefit considerations of such a procedure, many transplant centers suspended all their transplant procedures at the beginning of the pandemic. Merola et al [15] reported that the successful completion of a liver transplant depends on multiple factors affected by the COVID-19 pandemic, such as donor evaluations, organ recovery, organ procurement organization availability, resources of donor hospitals, and acceptance of organ offers by the transplant centers for their candidates. Moreover, the burden of the anticipated resource utilization needs to be considered at the performing transplant centers, including the availability of ventilators, intensive care unit (ICU) beds, blood products, and adequate staffing.

To overcome these challenges, action plans were implemented. These included timely and reliable COVID-19 testing, limiting unnecessary travel, and promoting localized and central organ recoveries. The risk and resources for performing transplants at a particular center were closely assessed at all times in light of geographic constraints and local information regarding COVID-19 incidence in the hospital and the community at a given time [16].

As for deceased donor LT (DDLT), given the high mortality from COVID-19 infection, it has been questioned whether it is safe and appropriate to use the liver of patients who died of COVID-19. There is a theoretical risk of infection to the recipients during DDLT and, in the absence of a clear clinical consensus, the decision is left to the patients and individual transplant physicians in the United States [17]. On the other hand, some countries rely on living donor LT (LDLT) because of restrictions on deceased donor organ transplantation. This requires screening of the donors to reduce both the risk of transmission of COVID-19 and the risks to the donors themselves from complications of the surgical procedure. It is recommended that living donors with confirmed active disease wait until at least 28 days after the resolution of symptoms before transplantation [18]. Most recently, the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) recommended limiting outpatient and elective surgical procedures and using personal protective equipment to limit transmission during the first 2 waves of COVID-19 infection [7]. However, solid organ transplantation is considered an essential, urgent surgery; for example, the United States Centers for Medicare and Medicaid Services (CMS) designates it a tier 3b procedure that should continue when other elective surgical procedures are restricted [19]. Whether to halt or continue LT at the height of the epidemic, together with the challenges of DDLT and LDLT, posed many questions with unclear answers for every transplant center worldwide.

EFFECT OF COVID-19 PANDEMIC ON LIVER TRANSPLANT ACTIVITIES

The disruption of organ donation and recovery, especially for living donors, resulted in about a 25% reduction in LT between March and May 2020 in the United States [15]. Taking a closer look at the situation, up until the beginning of May 2020, the rate of new listings and of DDLT in states with the lowest COVID-19 burden remained stable.
On the other hand, in states with the highest incidence of COVID-19 infection, up to 34% fewer listings and DDLT surgeries were observed. This could partially be attributed to lower hospital capacity while accommodating an increased number of COVID-19 cases. Regarding Model for End-Stage Liver Disease (MELD) scores, there were 35.4% fewer DDLTs than expected for MELD 15-19 and 50.4% more DDLTs than expected for MELD 30-34. Nonetheless, LDLT, likely because of its elective nature, was 65% lower than expected in states with the highest burden. In states with the highest COVID-19 incidence early in the pandemic, a 59% increase in waitlist mortality was observed. By August 2020, the overall volume of transplant evaluations and listings had been restored to pre-pandemic averages. While areas affected early in the pandemic made significant changes to their transplant practice, areas newly affected later in the pandemic did not seem to be affected to the same extent. Several reasons can explain this decline. The American Association for the Study of Liver Diseases (AASLD) Expert Panel and the CDC endorsed safety precautions and the liberal use of telemedicine, enabling transplant centers to resume functioning. Moreover, elderly patients and those with comorbid conditions avoided clinic and laboratory visits to minimize interaction, thereby removing competing services in favor of resuming essential transplant evaluation and listing processes [20].

ETHICAL CONSIDERATIONS IN PATIENTS WITH COVID-19 AND LIVER TRANSPLANT

The rapid outbreak of COVID-19 led to a scarcity of resources and disrupted normal operations in many hospitals and transplant centers worldwide [21]. During a six-month hiatus in performing LTs at a transplant center in Hong Kong, lower adherence to follow-up and two deaths were reported [22]. Similar situations have occurred in healthcare facilities across the world, prompting efforts to restructure the management of at-risk patients using ethical guidance to deliver appropriate health care [21]. Additional challenges imposed by the COVID-19 pandemic included, beyond candidate prioritization and organ availability, distance for patients in hard-to-reach locations, issues with transportation to transplant centers, and disproportionate disease burden in given geographical areas, all of which rendered standard protocols inapplicable [21]. These factors need to be taken into consideration during decision-making [21]. Beneficence, non-maleficence, justice, and autonomy are the fundamental ethical pillars that should guide decision-making to ensure that all patients are suitably considered for transplantation [21]. From an ethical perspective, there should be a balance between beneficence and non-maleficence when evaluating candidates, such that for a high-risk patient with a high MELD score, acute liver failure, or status 1A, beneficence is favored over minimized harm [21]. This is because an early liver transplant could confer the highest survival benefit in high-risk patients, although there is also the risk of exposure to SARS-CoV-2 [21]. However, in low-risk patients, in whom transplantation is not imperative, exposure to SARS-CoV-2 could, on the other hand, be more harmful [21].
Additionally, as per the principle of distributive justice, critical resources such as personal protective equipment (PPE) kits, hospital beds, and health care worker manpower should not be strained or diverted, especially during the COVID-19 pandemic [21]. A tiered approach to transplant surgery during the COVID-19 pandemic has been proposed, describing the degree of reduction in transplantation that could be made based on the available resources, guided by ethical principles in decision making [23]. According to this tiered approach, transplant activity can range from 0%, a state in which a health care system is wholly burdened and unable to provide surgeries, to 100% availability [23]. In Tier 1 (0% capacity), given the complete lack of resources, high-risk patients should be considered for transfer to alternative centers in case of emergency [23]. In the subsequent phases in which patients can be considered for transplantation: in phase 1 (Tier 2, 25% capacity), owing to a severe reduction in resources, surgery should be prioritized for emergent cases only, i.e., immediately life-threatening conditions such as acute liver failure or a MELD score > 30, or patients unlikely to survive without intervention [23]. In phase 2 (Tier 3, 50% capacity), owing to a moderate reduction in resources, surgery should be prioritized based on urgency, such as conditions that are not immediately life-threatening, patients who cannot be managed in outpatient settings (acute liver failure or MELD score > 25), or those unlikely to survive the duration of the pandemic without intervention [23]. In phase 3 (Tier 4, 75% capacity), with a mild reduction in resources, elective cases can be considered, such as patients in non-life-threatening condition, those who can be managed as outpatients with medical therapy, or those likely to remain stable for the duration of the pandemic [23]. With this guidance, the recommended stepwise evaluation can help healthcare centers evaluate their capacity to deliver safe and effective surgical care and determine the level of surgical triage that can be accommodated [21]. The principle of distributive justice can be used to determine the best approach to allocating the available resources, especially for specific demographics, such as the elderly or lower socioeconomic groups [21].

INCIDENCE OF LIVER INJURY WITH COVID-19

In patients with COVID-19, the incidence of liver injury (defined by an alanine transaminase (ALT) and/or aspartate aminotransferase (AST) level higher than threefold the upper limit of normal, or gamma-glutamyl transferase (GGT) or total bilirubin higher than twofold the upper limit of the normal reference range) is significantly higher in those with gastrointestinal symptoms, such as nausea, vomiting, or diarrhea, than in those without gastrointestinal symptoms (17.57% vs 8.84%, P = 0.035) [24]. Chronic liver disease was found to increase ALT and AST levels, worsening liver injury from 14.8% to 53% in patients with COVID-19 [25]. Liver injury was found to be more severe around the second week of the course of COVID-19 and is suspected to be related to SARS-CoV-2 infection of regenerated liver cells derived from the bile duct, as indicated by high angiotensin-converting enzyme 2 (ACE2) expression in these cells [26]. Interestingly, the degree of liver injury was seen to vary with the severity of COVID-19 symptoms [27].
In patients with severe symptoms of COVID-19, the incidence of liver injury was significantly higher than in patients with mild symptoms (36.2% vs 9.6%, P < 0.001) [26,28-31]. It is strongly suspected that liver injury may follow the off-label administration of multiple drugs to treat COVID-19, such as lopinavir and ritonavir (18.6%) [30]. In another report, the liver function of patients with COVID-19 who were admitted to the ICU differed significantly from that of patients outside the ICU [32]. According to two reports of total COVID-19-related deaths, the incidence of liver injury was 58.06% and 78%, respectively [4,33]. In statistical analyses identifying risk factors for liver injury in patients with COVID-19, the occurrence of liver injury was related only to critical illness in multiple logistic regression, while patients who were administered several types of drugs were more likely to experience liver function injury (P = 0.002 and P = 0.031, respectively) [26]. Thus, critical illness due to COVID-19 was established as an independent risk factor for liver injury [26]. Surprisingly, the incidence of liver injury in patients with COVID-19 was also found to vary with geographical area and patient age [27]. One retrospective observational study reported that liver dysfunction was more frequent in Wuhan than in Jiangsu province [34]. This is possibly because early detection and treatment of patients with COVID-19 in Jiangsu province prevented liver injury, which typically worsens as SARS-CoV-2 infection increases in severity [27]. In reports on pediatric patients, thrombocytopenia accompanied by abnormal liver function was observed in two neonates born to mothers with COVID-19-related pneumonia [35]. In a single-center observational study, abnormal liver function was observed in four of eight (50%) severe or critically ill pediatric patients with COVID-19 [36].

MANIFESTATIONS AND OUTCOME OF COVID-19 IN LIVER TRANSPLANT PATIENTS

The typical presentation of COVID-19 includes cough, fever, myalgias, and headache. Indicators of more severe disease include dyspnea and oxygen requirement; less common symptoms are diarrhea, sore throat, nausea, vomiting, and anosmia. Besides the usual symptoms described above, COVID-19 has been associated with rare cases of conjunctivitis, skin rash, venous and arterial thrombosis, encephalitis, Guillain-Barre syndrome, myocarditis, and pericarditis [37]. In liver transplant recipients, clinical signs and symptoms have shown some slight variations. In some studies, liver transplant recipients showed a lower incidence of fever, likely due to concomitant immunosuppressive therapy, while other studies found fever to be the most common finding. Other symptoms, such as cough and dyspnea, were reported to be similar in incidence to the general population [38]. Diarrhea is another variation, found to be more prevalent in solid organ transplant recipients than in the general population [38]. A literature review found fever to be the most common symptom, in 90% of patients, followed by cough (36%), shortness of breath (31.8%), and diarrhea (31.8%) [39]. Another multi-center cohort study found that gastrointestinal symptoms such as diarrhea were more common in liver transplant patients than in the general population, although respiratory symptoms seem similar [40].
The mechanism is thought to be the increased expression of ACE2 receptors in the intestine and liver [41]. In a report of 12 living donor liver transplant recipients, 25% (3 patients) acquired SARS-CoV-2 early, within three months, while 75% (9 patients) acquired it later, within 18 months post-transplant. The majority developed mild COVID-19, except for 2 cases: one with acute renal injury, and one with severe COVID-19 complicated by cytokine storm and death. The latter patient was 82 months post-transplant and suffered from multiple comorbidities (diabetes mellitus, hypertension, and chronic rejection) [42]. Another study, which included 38 liver transplant recipients, revealed that all those who died (7 patients) and 92% of those hospitalized had at least one comorbidity [43]. An international study compared outcomes between 151 adult liver transplant recipients and 627 consecutive non-transplant patients from 18 countries with confirmed SARS-CoV-2 infection who presented for medical care during the same period. The percentage of patients who needed hospitalization was similar in the liver transplant and non-transplant groups (82% vs 76%, P = 0.106). Admission to the ICU and the need for mechanical ventilation were significantly higher in the transplant group, at 43 patients (28%) and 30 patients (20%), compared with 52 (8%) and 32 (5%) in the non-transplant group (P < 0.0001). Mortality, however, was lower in the transplant group (19% vs 27%, P = 0.0046) [40]. A study in Spain that included 111 liver transplant recipients reported a lower mortality rate than in the matched general population [44]. In another multi-center study, the strongest predictors of death were diabetes and acute liver injury; 72.3% of the included liver transplant recipients were hospitalized, 26.8% required ICU-level care, and 22% died [45]. The overall mortality rate of organ transplant recipients was 20%, and all except two transplant recipients had severe SARS-CoV-2 infection [46]. Comorbidities were observed in 38 cases (58% of total mortality), including hypertension in 58% of patients, diabetes mellitus in 29%, obesity or malignancy in 13%, ischemic heart disease in 11%, chronic obstructive pulmonary disease or hepatitis B in 5%, and bronchial asthma, hepatitis C virus disease, chronic kidney disease, or HIV in 3% [46]. However, 14% of patients had no apparent comorbidities. In this cohort of recipients, ARDS was the most frequent cause of death. Interestingly, hospital resource availability was not seen to affect the cause of death of organ transplant recipients [46]. All deceased transplant recipients had either reduced or stopped immunosuppressive therapy; however, no recipient had graft rejection, albeit no postmortem examinations were reported [46]. In this review, the deaths of 65 recipients were attributed to complications related to COVID-19 [46]. In an American cohort of 90 patients, all adult solid organ transplant recipients from Columbia University Irving Medical Center and Weill Cornell Medicine with a positive test for SARS-CoV-2 in an inpatient or outpatient setting were reviewed; of these, 46 (51%) were kidney recipients, 17 (18.8%) lung, 13 (14%) liver, 9 (10%) heart, and 5 (5.5%) dual-organ transplant recipients [38]. Sixteen organ transplant recipients were reported to die of complications of COVID-19 [18% (16/90) overall, 24% (16/68) of all inpatients, and 52% (12/23) of ICU patients] [38].
Four of the patients who died had preferred not to be admitted to the ICU or intubated; however, the age, clinical characteristics, and cause of death of these patients were not reported [38]. In another cohort of 18 solid organ transplant recipients with COVID-19 [8 (44%) kidney, 6 (33%) liver, and 3 (22%) heart] at a tertiary-care center in Madrid, the median age of transplant recipients was 71.0 ± 12.8 years, with a median transplantation duration of 9.3 years, and an overall case fatality rate of 28% was observed [47]. However, none of the patients discontinued immunosuppressive therapy [47]. In a case series of 5700 patients hospitalized with COVID-19 in the New York City area, 553 died; among those requiring mechanical ventilation (n = 1151, 20.2%), 282 (24.5%) died [48]. Interestingly, for male and female patients under 20 years, mortality was 0% (0/20); however, at every 10-year age interval over 20 years, mortality rates were higher for male than for female patients [48]. Mortality rates were 76.4% and 97.2%, respectively, for patients who received mechanical ventilation in the 18-to-65 and older-than-65 age groups [48]. For those in the 18-to-65 and older-than-65 age groups who did not receive mechanical ventilation, the mortality rates were 19.8% and 26.6%, respectively [48].

ADVICE FOR PREVENTION AND SURGICAL CONSIDERATIONS IN COVID-19 AND LIVER TRANSPLANT

As per the recommendations of the Beijing Working Party for LT, liver transplant recipients with fever or respiratory symptoms must promptly inform transplant centers and avoid unscheduled visits to limit exposure [50]. Surgical considerations when operating on a liver transplant patient with COVID-19 include the use of PPE and extensive hand hygiene [7]. Limiting aerosol-generating procedures such as suction, endotracheal intubation, colorectal surgery, colonoscopy, and advanced endoscopy is also recommended to prevent disease transmission [7]. Before surgery, a detailed history should be taken, along with repeated physical examination, temperature measurement, and chest imaging [7]. Non-emergency procedures should be either delayed or canceled [7]. Like the general population, recipients were instructed to stay home for safety, avoid in-person visits and public places, wash hands frequently, use telework options, always wear masks, call their transplant team if they developed a fever or any respiratory symptoms, and arrange a telemedicine visit if possible. Patients were instructed to avoid areas with high COVID-19 prevalence, and international travel was discouraged. Laboratory tests were ordered only if indicated, not for routine follow-up, and refills were supplied to avoid hospital visits. All patient education, waitlist status, social work, and dietary and financial issues were to be resolved via videoconferencing to avoid gatherings and decrease the psychological load [20].

Society guidelines and recommendations

Many society guidelines were also issued to address COVID-19 in transplant recipients. The European Association for the Study of the Liver (EASL) and the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) issued a joint guideline for patients with liver disease. In the transplant section, they emphasized reducing direct exposure and increasing outpatient care while promoting telemedicine services with local laboratory testing, and they recommended against decreasing immunosuppression [51].
Six months into the pandemic, another position paper was released with additional recommendations, including screening donors with RT-PCR, close monitoring of drug levels, and early admission of transplant recipients with COVID-19 [52]. The American Association for the Study of Liver Diseases (AASLD) also issued a clinical guideline based on expert panel consensus during the pandemic, with similar but more detailed recommendations, including advice about staying home, ensuring the availability of refills, and inpatient care advice on medication and airway management [20]. Both EASL and AASLD recommended COVID-19 vaccines for patients with liver disease, but they were cautious regarding liver transplant recipients given the scarce data, as the initial studies excluded transplant recipients and there was a theoretical risk of immune-mediated rejection with the newer vaccines [52,53].

MANAGEMENT OF LIVER TRANSPLANT RECIPIENTS WITH COVID-19

Immunosuppressed patients are among the groups at risk of complications of COVID-19. On the other hand, immunosuppression might confer protection from the inflammatory response responsible for tissue injury [13]. According to clinical insights from the AASLD, there is no need to reduce or stop immunosuppression in asymptomatic COVID-19-infected liver transplant recipients [54]. The WHO and the CDC strongly recommend that glucocorticoids (i.e., dexamethasone, hydrocortisone, or prednisone) be given orally or intravenously for the treatment of patients with severe and critical COVID-19, based on evidence of mortality reductions of 8.7% and 6.7% in these cases [55,56], as well as to avoid adrenal insufficiency [57]. Decreasing the dose of immunosuppressive drugs should be considered only in the presence of critical illness or complications such as drug-induced lymphopenia or bacterial or fungal superinfection in severe or rapidly progressive COVID-19. Depending on the patient's clinical status, antimetabolite drugs should be minimized or discontinued in the setting of worsening COVID-19 infection [58-60]. It is well known that early antiviral treatment ameliorates the course of influenza; therefore, it can be assumed that early initiation of antiviral therapy may help prevent COVID-19 pneumonia in high-risk groups [51]. In liver transplant recipients, caution should be taken to avoid bacterial or fungal superinfection or reactivation of latent tuberculosis, HBV, and HSV [59]. Previous evidence from other viruses and SARS indicated that the immunosuppression and comorbidities accompanying solid organ transplantation might lead to severe clinical manifestations [61]. A study in Hong Kong included 29 liver transplant recipients during the SARS outbreak in 2003, of whom only four were treated for suspected SARS infection [22]. The authors concluded that immunosuppression in liver transplant recipients may prolong the period of viral shedding but is not associated with increased mortality [22]. To date, there is no approved prophylactic drug against COVID-19 in liver transplant recipients [62]. The American Society of Transplantation recommends that all transplant patients and their households get vaccinated. In general, the management of SARS-CoV-2-infected transplant recipients should follow a case-by-case approach, given that no specific treatment regimens have yet been agreed upon.
Essential considerations in managing COVID-19 in liver transplant recipients are drug-drug interactions and modifications of immunosuppression [51].

CURRENT IMMUNOSUPPRESSION REGIMEN AND IMPLICATIONS ON PHARMACOTHERAPY

Immunosuppressive medications used for transplant patients, such as tacrolimus, cyclosporine, and mycophenolate, are known to have relatively narrow therapeutic indexes and highly variable pharmacokinetic parameters [63,64]. Medications used for COVID-19 infection have numerous potential drug-drug interactions with the immunosuppressive therapy used in transplant patients. Tacrolimus and cyclosporine are substrates of cytochrome P450 (CYP) 3A4 and the P-glycoprotein efflux pump. In addition, cyclosporine weakly inhibits CYP2C9, CYP3A4, and OATP1B1/1B3, as well as P-glycoprotein. Mycophenolate is a substrate of specific organic anion-transporting polypeptides (OATP) and UDP-glucuronosyltransferase enzymes. Drug-drug interactions are summarized in Table 1.

Antivirals

Remdesivir and favipiravir are novel antiviral agents that inhibit SARS-CoV-2 RNA-dependent RNA polymerase (RdRp), an essential enzyme required for viral replication [65,66]. In vitro, remdesivir is a substrate of CYP3A4 and the drug transporters OATP1B1 and P-glycoprotein, and an inhibitor of CYP3A4, OATP1B1, OATP1B3, and multidrug and toxin extrusion protein 1 (MATE1) [67]. As of January 2021, remdesivir is the only FDA-approved antiviral therapy for COVID-19 patients requiring hospitalization and oxygen therapy [68]. Although no clinical interactions are expected between immunosuppressive agents and remdesivir, careful monitoring of drug concentrations and liver enzymes is recommended because of the lack of literature evaluating the concomitant use of these agents [64]. Remdesivir should be discontinued if ALT surges to ≥10 times the upper limit of normal [59]. Lopinavir/ritonavir is a protease-inhibitor combination that inhibits cytochrome P450 3A and was used for COVID-19 infection early in the pandemic. Because tacrolimus is metabolized by cytochrome P450 3A, its serum level is markedly elevated when a protease inhibitor is given. A kidney transplant recipient on tacrolimus and prednisone was reported to have an elevated serum tacrolimus level because of this drug-drug interaction when treated with lopinavir and ritonavir [49]; the level returned to the normal therapeutic range a few days after the protease inhibitors were switched to favipiravir. Lopinavir/ritonavir should not be coadministered with sirolimus or everolimus, and close monitoring of immunosuppressive drug levels is mandatory [65]. It is therefore essential to monitor immunosuppressant serum levels daily (especially calcineurin inhibitors such as cyclosporine and tacrolimus) in transplant patients with COVID-19 and to hold the immunosuppressant when a drug-drug interaction is suspected.

Ivermectin

Ivermectin is an antiparasitic drug with an antiviral effect against SARS-CoV-2 in vitro, with controversial efficacy in clinical trials [69]. Ivermectin has significant interactions with immunosuppressive drugs [60,65], further compromising off-label considerations.

Tocilizumab

SARS-CoV-2 infection is thought to induce bronchial epithelial cell release of interleukin (IL)-6, a pleiotropic, pro-inflammatory cytokine produced by various cell types [73]. Tocilizumab is a recombinant humanized anti-IL-6 receptor monoclonal antibody approved for rheumatologic disorders and cytokine release syndrome [4].
In vitro studies in hepatocytes have shown that tocilizumab blocks the IL-6-mediated downregulation of CYP450, mainly CYP3A4 and, to a lesser extent, CYP2C19 [74]. Blocking this downregulation leads to increased CYP450 activity and decreases the bioavailability of medications metabolized through that pathway. However, the clinical significance is unclear, as the downregulation of IL-6 activity was demonstrated in vitro at very high concentrations [75]. Because of the long elimination half-life of approximately 13 days in adult patients, it is prudent to closely monitor transplant patients for a prolonged period after tocilizumab administration [76]. Insufficient data are available to recommend other immunomodulators such as IL-1 inhibitors or interferon beta, while the role of tocilizumab in severe COVID-19 remains debatable [55].

Mammalian target of rapamycin inhibitors

By inhibiting the effect of IL-37 and IL-38 in the inflammatory state, mammalian target of rapamycin (mTOR) inhibitors are considered to have a potential anti-COVID-19 effect [90]. The association between obesity and inferior outcomes in COVID-19 patients has also been described [91]. Targeting the mTOR pathway carries potential for obesity treatment and thus could theoretically decrease the risk of severe COVID-19 infection [91,92].

Other adjunctive therapies

Various adjunctive therapies have been used to manage COVID-19, such as anticoagulation with heparin and vitamin supplementation with ascorbic acid, zinc, and thiamine. There are no known clinically significant drug interactions between the immunosuppressive therapy used in transplant patients and adjunctive treatments for COVID-19. Although it has no proven benefit in COVID-19 patients but is still used in this cohort, chloroquine therapy may result in up to a 3-fold increase in cyclosporine A levels [93,94]; this interaction is not seen with tacrolimus [95]. Currently, insufficient data exist to recommend the use of convalescent plasma for COVID-19 patients except in clinical trials [60]. It appears to be effective only if given early in the course of the disease and if it contains a high titer of immunoglobulins [55].

PEDIATRIC LIVER TRANSPLANT

Compared with adults, the pediatric population has milder disease and rarely requires hospitalization. Data from the largest pediatric LT center in Lombardy, Italy, showed that pediatric liver transplant patients were little affected, suggested that immunosuppression might be protective, and recommended continuing transplant programs [96]. A survey of healthcare professionals across the European Reference Network on pediatric transplantation (ERN TransplantChild) showed 12 of 18 transplant centers reducing their usual activity, modifying outpatient visits, and incorporating telemedicine tools. The cases reported in the survey did not include any severe cases in pediatric liver transplant recipients [97]. Another survey, by the European Liver and Intestine Transplantation Association (ELITA) and the European Liver Transplant Registry (ELTR), showed that during the height of the pandemic, 1% of liver transplant centers selected only children and 1% selected only high-urgency children [98]. An Iranian pediatric study that included 40 newly transplanted liver recipients during the height of the pandemic found no children affected by COVID-19 and reached the same conclusion as prior studies, supporting the continuation of regular transplant programs [99].
A multi-center United States study showed similar results, with all pediatric transplant recipients (including 10 liver transplant recipients) showing mild to moderate presentations; this study concluded that the course of COVID-19 in this population may mirror that of immunocompetent children [100]. A Japanese group reported a different finding in 20 pediatric liver transplant procedures performed in the COVID-19 era, with an increased incidence of intraoperative portal vein thrombosis, although the difference was not statistically significant [101]. Similar results were reported in India, where one center observed no COVID-19 in its cohort of 7 recipients during the pandemic, which was attributed to the strict protocol adopted by the hospital [102].

CONCLUSION

In sum, mortality in patients with COVID-19 and LT was variable across countries. Mortality was higher in elderly transplant recipients with comorbidities. Notably, there appears to be a higher incidence of mortality in solid organ transplant recipients than in the general population. Immunosuppressive drugs in this cohort should be carefully tailored on a case-by-case basis.
Multifaceted Sentiment Detection System (MSDS) to Avoid Dropout in Virtual Learning Environment using Multi-class Classifiers

I. INTRODUCTION

The education system is crucially dependent on students' academic progress. The tremendous volume of data in educational databases has made it increasingly difficult to predict student performance. Low-performing students may encounter several challenges, such as delayed graduation and dropping out. To quickly assist students who are performing poorly, educational institutions should regularly monitor the academic development of their students. One way to accomplish this is to predict students' academic achievement. The proposed method is based on a hybrid approach to sentiment analysis aimed at achieving quality education [1].

With the explosive growth of the internet, digital technologies, and IT infrastructure, cloud-based online learning is growing at a rapid pace. Cloud computing technologies facilitate virtual platforms and assist students along favorable paths despite the barriers. The platform offers countless benefits, namely reduced installation cost, high storage capacity, virtualization, security, and easy access. T. Zarra et al. state that academic institutions suffer from general budget cuts and growing student numbers, which strongly motivates the implementation of cloud-based e-learning platforms. Cloud communities help members of educational institutions work together. That research proposes a web service that uses sentiment analysis to analyze the exchange of information between learners from various universities connected through the cloud [2].

The key contribution of this research is to analyze and deal with the huge amount of online data generated during virtual learning courses in order to predict dropouts with the help of sentiment analysis of learners' comments. SA is receiving much attention in higher education institutions (HEI) for predicting student dropout. Applying SA in the context of virtual learning provides better insight into learners' attitudes and emotional reactions towards entities, events, and attributes, and into sentiment-based predictors of dropout, as it reveals specific patterns in learners' behavior that can be of practical importance for course designers and instructors. The learning process in virtual courses differs dramatically from face-to-face courses, as the courses are taken exclusively online; hence, in the proposed system, it is appropriate to detect students' text sentiments, composed of multifaceted characteristics, to help identify their risk of college dropout.

One formidable task is the systematic surveying of students' opinions and perceptions, as there is a huge volume of opinionated text, and sentiment detection on real-world data is full of challenges. It is difficult for humans to extract sentences with sentiments, read, review, and classify them into usable formats. Automated sentiment discovery and summarization systems are thus required. SA classifies text as positive, negative, or neutral; it employs automatic text classification for opinion mining and for finding the wide array of sentiments expressed by learners about their personal attitudes and their evaluation of the provided services, which yields high and low sentiment scores across their multidimensional characteristics. Thus, this research aims to investigate sentiment analysis based on learners' comments from a VLE to avoid dropouts. The remainder of the paper is organized as follows.
Section II discusses the background of SA and related research. Section III explains the research methodology used in this work. Section IV describes the MSDS architecture in detail. Section V presents the experimental outcomes. Section VI concludes and discusses the future scope of the research.

II. LITERATURE REVIEW

MOOC-based teaching is being incorporated into traditional curricula and educational practices. Sentiment analysis approaches can be used for course content evaluation, delivering extensive cues to course designers and educators for evaluating courses periodically and introducing probable enhancements [3]. The SASys (Sentiment Analysis System) framework, built on a lexical approach and a polarized frame network, has been proposed; its primary objective is the early identification of students at risk of dropout through emotional state detection. Text sentiment is essential for defining a learner's motivational profile, which depends on their activities in the VLE. Students' class engagement can be identified by analyzing their frequency of access and interactions [4]. Students' motivation to learn is greatly influenced by emotional elements, and in online education, emotions can be inferred from textual discussion in their responses. Natural Language Processing methods can be used for the emotional classification of students by extracting information from WhatsApp and organizing it into corresponding categories; an RNN algorithm increases accuracy by 75% for students' emotion analysis in online learning [5]. Emotion mining and SA have been performed on Arabic tweets regarding online education during the pandemic. The results disclose that the proposed method efficiently identifies people's opinions on online learning in the context of the pandemic using SVM, with a top accuracy of 89.6%. In the emotion analysis, anger was the top emotion, and the most significant reasons behind negative sentiments were the lack of face-to-face interaction, network breakdowns, ambiguity, and games [6]. Academic issues, such as a drop in grades, fear of failing, and challenging competition, are among the factors that cause stress or depression in students. Most students are young, and when they encounter difficulties, they occasionally lack wisdom and may cause harm to themselves. Hence, advice and support from professionals, family members, experts, and others are crucial for preventing adolescents from engaging in risky behavior. Sentiment analysis using the Naive Bayes algorithm can classify students as experiencing stress or depression [7]. In recent years, owing to the generation of voluminous data, technologies have been developed for storing and processing data effortlessly. A huge amount of data can be obtained, mined, and applied in sentiment analysis. This helps in making better policies for the education sector, and the target users are teachers, students, and educational organizations. Education sectors can incorporate sentiment analysis extensively in every aspect of the teaching-learning approach [8]. Learners are more interested in "Engineering and Technology" but hold negative attitudes towards "Life Science and Medicine"; this research infers an urgent need to explore cross-domain SA systems that accelerate the application of SA in multiple learning domains. SA is used for enhancing the learning process; additionally, it provides valuable information for educational institutions. Students receive the results of SA through visualization tools such as word clouds, dashboards, and virtual agents [9].
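Several of the approaches surveyed above score learner comments with a sentiment lexicon (eg, the lexical approach behind SASys). As a minimal sketch of that idea, the snippet below scores sample comments with NLTK's VADER analyzer; the comments themselves are invented for illustration and are not data from the cited studies.

```python
# A minimal lexicon-based scoring sketch in the spirit of the lexical
# approaches above, using NLTK's VADER sentiment analyzer.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Invented learner comments for illustration only.
comments = [
    "The recorded lectures are great, I can review them anytime!",
    "I feel lost, the network keeps breaking and nobody answers my questions.",
]
for text in comments:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05 else "neutral")
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```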
Technological advancements such as Blockchain, IoT, Cloud Computing, and Big Data have broadened the applications of SA, permitting it to be utilized in any discipline [10]. Moreover, within Machine Learning and Natural Language Processing, SA has become a hot trend and is being adopted extensively across the globe [11]. The primary objective of applying SA approaches to students' feedback in an online learning system is to identify learners' emotions, feelings, and participation, and to evaluate educators' performance [12]. Emotions are inseparable from student learning and achievement. In one study, responses were collected from students to discover their emotional experience around a test and an online quiz; students experienced stronger positive emotions and weaker negative emotions in online quizzes than in tests, and future research must integrate the complex relation between the cognitive, emotional, and motivational aspects of learning [13]. Sentiment analysis accurately portrays students' learning circumstances in the online learning community. To improve the quality of teaching practice, a proposed model can recognize students' sentiment tendencies: a Sentiment Score Matrix (SSM) is formed to compute sentiment scores, which can efficiently identify learners' sentiment tendencies and enhance the quality of information services in educational practice [14].

HEI are increasingly looking for the best ways to understand the learning experience of their students, and sentiment analysis helps investigate students' emotions and attitudes regarding their course experience. In one study, text was fed into Google's cloud-based Natural Language Processing service for sentiment analysis; the results showed that students' sentiment in online interaction during two online courses was more positive than in face-to-face courses [15]. During the COVID-19 pandemic, lecture recordings were useful resources for supporting remote and distance learning. Students with illness, learning disabilities, or work commitments reported that the availability of lecture recordings created an inclusive educational setting. Sentiment analysis was conducted using the Microsoft Azure Cognitive Services Text Analytics API, and machine learning was employed on a large text dataset labeled with sentiment values of 0 and 1; the findings showed that lecture recordings serve as an additional resource for preparing notes or exams [16]. Therefore, to solve real-world problems through the design and development of smart learning environments, the Azure Cognitive Services Text Analytics API, which leverages natural language processing capabilities, can be used to deploy high-quality AI models [17].

Sentiment analysis is a crucial area of text mining, as the thoughts of many individuals are analyzed and compiled into a single dataset. E-learning is an educational effort to deliver knowledge through computers, and SA helps users easily classify their emotional input. Students' anti-course feelings can be tracked and serve as feedback for online learning sites [18]. In the teaching-learning process, virtual learning environments (VLE) deliver a set of communication and interaction tools used by learners and educators. Researchers presented the SentiEduc framework, which uses a Multi-Agent System (MAS) to gather and analyze the opinions in texts posted by learners in a VLE; the SenticNet tool was used to analyze sentiments automatically.
Educators with tutoring experience used the framework with real data to verify its efficiency; the resulting accuracy was 73.88% [19]. In another study, text reviews collected from an organization on 270 training programmes by 2,688 participants were analyzed. The RapidMiner Text Mining package was used for tokenization, stop-word removal, stemming, and token filtering; the authors suggested that, beyond content delivery and faculty expertise, the approach could be expanded to capture the sentiment expressed over several aspects such as internet connection and hospitality [20]. Researchers have also developed a web-application system that uses text analytics and SA to provide educators with a deeper analysis of learners' feedback on the courses they have taught, thereby enhancing the student learning experience. Feedback was grouped into positive, negative, and neutral, with the results showing a larger number of neutral sentiments; the system implementation was successful and significantly benefits students, lecturers, and administrators [21].

Twitter is a popular free social networking channel. In the Anadolu University open and distance education system, sentiment analysis of learners was performed on fetched tweets; 400 tweets were used for validation, and the classification outcomes were presented. Institution managers can then concentrate on the negative feelings and student complaints [22]. Numerous educational institutions use online media for learning, where each piece of media, be it audio, video, or text, can receive learners' feedback. Lecturers intend to recognize the emotions learners experience when they access the media, namely happiness, unhappiness, or disappointment, and to gauge their satisfaction. One study developed a mobile application for detecting emotions from the comment columns of online media; the application's emotion-detection accuracy was 70% [23]. Humans are easily prone to errors when interpreting text-based emotions. Four supervised ML classification techniques, namely MNB, SVM, DT, and KNN, were applied to analyze basic emotions; the best performance was achieved by the Multinomial Naive Bayes classifier, with an average accuracy of 64.08% [24]. A lot of data is being produced in the form of tweets, blogs, and status updates about topics of interest, with people expressing their thoughts and ideas on a variety of subjects, including products, movies, politics, education, and news. Analyzing such data is useful for comprehending the observations, sentiments, and attitudes of society, and decision-making would further benefit from such analysis. Naive Bayes, RF, a tailored RF, and an enhanced XGBoost were employed, with the enhanced XGBoost achieving the best accuracy of 72.26% [25].

A meticulous analysis of the existing research reveals its limitations: prior work does not consider hidden structural features, namely internet connection, mixed emotional elements, and unstructured data. To address these limitations, existing methods can be operationalized and extended by considering specific factors, namely device efficiency, cognitive behavior, and technical familiarity with cloud platform usage. Moreover, the literature infers that there is an urgent need to investigate cross-domain SA systems that accelerate the application of SA in multiple learning domains, and it suggests that future researchers must integrate the relationships between the cognitive, emotional, and motivational aspects of learning.
In this context, the present research bridges the gap in students' sentiment detection by giving a new definition of the multifaceted characteristics of the concept, to gain a deeper understanding of how sentiments indicate to educators whether a student is motivated or discouraged with virtual learning and intends to drop out. Thus, in this research, sentiments are explored as a process that truly reflects students' learning circumstances by considering the necessary features in VLE across multiple disciplines. Hence, in the higher education context, sentiments are linked to the cognitive, emotional, psychological, and learning factors in students' behavior. The model is explored with four multi-class classification algorithms to perform sentiment analysis on text and key phrases. After experimentation, the proposed MSDS system outperforms other methodologies in each case. The best mean accuracy rates were achieved by Logistic Regression for device efficiency with 98.49%, and by Linear SVC for cognitive behavior and technological expertise with 93.58% and 92.08%, respectively.

III. RESEARCH METHODOLOGY

The main objective of this research is to detect students' intention of dropping out of virtual learning courses by considering their text sentiments. There are several methods for dropout detection; they are based on ML techniques and require students' activity records for training and for creating predictive models from the features extracted from raw data. In this research, four machine learning algorithms are implemented and their performance metrics are evaluated. The following research objectives are considered:

1) To develop a Multifaceted Sentiment Detection System (MSDS) architecture for predicting dropout students in VLE.
2) To establish evaluation steps for measuring the proposed architecture's efficiency, in terms of mean accuracy rate and standard deviation, using four multi-class classification algorithms, namely RF, Linear SVC, MNB, and LR.
3) To investigate the ML algorithm best suited to classifying students' sentiments based on their multifaceted characteristics, and to visualize the results for educators as an early intervention.

IV. MULTIFACETED SENTIMENT DETECTION SYSTEM (MSDS) ARCHITECTURE

The MSDS framework detects students' sentiments from information gathered through VLE interactions. The architecture applies a sentiment analysis approach to characterize sentiments across multifaceted subjects, namely device efficiency, cognitive behavior, and technological expertise with cloud platform usage. The research flow begins with the extraction of comments posted by learners of the VLE. The system reads the data stored in .csv (Comma Separated Values) format. Next, data pre-processing techniques such as the removal of punctuation marks and stop words are applied to the texts, and feature extraction methods such as the count vectorizer and Term Frequency-Inverse Document Frequency (TF-IDF) are employed. To identify the real opinions of learners, sentiment classification is then performed using machine learning approaches, and the flow ends with the detection of students' sentiments in three polarities: positive, negative, and neutral. The model is assessed with various evaluation metrics, namely accuracy, recall, precision, and F1-score. Fig. 1 depicts the proposed MSDS model, and a minimal code sketch of this flow follows.
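To make the flow concrete, the following minimal sketch chains the same stages with scikit-learn's Pipeline. The file name comments.csv and its comment/sentiment columns are hypothetical placeholders, not the authors' actual dataset layout, and the classifier choice here is only illustrative.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical input file: one row per learner comment,
# with sentiment coded 0 = negative, 1 = neutral, 2 = positive.
df = pd.read_csv("comments.csv")  # columns: "comment", "sentiment"

X_train, X_test, y_train, y_test = train_test_split(
    df["comment"], df["sentiment"], test_size=0.25, random_state=42)

# Vectorization and classification chained as a single estimator
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print("Held-out accuracy: %.4f" % pipeline.score(X_test, y_test))
```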
A. Data Collection from Participants

SA can be integrated into a virtual learning environment to enable real-time analysis of learners' feedback. Real data were collected from students belonging to various disciplines of numerous HEI throughout India. The data were gathered by educators through an online questionnaire broadly organized into several factors, namely demographic features, device usage characteristics, self-efficacy (estimated on a 5-point Likert scale), and familiarity with cloud platforms (on a 4-point scale), in order to identify sentiments based on the students' interactions with the VLE. In total, the dataset contains n = 1,590 comments, opinions, and items of feedback posted by students of virtual learning classes. According to their multifaceted characteristics, the data are grouped into three datasets, each containing 530 comments, for conducting the experiments. The data instances are divided into multiple classes, and the sentiments are labelled with scores in the range 0-2, representing negative, neutral, and positive.

B. Data Pre-processing

Text pre-processing is the process of cleaning and preparing text data so that machines can use the processed text to perform tasks such as analysis and prediction. These procedures help reduce the volume of data and the processing time. The comments posted by students are in natural English and must be converted into a machine-readable format. Text data poses many challenges, as it contains noisy, semi-structured or unstructured content, punctuation, numbers, special characters, spelling mistakes, etc., which must be processed with NLP techniques. The following pre-processing steps were carried out to improve prediction performance.

1) Removing punctuation and numbers, converting all characters to lowercase: The basic pre-processing step is the removal of punctuation from textual data, which helps treat each text uniformly. As numbers do not hold vital information in this text, they are removed, and the input text is then converted to a single casing format, namely lowercase.

2) Tokenization: This is a method of breaking a sentence into meaningful words and phrases, achieved using delimiters such as white space and punctuation. The built-in NLTK (Natural Language Toolkit) libraries provide a tokenization function for dividing text into words.

3) Lemmatization: Lemmatization is a text pre-processing technique in NLP models that reduces a word to its root meaning in order to identify similarities; this text normalization brings words down to a base form. A WordNetLemmatizer() instance is assigned to a variable and used to improve the algorithms' performance and to focus on the meaning of the words.

4) Stop-word removal: The next step is to eliminate stop words, which are commonly used words that are not useful for analysis and are generally removed from text, for example I, am, you, she, he, the, a, an, so, and what. These words are not required for sentiment classification, as they increase the dataset size and are irrelevant to the result set; removing them improves model accuracy and reduces computation and data complexity.

5) N-gram: In NLP, an n-gram is a contiguous series of n items created from a given text sample, where the items can be characters or words. An n-gram language model predicts the probability of any sequence of n words in the language; n-grams are considered here for feature processing.
6) Part-of-Speech (POS) tagging: This process labels each word in the text with a particular POS depending on its context and definition; it reads the text and assigns a POS token to every word. nltk.pos_tag(tokenized_text) is applied, and the parts of speech Adjective, Verb, Noun, and Adverb are considered.

C. Feature Extraction Techniques

In text classification, feature extraction plays a dominant role in reducing the feature space and increasing the classifier's accuracy. This process converts data into features usable by an ML model, as ML algorithms operate on numbers. The textual data is converted into vector form using two feature extraction techniques, namely the count vectorizer and the Term Frequency-Inverse Document Frequency (TF-IDF) vectorizer. Fig. 2 shows the cleaned text obtained after data pre-processing and feature extraction on the cognitive behavior data.

1) Count vectorizer: CountVectorizer transforms a given text into a vector based on the frequency (count) of each word occurring in the entire text. This is beneficial when there are multiple such texts and each word in each text must be converted into a vector, making it an extremely adaptable feature description module for text.

2) TF-IDF score: This is a weighting measure that quantifies the relevance of string representations, namely words, phrases, and lemmas, in a given document. Term Frequency (TF) describes how frequently a term occurs in a document relative to the total number of words in the document, as in (1):

TF(t, d) = (number of occurrences of t in d) / (total number of terms in d)   (1)

IDF measures the weight of the selected term across the document collection, as in (2), where N is the total number of documents and n_t is the number of documents containing term t:

IDF(t) = log(N / n_t)   (2)

A combined sketch of these pre-processing and vectorization steps is given below.
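A minimal version of the pre-processing and vectorization steps of subsections B and C, using NLTK and scikit-learn as described in the text, could look as follows. The sample sentences are invented for illustration, and the exact cleaning rules of MSDS may differ.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# One-time NLTK resources: tokenizer models, stop words, WordNet, POS tagger
for pkg in ["punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"]:
    nltk.download(pkg, quiet=True)

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    text = re.sub(r"[^a-z\s]", " ", text.lower())      # lowercase, drop punctuation/digits
    tokens = nltk.word_tokenize(text)                  # tokenization
    return [lemmatizer.lemmatize(t) for t in tokens    # lemmatization plus
            if t not in stop_words]                    # stop-word removal

tokens = preprocess("The online classes were engaging and the platform worked well!")
print(nltk.pos_tag(tokens))  # POS tags (nouns, verbs, adjectives, adverbs, ...)

# Vectorization: raw counts and TF-IDF weights over unigrams and bigrams
docs = ["lectures were clear", "network kept dropping during lectures"]
counts = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)
tfidf = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(docs)
print(counts.shape, tfidf.shape)
```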
D. Machine Learning Techniques for Sentiment Analysis

In the current situation, feedback is usually provided through grading methods. Although grading masks students' genuine feelings, a textual response gives them a chance to emphasize qualities. In one related study, three ML algorithms, SVM, MNB, and RF, were implemented, and the experimental outcomes suggest that MNB, with 80% accuracy, performs better than the other classifiers [26]. Another study applies SA to self-evaluation comments, a form of unstructured data that provides valuable information on students' learning status over the course duration for identifying at-risk students. SVM and Convolutional Neural Networks (CNN) were applied to predict student performance, yielding effectiveness, measured by F-measure, of 0.66 (SVM) and 0.78 (CNN), with the CNN thus performing best. The experimental results demonstrated that applying sentiment analysis to unstructured data can significantly improve the accuracy of early-stage predictions [27]. In the present research, NLP techniques are employed to pre-process and vectorize the text data, and the vectorized data is then used to train the following ML models for sentiment analysis.

1) Random forest: SA has been used to analyze unstructured text data and extract the positive or negative sentiments contained in student advisors' notes in order to predict college student dropout with an RF model; the authors note that their study is the first to apply NLP techniques to dropout prediction, and the RF classifier achieved 73% accuracy compared with SVM, LR, and CART [28]. M. A. Fauzi states that, with the growth of social media and online website reviews, SA is an efficient approach to text classification; experimental results confirmed that RF gives excellent performance, with an average OOB score of 0.829 [29]. To increase the predictive power of Random Forest, this research uses the hyperparameters n_estimators=100 and max_depth=5. n_estimators is the number of trees the algorithm creates before taking the majority vote; a higher number of trees enhances performance and makes predictions more stable. The hyperparameter max_depth is the maximum depth of each decision tree in the forest.

2) Linear Support Vector Classifier (SVC): The authors of [30] propose an opinion analysis system for Amazon reviews, obtained from the UCI website, that identifies comments as either positive or negative; applied to the review dataset, the approach obtained an accuracy of 91% with Linear SVC, outperforming Naive Bayes and voting. Other authors report that Linear SVC takes less execution time, about 0.0972 s, to test samples and produces better output than classifiers such as Naive Bayes, Logistic Regression, and Decision Tree [31]. The Linear SVC method is a faster implementation of Support Vector Classification that applies a linear kernel function to perform classification; it performs well on NLP-based text classification tasks with a large number of samples. Linear SVC in the scikit-learn library does not provide a predict_proba function; instead, decision_function is used, which predicts confidence scores for samples as the signed distance of each sample from the hyperplane.

3) Multinomial Naive Bayes (MNB): This is a type of Naive Bayes (NB) classifier that finds the probabilities of classes assigned to texts using the joint probabilities of words and classes, and it is often used as a baseline for sentiment analysis. MNB achieves significant results for text categorization, with 90% accuracy reported; it is a fast, easy-to-implement, and modern text categorization algorithm [32]. Social networking has developed into a useful tool for gathering vital information about individuals; user comments can be extracted via an API and fed to an algorithm that detects whether they are positive or negative, and the results obtained show that Multinomial Naive Bayes performs well, with a classification accuracy of 85%, compared with SVM, Random Forest, and Decision Tree [33]. MNB is specifically beneficial for problems involving text data with discrete features, namely word frequency counts. It works on the principle of Bayes' theorem and assumes that the features are conditionally independent given the class variable. The computation is performed by adding logarithms of probabilities, as in (3), and the class with the highest log-probability score is the most likely:

c* = argmax_c [ log P(c) + Σ_i log P(w_i | c) ]   (3)

where P(c) is the prior probability of class c and P(w_i | c) is the conditional probability of word w_i given class c.

4) Logistic regression: Determining the polarity of reviews is useful in a variety of situations, and NLP techniques can be applied to analyze the reviews and optimize strategic decision making. In one study, TF-IDF was used for feature selection and Logistic Regression for classification; LR with grid search classified the text accurately at 94% [34]. Reviews and feedback are deciding factors in understanding users' opinions, and SA is an information-extraction method for improving the work through review analysis: the users' review text was cleaned with a count vectorizer and TF-IDF, and sentiment prediction for unrated reviews was performed with various classifiers, with Logistic Regression reaching 93% accuracy, higher than the NB, MNB, and Bernoulli classifiers [35]. Logistic Regression is a simple classification algorithm that can be generalized to multiple classes. LR uses the sigmoid activation function, given by (4):

σ(z) = 1 / (1 + e^(-z))   (4)

which maps any real-valued input z to a value between 0 and 1 that can be interpreted as a probability.
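A sketch of how the four classifiers could be instantiated with the hyperparameters stated above is given below; all other settings are library defaults, which is an assumption on our part, and X_train_tfidf and y_train stand for the TF-IDF matrix and labels produced in the earlier sketches.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

models = {
    # RF hyperparameters are the ones stated in the text above
    "RF": RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0),
    # Linear-kernel SVM; exposes decision_function rather than predict_proba
    "Linear SVC": LinearSVC(),
    # Suited to discrete features such as word frequency counts
    "MNB": MultinomialNB(),
    # Generalizes to the three polarity classes via the sigmoid/softmax link
    "LR": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train_tfidf, y_train)         # TF-IDF features from earlier
    acc = model.score(X_test_tfidf, y_test)   # simple held-out accuracy
    print("%-10s accuracy: %.4f" % (name, acc))
```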
These popular classifiers do not all directly support multi-class classification problems, but there are heuristic methods that split a multi-class classification dataset into several binary classification datasets. A binary classifier is then trained on each binary problem, and predictions are made using the model that is most confident. To implement this strategy for multi-class classification, the OneVsRestClassifier method is used.

V. EXPERIMENTAL RESULTS

In this research, the proposed machine learning techniques are implemented with the efficient NLTK and Scikit-learn libraries. NLTK is used for regular-expression patterns and tokenization to parse the text, lemmatization, stop-word removal, n-grams, and POS tagging; vectorization and classification are accomplished with Scikit-learn. The data was pre-processed, vectorized with TF-IDF, and classified with the four multi-class machine learning algorithms. The dataset was split into a 75% training set and a 25% test set, and 5-fold cross-validation was used for training the classification models. RF, Linear SVC, MNB, and LR were applied, as they are among the most popular machine learning classifiers used to analyze students' opinions. Depending on the classification task, different metrics were used to measure the classifiers' performance; the evaluation procedure is sketched below.

A. Multi-class Classification: Positive/Neutral/Negative

The performance of the MSDS framework is evaluated using various metrics. Accuracy, one of the most popular metrics for multi-class classification, is the ratio of the number of correctly classified instances to the total number of instances. Macro-average precision calculates precision for each class individually and then averages the values, whereas weighted-average precision calculates precision per class but weights it by the number of samples of each class in the data. The macro-average recall score computes the arithmetic mean of the recall scores of the different classes, and weighted-average recall computes recall per class weighted by the number of samples of each class. Similarly, the macro-average F1 score is the arithmetic mean of all per-class F1 scores, and the weighted-average F1 score averages the per-class F1 scores taking each class's support into account.
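The evaluation procedure described here, one-vs-rest wrapping, a 75/25 split, 5-fold cross-validation, and macro/weighted metrics, might be coded as follows; the choice of Linear SVC as the base estimator is illustrative, and X_tfidf and y denote the vectorized comments and their labels.

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

ovr = OneVsRestClassifier(LinearSVC())  # one binary classifier per polarity

# 5-fold cross-validation: per-fold accuracy, then mean and standard deviation
scores = cross_val_score(ovr, X_tfidf, y, cv=5, scoring="accuracy")
print("Mean accuracy: %.2f%% (std %.4f)" % (100 * scores.mean(), scores.std()))

# 75/25 split and per-class plus macro/weighted precision, recall, and F1
X_tr, X_te, y_tr, y_te = train_test_split(X_tfidf, y, test_size=0.25,
                                          random_state=42)
ovr.fit(X_tr, y_tr)
print(classification_report(y_te, ovr.predict(X_te),
                            target_names=["negative", "neutral", "positive"]))
```

classification_report prints the per-class scores together with the macro-average and weighted-average rows defined above, so it covers all the metrics reported in Tables I, III, and V.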
1) Device efficiency: The first perspective analyzes students' device usage characteristics, as the VLE increases the portability of the learning process through smart devices. Investigating the new educational opportunities that result from expanded device access to the VLE enables users to establish more fleeting links to the virtual campus, providing instructional procedures built on a model that consumes significantly less time and space. The parameters used for configuring each of the implemented algorithms are the type of smart device, the mode of device availability, device connectivity, and the number of hours the device is connected online. An optimized smart-device environment will provide services to the educational community regardless of their functional and cognitive characteristics. N. A. S. Remali and co-authors state that it is a challenging task for educational institutions to identify learners' opinions and difficulties during online education, as some students may have poor internet connections or lack bandwidth, and the environment may prevent learners from concentrating in class; thus, SA is utilized to assess students' opinions (positive, neutral, and negative) of virtual learning [36].

Considering the device efficiency features, Table I shows the evaluation metrics of the four multi-class classifiers, with the LR model showing a high accuracy rate of 98%. Cross-validation with cv=5 was performed to cross-validate the baseline models with the TF-IDF and count-vectorizer feature extractors; the mean accuracy and standard deviation for each fold, which validate the models' performance, are reported in Table II. The highest mean accuracy, 98.49%, is achieved by Logistic Regression. Fig. 3 shows the mean accuracy with 5-fold cross-validation, the sentiment distribution results are visualized in Fig. 4, and Fig. 5 displays the classification metrics of the device efficiency text characteristics.

2) Cognitive behavior: The second perspective analyzes students' psychological cognitive-behavioral model through the self-efficacy theory stated by the psychologist Albert Bandura. The parameters used for configuring each of the implemented algorithms are manageability, finding means and ways, sticking to aims and accomplishments, handling unforeseen situations, investing effort, finding several solutions, and handling whatever situations arise in online learning. In addition, the researchers in [37] found that the predictive power and feature generalization of a cognitive skill score estimate the likelihood of learners' success or failure in a higher education course, enabling suitable interventions to facilitate learners; the cognitive skill score proves efficient in identifying students' performance when exact metrics correlated with learning activities and students' social behavior are unavailable. Table III shows the evaluation metrics of the four multi-class classifiers for self-efficacy, with Linear SVC showing a high accuracy rate of 93%. With cv=5, the mean accuracy and standard deviation for each fold, which validate the models' performance, are given in Table IV; the mean accuracy of Linear SVC, 93.58%, is higher than those of RF, MNB, and LR. Fig. 6 presents the mean accuracy with cv=5 for the cognitive-behavioral features, the sentiment analysis outcomes are visualized in Fig. 7, and the classification metrics of the self-efficacy text characteristics are displayed in Fig. 8.

3) Technological expertise with cloud learning platform usage: Outside the classroom, cloud platforms prepare students to make reasonable study schedules, thereby promoting self-directed learning, innovation, collaboration, and ease of access. The parameters used emphasize the platforms learners are familiar with for accessing online teaching materials, namely Google Classroom, Google Meet, Zoom, Facebook Classroom, Twitter, etc. Moreover, J. Zhang in [38] suggested that educators could talk with learners through the cloud class even after the online class and could set time-limited online tests to enhance results; with students' feedback, course assistants can track their problems in online teaching, and by using the bullet screen, educators can communicate useful course content to learners in time, deepening the emotional communication between them. Table V illustrates the evaluation metrics of the four classification algorithms for technological expertise, with Linear SVC showing a high accuracy rate of 92%. Table VI reports the mean accuracy and standard deviation for each fold to validate the models' performance; the highest mean accuracy score, 92.08%, belongs to Linear SVC. Fig. 9 visualizes the mean accuracy with cv=5 for technological familiarity with cloud platforms, the sentiment analysis results are presented in Fig. 10, and Fig. 11 shows the classification metrics of the technological expertise text characteristics.
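Per-facet figures of this kind can be regenerated from a fitted model. The sketch below, which assumes the ovr model and the test matrices from the previous sketches, produces a confusion matrix and a predicted-sentiment distribution analogous to the classification-metrics and sentiment-distribution figures cited above.

```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import ConfusionMatrixDisplay

labels = ["negative", "neutral", "positive"]

# Per-class behaviour on the held-out comments (cf. Figs. 5, 8, 11)
ConfusionMatrixDisplay.from_estimator(ovr, X_te, y_te, display_labels=labels)
plt.title("Held-out classification behaviour")
plt.show()

# Distribution of predicted sentiment polarities (cf. Figs. 4, 7, 10)
pred = pd.Series(ovr.predict(X_te)).map(dict(enumerate(labels)))
pred.value_counts().reindex(labels).plot(kind="bar")
plt.ylabel("number of comments")
plt.show()
```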
B. Performance Evaluation Using ROC and AUC

To analyze the performance of the MSDS architecture, the ROC curve can be used, as it measures the classifier's predictive quality. The trade-off between a classifier's sensitivity and specificity can be visualized with a ROC-AUC (Receiver Operating Characteristic / Area Under the Curve) plot. The ROC curve plots the true positive rate on the Y axis against the false positive rate on the X axis. The ideal point is the top-left corner of the plot, where false positives are 0 and true positives are 1; this leads to another metric, the AUC, where a higher AUC represents a better model. However, it is also vital to examine the "steepness" of the curve, as it illustrates how well the true positive rate is maximized while the false positive rate is minimized. The ROC curve is used extensively in this research to describe diagnostic accuracy and to find the best cut-off value for a model trained with the multi-class machine learning techniques. ROC curves for cognitive behavior, through the self-efficacy characteristics, using RF, Linear SVC, MNB, and LR are visualized in Fig. 12, Fig. 13, Fig. 14, and Fig. 15, respectively, as this is the main facet of the framework emphasizing learners' psychological cognitive-behavioral patterns for risk prediction. The performance comparison of the ROC curves shows that the Linear SVC model performs best on the self-efficacy data.

C. Data Visualization

Sentiment analysis encompasses a variety of SA tasks, including subjectivity detection and emotion analysis. This reflects the wide range of user tasks and data domains found in SA research and applications, from social media and news monitoring to theoretical linguistic research and NLP, and it implies the use of numerous visual channels and interpretations. The visual representations of polarity data include word clouds [39]. A word cloud is a powerful textual data visualization technique that enables quick identification of the words most often used within a particular body of text. Word clouds are frequently employed as communication tools for processing and analyzing qualitative sentiment data. When educators need to visualize learners' opinions, a word cloud can be used to summarize the messages posted by them; this visual representation gives educators a clear way to interpret the messages easily. Fig. 16 presents the word cloud visualization of the multifaceted factors.
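A hedged sketch of both visualizations, per-class ROC curves from decision_function scores and a word cloud over the raw comments, is shown below; it presumes the ovr classifier and the df frame defined earlier and is not the authors' plotting code. The wordcloud package is a common third-party choice here, an assumption rather than something the paper names.

```python
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
from wordcloud import WordCloud

# One ROC curve per polarity, from one-vs-rest decision_function scores
y_te_bin = label_binarize(y_te, classes=[0, 1, 2])
scores = ovr.decision_function(X_te)
for i, label in enumerate(["negative", "neutral", "positive"]):
    fpr, tpr, _ = roc_curve(y_te_bin[:, i], scores[:, i])
    plt.plot(fpr, tpr, label="%s (AUC = %.2f)" % (label, auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], "k--")  # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()

# Word cloud of the most frequent terms in learners' comments
cloud = WordCloud(width=800, height=400,
                  background_color="white").generate(" ".join(df["comment"]))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```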
VI. CONCLUSION

The Multifaceted Sentiment Detection System (MSDS) is proposed in this research to predict student dropout using multi-class classification algorithms. To address this goal, the research analyzes higher-education students' comments and reviews posted while attending virtual classes through the VLE, which were classified and analyzed using machine learning algorithms and word clouds. The results obtained from the research explore the multideterminant characteristic features of respondents' device efficiency, psychological cognitive behavior, and technological knowledge of cloud platform usage. This finding mainly contributes to improving the classifiers' ability to predict college student dropout through text classification. The proposed method obtained the highest mean accuracy for device efficiency using Logistic Regression with 98.49%, and with Linear SVC for cognitive behavior with 93.58% and for technological expertise with 92.08%. Thus, this novel assessment mechanism, MSDS, aims to provide efficient sentiment classification through multi-class machine learning algorithms in order to avoid dropouts. Furthermore, the experimental outcomes demonstrate that the proposed system obtains better accuracy results than previous methods. For future work, the algorithms can be validated against other computational methods, such as deep learning algorithms, for superior performance.
2023-05-04T15:03:44.569Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "cbf52dfd8a520135624735bbdbc19af92473b70e", "oa_license": null, "oa_url": "https://doi.org/10.14569/ijacsa.2023.0140440", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "860259c69816720606f9b327c0a8af04b4942975", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [] }
237494472
pes2o/s2orc
v3-fos-license
Generalized Modules for Membrane Antigens as Carrier for Polysaccharides: Impact of Sugar Length, Density, and Attachment Site on the Immune Response Elicited in Animal Models

Nanoparticle systems are being explored for the display of carbohydrate antigens, characterized by multimeric presentation of glycan epitopes and the special chemico-physical properties of nano-sized particles. Among them, outer membrane vesicles (OMVs) are receiving great attention, combining antigen presentation with the immunopotentiator effect of the Toll-like receptor agonists naturally present on these systems. In this context, we are testing Generalized Modules for Membrane Antigens (GMMA), OMVs naturally released from Gram-negative bacteria mutated to increase blebbing, as carrier for polysaccharides. Here, we investigated the impact of saccharide length, density, and attachment site on the immune response elicited by GMMA in animal models, using a variety of structurally diverse polysaccharides from different pathogens (i.e., Neisseria meningitidis serogroups A and C, Haemophilus influenzae type b, the Group A Streptococcus carbohydrate, and Salmonella Typhi Vi). The anti-polysaccharide immune response was not affected by the number of saccharides per GMMA particle; however, a lower saccharide loading can better preserve the immunogenicity of GMMA as antigen. In contrast, saccharide length needs to be optimized for each specific antigen. Interestingly, GMMA conjugates induced a strong functional immune response even when the polysaccharides were linked to sugars on GMMA. We also verified that GMMA conjugates elicit a T-dependent humoral immune response to polysaccharides that is strictly dependent on the nature of the polysaccharide. The results obtained are important for designing novel glycoconjugate vaccines using GMMA as carrier and support the development of multicomponent glycoconjugate vaccines in which GMMA can play the dual role of carrier and antigen. In addition, this work provides significant insights into the mechanism of action of glycoconjugates.

INTRODUCTION

During the last years, nanoparticle systems have received increased interest for the display of carbohydrate antigens. The special physicochemical properties of nano-sized particles and the presentation of multiple saccharide epitopes support the development of novel and more effective glycoconjugate vaccines (1-5). Among nanoparticles, outer membrane vesicles (OMVs) combine antigen presentation with intrinsic adjuvant properties (5-7). Traditionally, the outer membrane protein complex (OMPC) from Neisseria meningitidis has been used as carrier for a Haemophilus influenzae type b conjugate vaccine (8, 9). OMPC has been shown to possess TLR2-mediated adjuvant activity (10) and may contain TLR4 agonists such as lipopolysaccharides (LPS), since it derives from the outer membrane of Gram-negative bacteria. More recently, Escherichia coli OMVs have been used as carriers for the display of heterologous polysaccharides (PS), resulting in glycoengineered OMVs (glyOMVs) (11). The Streptococcus pneumoniae CPS14 capsule, for example, displayed on engineered E. coli OMVs, induced IgG levels and efficacy in opsonophagocytic activity tests comparable with those induced by PCV13 (12).
Generalized Modules for Membrane Antigens (GMMA), OMVs naturally released from Gram-negative bacteria genetically manipulated to increase blebbing and to modulate toxicity through modification of the lipid A portion of LPS (13, 14), have recently been proposed as delivery systems for the O-antigen chains naturally present on their surface (15-19). O-antigens displayed on non-typhoidal Salmonella GMMA have been shown to induce high levels of anti-O-antigen-specific IgG antibodies, comparable with the corresponding CRM197 conjugates formulated on alum (20). However, GMMA enhanced the IgG antibody isotype profile, resulting in greater serum bactericidal activity than traditional protein conjugates. More recently, we have proposed GMMA as carrier for heterologous PS through chemical conjugation, and we have shown that GMMA glycoconjugates promote equal or enhanced saccharide immunogenicity compared with more traditional glycoconjugates based on the CRM197 carrier protein (21). It is well known that parameters such as saccharide length and density, conjugation chemistry, and attachment site can impact the immune response induced by glycoconjugate vaccines (22). The impact of such variables on the immune response elicited by OMV-based vaccines has not been greatly explored so far. Here, we used different conjugation strategies to assess the impact of saccharide length, density, and attachment site, to proteins or to LPS and lipooligosaccharide (LOS) molecules on the GMMA surface, on the immune response in animal models. Saccharides from different pathogens, having different structures, were used as models and conjugated to GMMA from different pathogens.

Synthesis and Characterization of the Generalized Modules for Membrane Antigens Conjugates

Conjugates were synthesized as described below. The main characteristics of all the conjugates tested in this study are reported in Table 1.

Conjugation via Adipic Acid Bis(N-hydroxysuccinimide) Chemistry

MenA, MenC, or Hib oligosaccharides, terminally activated with adipic acid bis(N-hydroxysuccinimide) (SIDEA) as previously described (29), were added to a suspension of GMMA in 50 mM NaPi, pH 7.2. The mixture was stirred overnight at room temperature. Different conjugation conditions were used according to the PS linked and the GMMA used, as detailed in Table 1. Conjugates were purified by ultracentrifugation (110,000 rpm, 4°C, 1 h) and recovered in phosphate-buffered saline (PBS). A Thermo Scientific Sorvall MX 150+ Micro-Ultracentrifuge equipped with a Thermo Scientific S110-AT rotor (K factor = 15) was used, with 4-ml PC thick-walled tubes (Thermo Scientific Cat. No. 45239) filled with 2 ml of solution.

Conjugation Through Reductive Amination Chemistry

GMMA oxidation: MenB GMMA at a concentration of 8.0 mg/ml in 100 mM NaPi, pH 6, were oxidized in the presence of 5 mM NaIO4 for 30 min in the dark at a controlled temperature of 25°C. The excess periodate was quenched with 10 mM Na2SO3 for 15 min at room temperature before direct addition of the PS. S. Typhimurium GMMA at 5 mg/ml in 100 mM sodium acetate, pH 5, were oxidized in the presence of 10 mM NaIO4 for 2 h in the dark at a controlled temperature of 25°C. GMMA were purified by ultracentrifugation (110,000 rpm, 4°C, 30 min), and the oxidized GMMA were resuspended in 100 mM NaPi, pH 7.2.
GMMA were characterized by micro BCA (>80% recovery), dynamic light scattering (confirming no aggregation), and high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) (13% oxidation for S. Typhimurium GMMA and 41% for MenB GMMA).

PS activation: the polysaccharides were mixed with ADH and NaBH3CN at a 1:1.2:1.2 w/w ratio. The solutions were mixed at 30°C for 5 days. The derivatized PS were purified by chromatography on two PD10 columns equilibrated with 3 M NaCl and then water. HPAEC-PAD was used for saccharide quantification, while the TNBS colorimetric method was used to check the derivatization degree (100% for Vi, 56% for GAC, and >80% for MenA and Hib) (30). Free ADH was estimated by reversed-phase ultra-performance liquid chromatography (RP-UPLC) (<10% free NH2) (31).

Conjugations: oxidized GMMA were added to the activated PS in the presence of NaBH3CN. The reaction conditions used for each conjugate are detailed in Table 1. The reaction was incubated overnight and purified by ultracentrifugation (110,000 rpm, 4°C, 1 h). The purified conjugate was resuspended in PBS.

Conjugation via Bissulfosuccinimidyl Suberate Chemistry

S. Typhimurium GMMA, at a protein concentration of 4.0 mg/ml in MES buffer, pH 6, were added to the BS3 linker at a final concentration of 50 mg/ml in the reaction mixture. The mixture was incubated at 25°C for 30 min; the activated GMMA were then purified by ultracentrifugation (110,000 rpm, 16 min, 4°C). The resulting GMMA (70% recovery by micro BCA) had 43.8% of their NH2 groups derivatized with the BS3 linker according to the TNBS colorimetric method (30). After GMMA-BS3 ultracentrifugation, Vi-ADH was immediately added. In the conjugation step, a 1:10 w/w ratio of GMMA to Vi-ADH was used, at a Vi-ADH concentration of 100 mg/ml in 50 mM NaPi, pH 7. After overnight incubation at room temperature (RT), the conjugate was purified by ultracentrifugation (110,000 rpm, 1 h, 4°C) and recovered in PBS.

Conjugate Characterization

Conjugates were characterized by micro BCA for total protein recovery (21), while the amount of linked saccharide antigen was determined by HPAEC-PAD after acid hydrolysis performed directly on GMMA, as previously described (24, 32-36). It was verified that there was no interference from GMMA in the quantification of each saccharide: a known amount of the conjugated PS was physically mixed with GMMA, and analysis by HPAEC-PAD gave results comparable with those obtained by testing the same amount of the PS alone. For the MenA, MenC, Hib, and GAC saccharides, conjugate formation was also confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE)/Western blotting, as previously described (21). For the MenA, MenC, and GAC conjugates, polyclonal sera generated internally in mice were used as primary antibodies, while for the Hib conjugate a commercial antibody (Bacto Hib DIFCO 2236-50-1) was used. NanoTracking Analysis (NTA) was used to count the number of GMMA particles in solution and to estimate the number of PS chains per GMMA. An NS300 NanoSight instrument (Malvern), equipped with a CMOS camera and a 488-nm monochromatic laser beam, was used. Data acquisition and processing were performed with NTA software 3.2 build 3.2.16; more details on the analysis can be found in De Benedetto et al. (23).

Immunogenicity Studies in Animal Models

All animal sera used in this study were derived from mouse or rat immunization experiments performed at the GSK Animal Facility. Mice were immunized s.c. at days 0 and 28 (37), and Crl:CD 8-week-old female rats were immunized i.m. at days 0 and 28.
Mice and rats were bled from the retromandibular plexus and the tail vein, respectively. Rats were pre-warmed for 5 min in a warming cage at 37°C before bleeding. The final bleed in rats was performed under general anesthesia (alfaxalone 20 mg/kg + medetomidine 0.05 mg/kg + fentanyl 0.1 mg/kg). Blood was kept at 37°C for up to 2 h, or at RT for up to 3 h, in untreated collection tubes and then centrifuged for 10 min at 2,851 rcf and 4°C before serum collection. Animal models, immunization routes, and schedules were selected according to the PS antigens tested.

Anti-antigen-specific IgG levels were measured at days −1, 27, and 42 (day 40 for the study in rats) by enzyme-linked immunosorbent assay (ELISA) (38). Purified O-antigen from S. Typhimurium and streptococcal Group A Carbohydrate conjugated to human serum albumin (HSA) were used for ELISA plate coating at 5 and 1 µg/ml, respectively, in carbonate buffer, pH 9.6; purified Vi was used at 1 µg/ml in phosphate buffer, pH 7.0; purified MenA and MenC capsular PS were used at 5 µg/ml in PBS, pH 8.2; and Hib PS conjugated to HSA was used at 2 µg/ml in PBS, pH 7.4. ELISA units were expressed relative to a mouse antigen-specific antibody standard serum curve, with the best five-parameter fit determined by a modified Hill plot. One ELISA unit is defined as the reciprocal of the dilution of the standard serum that gives an absorbance value equal to 1 in this assay. Each mouse serum was run in triplicate.

Statistical Analysis

Datasets were analyzed using the two-tailed non-parametric Mann-Whitney test (for comparing the same time point between two different groups) or the one-tailed non-parametric Wilcoxon matched-pairs signed-rank test (for comparing different time points within the same group) with Prism (GraphPad Software). p-values less than 0.05 were considered statistically significant. A sketch of how such an analysis could be reproduced outside Prism is given below.
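As a rough illustration only, the five-parameter logistic (modified Hill) fit and the two non-parametric tests could be reproduced in Python with SciPy as sketched below. All numerical values are invented placeholders, not study data, and the exact 5PL parameterization used by the authors is not specified in the text.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import mannwhitneyu, wilcoxon

# One common five-parameter logistic (5PL) form of the Hill model:
#   y = d + (a - d) / (1 + (x / c)**b)**g
def five_pl(x, a, d, c, b, g):
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Hypothetical standard-curve data: serum dilution vs. absorbance
dilution = np.array([100., 300., 900., 2700., 8100., 24300.])
absorbance = np.array([2.80, 2.10, 1.30, 0.70, 0.30, 0.15])
params, _ = curve_fit(five_pl, dilution, absorbance,
                      p0=[3.0, 0.05, 1000.0, 1.0, 1.0], maxfev=10000)

# Two-tailed Mann-Whitney: same time point, two groups (hypothetical units)
group1 = [1200, 950, 1800, 2100, 1500]
group2 = [300, 450, 280, 520, 610]
print(mannwhitneyu(group1, group2, alternative="two-sided"))

# One-tailed Wilcoxon matched pairs: day 27 vs. day 42 within one group
day27 = [800, 950, 700, 1100, 900]
day42 = [1500, 1800, 1300, 2100, 1600]
print(wilcoxon(day27, day42, alternative="less"))
```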
RESULTS

Chemical Linkage of Polysaccharides to Generalized Modules for Membrane Antigens

PS with different structural features and sizes were conjugated to GMMA from different pathogens. MenA, MenC, and Hib oligosaccharides were terminally activated with the SIDEA linker (29) and randomly conjugated to lysines of GMMA surface proteins (Figure 1A). Alternatively, the oligosaccharides were terminally derivatized with ADH and linked to LOS on oxidized GMMA by reductive amination (Figure 1B). A similar approach was used for linking streptococcal GAC and S. Typhi Vi PS to GMMA. By adjusting the conjugation conditions, in particular by using different saccharide-to-protein molar ratios (as for the meningococcal oligosaccharides) or different buffer pH (as for Vi), it was easy to modulate the number of sugar chains per GMMA particle (Table 1). Formation of the saccharide-GMMA conjugates was verified by Western blotting analysis (Figure 2), and the amounts of total saccharide and total protein were quantified by HPAEC-PAD and micro BCA, respectively. The saccharide-to-protein w/w ratio, coupled with an estimate of the number of GMMA particles per ml measured by NTA, allowed us to calculate the average number of saccharide chains per GMMA particle (Table 1).

The impact that sugar length and linkage site on GMMA could have on the immune response induced by the conjugates was initially studied with MenA oligosaccharides linked to MenB GMMA. MenA oligosaccharides of different and non-overlapping lengths (degree of polymerization (DP) equal to 5-12, 16-26, and >36) were conjugated to proteins or LOS on MenB GMMA (Table 1, constructs 1-6) and tested in mice. GMMA alone or physically mixed with MenA oligosaccharides were used as negative controls, while the MenA-CRM197 conjugate was the positive control. Conjugation to proteins or LOS on GMMA resulted in the induction of a strong anti-MenA IgG response, at a level comparable with that of MenA-CRM197. We found that sugar length did not influence the MenA-specific serum IgG response, because no difference in antibody production was observed after immunization with the different MenA-GMMA conjugates (Figure 3A), regardless of whether conjugation was directed to LOS or proteins. Interestingly, the conjugates generated from saccharides attached to proteins invariably elicited a higher MenA-specific IgG response 2 weeks after the second immunization compared with MenA oligosaccharides linked to LOS (Figure 3A). However, all GMMA conjugates, independently of the attachment site of the meningococcal oligosaccharides to GMMA, induced antibodies with bactericidal activity against a homologous MenA strain (Figure 3B).

The ability of MenB GMMA to induce an immune response after the attachment of MenA oligosaccharides was verified by testing the bactericidal activity of the induced antibodies against three different MenB strains (Figure 3B). While bactericidal activity against the UK320 and UK104 strains was not impaired by conjugation, the activity against the New Zealand strain was negatively impacted. As bactericidal activity against this strain is mainly mediated by the PorA antigen on MenB GMMA (40, 41), we can speculate that the random conjugation of MenA oligosaccharides to proteins on GMMA could affect PorA structure and conformation. However, the same was true for the glycoconjugates obtained by linkage of the oligosaccharides to LOS on MenB GMMA, indicating that the saccharide chains probably masked some protein components, shifting the immune response toward themselves.

Next, Hib oligosaccharides were conjugated to MenB GMMA by targeting proteins or LOS (constructs 7-8, Table 1), and the Hib-specific serum IgG response was measured in rats, in comparison with Hib oligosaccharides mixed with GMMA and with the Hib-CRM197 conjugate (Figure 4). As observed for the MenA conjugates, the conjugate obtained by linking Hib to proteins induced a stronger anti-Hib IgG response than the conjugate produced by linking the oligosaccharides to LOS. Both GMMA conjugates elicited anti-Hib PS IgG titers significantly higher than Hib simply mixed with GMMA and comparable with Hib-CRM197.

After investigating the impact of saccharide length and of attachment via proteins or LOS on GMMA, we interrogated the effect of the density of the saccharide conjugated to GMMA particles, another important feature of glycoconjugate vaccines. We produced conjugates differing in the average number of MenA or MenC oligosaccharides linked per GMMA particle (constructs 9-12, Table 1). No major impact of oligosaccharide density on the anti-PS IgG response (Figure 5A) or on the functionality of the sera induced in mice was found (Figure 5B). However, control of the glycosylation density could be useful to fully preserve the immune response induced by GMMA per se: by testing the bactericidal activity of the sera against a panel of different MenB strains, we found that the larger the number of meningococcal oligosaccharide chains conjugated per GMMA particle, the greater the impact on the functionality of the elicited sera (this was particularly evident for the MenB NZ98/254 strain).
Therefore, linkage of fewer sugar chains per GMMA particle seems preferable. To further explore the effect of glycan length and density with a larger PS, conjugates formed by S. Typhi Vi PS attached to S. Typhimurium GMMA were generated (constructs 13-16, Table 1). Linkage of Vi to LPS on GMMA allowed the introduction of different numbers of PS chains per GMMA, while conjugation to proteins resulted in only a few Vi chains per GMMA particle. As previously verified for MenA and MenC (Figure 5), we found no impact of antigen density on the Vi-specific serum IgG response. However, the saccharide length in this case had a significant effect, because the longer Vi PS (48.5 kDa) induced significantly higher Vi-specific serum IgG titers than the shorter Vi (3.8 kDa) (Figure 6A). All conjugates induced an anti-S. Typhimurium O-antigen IgG response, preserving the immunogenicity of GMMA per se (Figure 6B).

Glycoconjugation to Generalized Modules for Membrane Antigens Promotes a Shift Toward a T-Independent Humoral Immune Response Based on the Type of Conjugated Saccharide

The display of PS on GMMA, especially at high density, generates repetitive epitope moieties on the GMMA surface that can facilitate cognate B-cell receptor cross-linking, which could lead to B-cell activation in the absence of T-cell help. A shift toward a strong and fast T-independent B-cell stimulation would not promote germinal center formation and the consequent generation of long-lived plasma cells secreting high-affinity antibodies and of memory B cells. Therefore, a T-independent B-cell response can have a negative impact on the efficacy of the humoral immune response, especially in infants or young children, and a detrimental effect on immunological memory and on the persistence of the antibody response. To investigate any potential T-independent nature of the humoral immune response induced by PS conjugated on GMMA, we evaluated different PS-GMMA conjugates by immunizing wild-type and nude mice, the latter devoid of mature T cells. We used MenC, Vi, and GAC PS conjugated to S. Typhimurium GMMA, so as to test PS with different structural features (constructs 12 and 17-19, Table 1). The MenC-GMMA conjugate was unable to induce a significant MenC-specific antibody response in nude mice compared with wild-type animals, clearly confirming the need for T-cell help to promote a humoral response against the MenC oligosaccharide (Figure 7A). On the contrary, Vi-GMMA induced a strong anti-Vi-specific serum IgG response in wild-type as well as nude mice, revealing that this PS promoted a T-independent humoral immune response (Figure 7B). Antibody levels induced in wild-type mice were high after the first dose, with no boost after re-injection; the same was verified when linking Vi PS to LPS or to proteins on GMMA. With the GAC-GMMA conjugate we observed an intermediate situation: nude mice immunized with the GAC-GMMA conjugate generated a GAC-specific serum IgG response, but it was significantly lower than that induced in wild-type mice (Figure 7C). Thus, GMMA-conjugated GAC PS elicits a weak T-independent saccharide response.

DISCUSSION

Conjugation to appropriate carrier proteins, providing T-cell help, is an established way of improving the immunogenicity of PS antigens, giving rise to immunological memory, isotype switching, affinity maturation, persistence of the antibody response, and the ability to induce adequate protection in infants and in children under 2 years of age (42-46).
Few carrier proteins have been used so far in licensed glycoconjugates (1), and in certain cases reduced immunogenicity against the PS hapten has been highlighted, owing to pre-existing immunity toward the protein (so-called "carrier epitope suppression") (47). Recent years have seen efforts to identify alternative carrier proteins, particularly ones with a dual role of carrier and antigen (3). Recently, we proposed GMMA as carrier for heterologous PS, showing their ability to enhance the antigen-specific humoral immune response compared with the antigen alone or physically mixed with GMMA (21). Compared with traditional carrier proteins, GMMA are nanoparticle systems with an optimal size for immune stimulation, presenting multiple copies of the PS and thus favoring B-cell activation. They possess immunostimulatory molecules, such as LPS, lipoproteins, and peptidoglycans, that can stimulate innate immunity and consequently enhance adaptive immunity (5, 7). Importantly, GMMA can be produced using a simple manufacturing process (15, 48).

Here, we developed conjugation chemistries to easily and efficiently link PS differing in structure and size to the GMMA surface, targeting both LPS and proteins on GMMA. Moreover, conjugation to GMMA was modulated so as not to impact the immune response induced by GMMA as antigen. This supports the use of GMMA in a dual role of carrier and antigen for the development of multicomponent vaccines covering various diseases at the same time. In particular, Salmonella and meningococcal diseases, used here as models, are both common in several countries of sub-Saharan Africa (49-51); Hib and MenB are two critical etiological agents of meningitis, and a single pan-meningococcal vaccine could offer a unique opportunity to combat meningococcal meningitis worldwide; finally, S. Typhi and non-typhoidal Salmonella are leading causes of disease and mortality in Africa (52).

Antigen length and density are parameters that can influence the immune response elicited by glycoconjugate vaccines (22). Here, by varying these features on GMMA, we observed that PS density does not seem to play a major role in the anti-saccharide-specific immune response induced in mice; indeed, a limited number of oligosaccharide chains linked to GMMA is sufficient to induce a strong immune response. On the other hand, saccharide length can play a role depending on the specific PS used. This confirms that, as for traditional glycan-protein conjugates, saccharide length needs to be investigated and optimized specifically for each antigen of interest.

Another relevant aspect of this work is the observation that the carrier effect of GMMA for PS is observed irrespective of whether the antigen is linked to GMMA proteins or to LPS/LOS, and that it can be dependent on T-cell help or not, based on the nature of the PS. Recently, it has been shown that glycan-protein conjugates induce a T-cell-dependent response through the generation in B cells of peptides or glycopeptides (depending on the nature of the conjugated sugar) that are presented to helper T cells (53, 54). Our finding suggests that the immunological mechanism underlying the "carrier" effect of GMMA for PS could be the result of different coexisting mechanisms, which would also depend on the nature of the linked PS. Interestingly, even when GAC was linked to LPS on GMMA, the immune response was strongly mediated by T-cell activation, as verified by the much lower response induced by the conjugate in nude mice.
This supports the finding that direct linkage of the PS to proteins is not needed, although co-presentation seems crucial. It is important that the interaction between the protein and PS moieties is strong enough to allow internalization in the same B cell, so as to ensure T-cell engagement (3).

Finally, our data show that linkage of certain PS (e.g., MenA and Hib) to proteins on GMMA can result in a higher anti-PS-specific IgG response and seems preferable to conjugation to the LPS. Recently, E. coli glycoengineered OMVs have been proposed for the expression of heterologous PS anchored to the lipid A-core as acceptor (11, 12, 55), and our findings can be informative for this approach as well. Compared with the chemical conjugation proposed here, glycoengineering holds the potential for simplified and lower-cost vaccine production. However, chemical conjugation can be applied more easily to OMVs/GMMA from different pathogens and to PS with different structures, and it can represent a fast tool for investigating how parameters such as those examined here impact the immune response elicited by these novel glycoconjugates, including glycoengineered OMVs. Additional nanoparticle systems, such as Qb (56, 57) and hepatitis B core antigen virus-like particles (58), are also being proposed as novel carrier systems to provide a strong anti-PS immune response. It will be interesting to compare GMMA and OMVs with these other systems for their ability to induce a strong response after only one injection, the persistence of the response, memory, and ultimately efficacy in infants, and to see whether they behave similarly, mainly owing to their particulate nature and display of multiple antigens, or whether specific features will make a difference.

In conclusion, we found that optimization of parameters such as sugar length and density is crucial to fully exploit the potential of GMMA as a platform for multicomponent vaccines, where GMMA can act as T-cell helper and antigen. The action of GMMA as carrier seems independent of the direct linkage of the sugar to the protein and presents some specificities that deserve further investigation. Additional studies, including evaluation of IgG subclasses, IgM, antibody affinity, and the cellular response, will be needed to further characterize the quality of the immune response elicited by GMMA conjugates and to better understand the mechanism of action of these novel carrier systems. Unraveling these immunological mechanisms could guide the design of even more effective GMMA-based vaccines and would be informative for other nanoparticle-based conjugates under development.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The animal study was reviewed and approved by the Animal Welfare Body of GSK, Siena, Italy, and by the Italian Ministry of Health.

AUTHOR CONTRIBUTIONS

FMi, RAd, and DP designed the study. FMi wrote the manuscript. FMi, RAl, RD, FS, FM, MC, DO, OP, CB, NB, GG, and BB performed the experiments and analyzed the data. FMi, FN, CB, BB, DP, and RAd supervised the research and reviewed the data. All authors contributed to the article and approved the submitted version.

FUNDING

The authors declare that this study received funding from GlaxoSmithKline Biologicals SA. The funder was not involved in the study design, collection, analysis, or interpretation of data, the writing of this article, or the decision to submit it for publication.
2021-09-14T13:15:10.959Z
2021-09-14T00:00:00.000
{ "year": 2021, "sha1": "bff931f9240fb4bcf1d16a3a7541c9adda6d50f9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.719315/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bff931f9240fb4bcf1d16a3a7541c9adda6d50f9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234545524
pes2o/s2orc
v3-fos-license
Ovarian Cancer Revealed by Paraneoplastic Cerebellar Degeneration Anti-Yo Positive: A Case Report Summary Neurological paraneoplastic syndromes are rare and often associated with gynecological cancer or small cell lung cancer. This article reports a case of ovarian cancer to discuss the difficulties in the management of neurological paraneoplastic syndromes. This is a case of paraneoplastic cerebellar syndrome with anti-Yo antibodies. Neurological syndromes, testing for onconeural antibodies and testing for underlying cancer provide a basis for the diagnosis. The anti-tumor treatment constitutes the mainstay of the care. Introduction Paraneoplastic neurological syndrome (PNS) is defined by the occurrence of a neurological syndrome which cannot be explained by a metastatic, iatrogenic, toxic or deficiency-related aetiology. There are several clinical presentations, of which the most reported in the literature are: paraneoplastic cerebellar degeneration (PCD), limbic encephalitis, encephalomyelitis, subacute sensory neuropathy, Lambert-Eaton syndrome, opsoclonus-myoclonus, dermatopolymyositis and pseudo-occlusive syndrome. PCD complicates many tumors such as breast cancer, lung cancer and some gynecological cancers. We report here a case of ovarian cancer discovered following a PNS in order to discuss the diagnostic and therapeutic difficulties. Case presentation A 55-year-old patient with a history of arterial hypertension was admitted to the Neurology Department for a rapidly worsening static and kinetic cerebellar syndrome. On admission, the patient was conscious, well oriented, responding appropriately but slowly to questioning. The patient also presented cerebellar dysarthria, major dysmetria predominantly in the upper limbs, with volitional dyskinesias, adiadochokinesia, asynergy, hypotonia of the lower limbs and an abduction attitude with a Babinski sign on the right, and bilateral vertical and rotatory nystagmus in both lateral gazes, with ocular fixation impossible. Cerebral MRI did not show any abnormalities in the intra-axial structures of the posterior cerebral fossa, and electroneuromyography was in favor of a sensory neuronopathy of the four limbs. The search for anti-neuronal antibodies anti-Hu, anti-Ri, anti-GAD, anti-GABA-B, anti-NMDAR, anti-AMPAR, anti-gangliosides and anti-sulfatides in blood and cerebrospinal fluid was negative. The anti-Yo antibodies, on the other hand, were positive. Thoraco-abdomino-pelvic CT revealed a heterogeneous bilateral ovarian tumor (65 × 42 mm on the right, 26 × 10 mm on the left) with left external iliac lymphadenopathy and bilateral common iliac and retroperitoneal lymphadenopathy (Figure 1). CA125 was 245 IU/mL. The patient underwent a total hysterectomy with adnexectomy, lymph node dissection, omentectomy and peritoneal cytology; histopathology concluded in a grade III endometrioid adenocarcinoma of tubo-ovarian origin. The hypothesis of a paraneoplastic cerebellar degeneration was retained. Adjuvant chemotherapy with carboplatin and paclitaxel was administered, and the thoraco-abdomino-pelvic scan showed no signs of disease progression. On the neurological level, the patient received two courses of treatment with polyvalent immunoglobulins and corticosteroids, without clinical improvement. The patient remains severely hampered by a cerebellar syndrome whose recovery appears to have stagnated.
Discussion This reported case illustrates a frequently encountered PNS: paraneoplastic cerebellar degeneration. The triad of clinical neurological syndrome, specific autoantibody positivity and tumor background was complete. Our case had a cerebellar syndrome; anti-Yo antibodies were positive in serum and CSF. The clinical presentations of PNS can be diverse, but eight classic syndromes should attract the attention of clinicians: limbic encephalitis, acute or subacute cerebellar degeneration, encephalomyelitis, opsoclonus-myoclonus, acute sensory neuropathy, Lambert-Eaton syndrome, pseudo-occlusive syndrome and dermatopolymyositis. The manifestations are most often subacute or acute, in a tumor context and in a patient with major risk factors for cancer [1]. PCD is most often associated with lung and gynecological cancers; its incidence and prevalence remain unknown. It affects both women and men. The mean age of onset of PCD reflects the age of onset of the associated cancers [2]. Symptoms consist of a bilateral static and kinetic cerebellar syndrome and dysarthria. Dizziness and nystagmus can be observed [3]. The pathophysiology of PCD is poorly studied. The hypothesis of an autoimmune mechanism has been suggested. It is believed to be a cross-reaction due to ectopic expression by the tumor of proteins normally expressed by the nervous system. The presence of circulating autoantibodies (in serum and cerebrospinal fluid (CSF)), specifically associated with PNS, is one of the characteristics of these syndromes. There are two types of PNS depending on the target of the antibodies associated with them, which can be directed against intracellular (onconeuronal) or membrane targets. Among the onconeuronal antibodies identified in PCD we find: Hu, Yo, CV2, Ri, Ma, Tr/DNER, amphiphysin, Sox1 [4]. Anti-Yo antibodies are associated with certain gynecological (ovarian and rarely uterine), breast, lung and gastric cancers. The presence of anti-Yo antibodies in a woman presenting with a cerebellar syndrome is in 90% of cases associated with breast or ovarian cancer. The discovery of these antibodies should encourage the search for this type of tumor [5]. Treatment of PCD is based on the management of the primary tumor. The difficulty of care also lies in the therapeutic aspect. Probably due to the rarity of PNS, no treatment protocol has been established so far. However, in the reported cases two therapeutic approaches emerge: treatment of the underlying cancer if it is diagnosed, and suppression of the immune response [6]. Antitumor treatment is the mainstay of care, which stabilizes or eliminates neurological signs and improves the prognosis. Symptomatic treatment without direct action on the tumor seems of uncertain benefit [6-9]. The combination of anti-tumor treatment with immunotherapy would give good results, especially in the acute phase of the symptoms. Corticosteroids, cyclophosphamide, tacrolimus, immunoglobulins, plasmapheresis and rituximab are commonly used concomitant treatments. They can be administered as monotherapy or in combination [10]. Conclusion A classic subacute neurological syndrome in a patient with risk factors for cancer should suggest a PNS. The diagnostic certainty of the underlying tumor of a PNS is often achieved late. Antitumor drugs are the main treatment. The prognosis depends on the rapidity of diagnosis and treatment. In a patient presenting a cerebellar syndrome with positive anti-Yo antibodies, active screening for gynecological cancer should be performed.
Consent for Publication Consent from the patient was obtained before publication of this case report. Availability of Data and Materials The data are available from the corresponding author on reasonable request.
2021-05-16T00:02:54.783Z
2020-12-20T00:00:00.000
{ "year": 2020, "sha1": "641e68886fd13b05bbcdaaf6b24c10a15d6c3aa7", "oa_license": "CCBYNCSA", "oa_url": "https://ijirms.in/index.php/ijirms/article/download/1016/780", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "902b4726cfd0a7041b4cde1569a5542ee40f9f5b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1100831
pes2o/s2orc
v3-fos-license
Enhanced production and organic solvent stability of a protease from Brevibacillus laterosporus strain PAP04 A bacterial strain (PAP04) isolated from cattle farm soil was shown to produce an extracellular, solvent-stable protease. Sequence analysis using 16S rRNA showed that this strain was highly homologous (99%) to Brevibacillus laterosporus. Growth conditions that optimize protease production in this strain were determined as maltose (carbon source), skim milk (nitrogen source), pH 7.0, 40°C temperature, and 48 h incubation. Overall, conditions were optimized to yield a 5.91-fold higher production of protease compared to standard conditions. Furthermore, the stability of the enzyme in organic solvents was assessed by incubation for 2 weeks in solutions containing 50% concentration of various organic solvents. The enzyme retained activity in all tested solvents except ethanol; however, the protease activity was stimulated in benzene (74%) followed by acetone (63%) and chloroform (54.8%). In addition, the plate assay and zymography results also confirmed the stability of the PAP04 protease in various organic solvents. The organic solvent stability of this protease at high (50%) concentrations of solvents makes it an alternative catalyst for peptide synthesis in non-aqueous media. Introduction Organic solvents are extremely toxic to microorganisms. These chemicals have been shown to cause lysis following cell penetration, owing to disruption of the cell membrane and internal structures (1). However, some bacteria are able to develop stability in organic solvents by various adaptations, such as solvent efflux pumps, rapid membrane repair, lower cell membrane permeability, increased membrane rigidity, and decreased cell surface hydrophobicity (1). Organic solvent tolerance is a strain-specific property, and the toxicity of a solvent correlates with the logarithm of its partition coefficient in n-octanol and water (log Pow) (2); organic solvents with low log Pow values (1.5-4.0) are considered more toxic than those with higher log Pow values (3). Most bacterial enzymes are both less active and less stable in the presence of organic solvents. Because of enzyme denaturation, peptide synthesis rates are also very low in organic solvents (4). Several methods have been employed to improve enzyme activity and stability in organic solvents, such as chemical modification, immobilization, protein engineering, and directed evolution (5). However, some naturally occurring solvent-tolerant strains are able to produce enzymes that retain their stability and activity in organic solvents without any need for modification or engineering (6). These natural solvent-stable enzymes can be found in bacterial strains collected from the environment (e.g., soil samples), following screening methods to isolate potent strains capable of producing protease in organic solvents. Proteases catalyze the hydrolysis of peptide substrates in normal aqueous conditions, and the synthesis of peptides in non-aqueous conditions (5,7). For effective industrial applications, enzymes must be active and stable at high concentrations of organic solvents, such that sufficient amounts of product can be recovered while contaminants and side reactions are eliminated (5). Microbial proteases represent one of the largest classes of industrial enzymes, accounting for approximately 40% of the total worldwide sales of enzymes (8).
These proteases can be produced in large quantities and genetically manipulated to increase activity much more easily than proteases derived from plants and/or animals (9). Most organic solvent-stable proteases have been isolated and characterized from gram-negative bacteria (10-13); however, a few are available from gram-positive bacteria (14,15). The production of solvent-stable proteases by microorganisms can be influenced by factors such as growth media, incubation period, pH, temperature, and sources of carbon and nitrogen (4). Even small improvements in biotechnological enzyme production processes have resulted in greater commercial success. This study describes the isolation of the Brevibacillus laterosporus strain PAP04 and the production of an organic solvent-stable enzyme from this strain that, to the best of our knowledge, has not been previously reported. Material and Methods Isolation of organic solvent-stable microorganisms Soil samples were collected from cattle farm sites in South Korea. Organic solvent-stable bacteria were isolated from soil samples according to established methods (10). Briefly, 1 g of soil sample was suspended in 10 mL of sterile water by shaking, and 5 mL of this suspension were added to 250 mL bottles containing 25 mL of Lysogeny broth (Sigma, USA) supplemented with toluene and benzene (2.5% v/v each). Culture vessels were sealed with chloroprene rubber stoppers to prevent evaporation of organic solvents and then incubated at 37°C for 72 h on a shaker at 180 rpm. Next, 5-mL aliquots of culture were transferred into fresh media and cultured again under the same conditions. These cultures were diluted and plated onto skim milk agar media (1 g/L yeast extract, 20 g/L agar, 1% skim milk) lacking organic solvents, and then incubated at 37°C for 36 h to screen for protease-producing strains. These strains were purified and screened again on skim milk agar plates for further confirmation. Selection of a highly potent solvent-stable strain Bacteria were inoculated into 25 mL of Lysogeny broth medium and incubated at 30°C for 4 h with shaking at 180 rpm. About 0.5 mL of this culture was transferred into 50 mL of protease production medium (10 g/L peptone, 0.5 g/L (NH4)2SO4, 0.3 g/L MgSO4·7H2O, 1 g/L CaCl2·2H2O, 1 g/L NaCl, 10 mL glycerol, pH 7.0). The inoculated flasks were incubated at 37°C for 48 h with shaking at 180 rpm. After incubation, the culture was centrifuged at 11,100 g for 10 min at 4°C. To obtain a strain capable of producing high levels of organic solvent-stable proteases, strains isolated from plate screening were further screened with organic solvents (benzene and toluene). Solvents were added to 1 mL of supernatant, to reach a final concentration of approximately 25%, and tubes were covered with aluminum foil. These mixtures were incubated at 37°C for 24 h with shaking at 100 rpm. The residual protease activity was measured as described below. Based on the initial screening by plate assay and subsequent stability tests in organic solvents, the strain PAP04 was selected for further studies. Identification of the selected strain by 16S rRNA sequencing The selected strain PAP04 was identified by 16S rRNA sequencing as follows. Genomic DNA was extracted using a genomic DNA purification kit (Promega, USA) and then used as a template to amplify 16S rRNA sequences by polymerase chain reaction (PCR) using the universal 16S rRNA gene primers: 8-27F, 5'-AGAGTTTGATCCTGGCTCAG-3' and 1472R, 5'-TACGGYTACCTTGTTACGACTT-3'.
PCR products were then sequenced, and 16S rRNA gene sequences were compared to other nucleotide sequences by Basic Local Alignment Search Tool (www.ncbi.nlm.nih.gov/blast). Optimization of solvent-stable protease production Protease production was assessed at 24-h intervals for incubation times up to 72 h. Cell-free supernatants were collected every 24 h following centrifugation at 11,100 g for 10 min at 4°C; these supernatants were used to determine protease activity. The following carbon sources were used at a 1% concentration in growth media: glucose, glycerol, lactose, maltose, and sucrose. Carbon sources were sterilized separately and added aseptically to autoclaved media. The following nitrogen sources were also tested: casein, corn steep liquor, gelatin, peptone, and skim milk. Protease activity was determined at pH values ranging from 6.0 to 11.0 (adjusted prior to autoclaving) and at temperatures ranging from 20 to 60°C. Protease assay Protease activity was measured using a previously described method (16) with modifications. Briefly, a 500-μL aliquot of culture supernatant was mixed with 500 μL of 100 mM Tris-HCl buffer, pH 8.0, containing 1% (w/v) casein (used as a substrate) and incubated for 30 min at 37°C. Reactions were stopped by the addition of 500 μL of 20% trichloroacetic acid and incubated at room temperature for 15 min, followed by centrifugation at 13,300 g for 15 min to remove precipitates. The absorbance at 280 nm for each supernatant was determined. One unit of protease activity was defined as the amount of enzyme required to liberate 1 μg of tyrosine in 1 min. Effect of organic solvents on the stability of crude protease Crude protease from supernatants was filtered through a 0.22-μm membrane. Enzyme solutions were placed in screw-capped tubes and mixed with the following organic solvents at a 50% final concentration: acetone, benzene, chloroform, dimethylformamide, dimethyl sulfoxide (DMSO), ethanol, hexane, methanol, isopropanol, and toluene. These mixtures were incubated at 30°C for 2 weeks with shaking at 100 rpm. After incubation, each sample was carefully withdrawn from the solution, or from the aqueous phase in the case of water-immiscible solvents. Residual protease activity was determined as described above; controls contained the enzyme solution lacking organic solvents. Enzyme stability is reported as the protease activity relative to the control. Substrate gel electrophoresis analysis For gelatin zymogram analysis, sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was performed according to a previously established method (17), with minor modifications. Samples were electrophoresed on 10% polyacrylamide gels containing 0.1% gelatin. Following electrophoresis, gels were rinsed with 0.25% Triton X-100 and incubated for 1 h at 37°C in 50 mM Tris-HCl buffer, pH 8.0. Protease activity was visualized by staining gels with Coomassie brilliant blue R-250 (Sigma, USA). Isolation and identification of solvent-stable strains Soil samples were mixed with media containing toluene and benzene (2.5% each), then incubated for 3 days at 37°C and plated on skim milk agar media. A total of 22 bacteria were able to produce clear zones indicating hydrolysis of the substrate, with strain PAP04 showing the largest clear zone. Screening media were supplemented with substrate for selection of potent protease-producing strains.
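As a concrete illustration of the arithmetic behind the assay and stability measurements above, the following minimal R sketch (R being the language used elsewhere in this collection) converts A280 readings into activity units via a tyrosine standard curve and expresses solvent-treated activity relative to the solvent-free control. All numeric values, including the standard-curve slope and the absorbance readings, are hypothetical placeholders rather than data from this study.

# Hypothetical standard-curve slope: absorbance at 280 nm per ug of tyrosine
a280_per_ug_tyrosine <- 0.005

# One unit liberates 1 ug of tyrosine per minute; the reaction ran for 30 min
activity_units <- function(a280, reaction_min = 30) {
  (a280 / a280_per_ug_tyrosine) / reaction_min
}

control <- activity_units(0.45)  # enzyme incubated without solvent
treated <- c(benzene = activity_units(0.78),
             acetone = activity_units(0.73),
             ethanol = activity_units(0.44))

# Stability reported as activity relative to the solvent-free control (%)
residual_pct <- 100 * treated / control
print(round(residual_pct, 1))

With these illustrative inputs, benzene comes out at roughly 173% of control and ethanol just below 100%, mirroring the pattern of stimulation and near-complete retention reported below.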
To reconfirm each strain's stability in solvent, bacteria were inoculated into protease production media, and crude enzyme was mixed with toluene and benzene, followed by evaluation of protease activity. Enzyme activity from the PAP04 strain was significantly stable in both solvents; therefore, this strain was selected for further studies. PCR was utilized to amplify the 16S rRNA sequence from the PAP04 strain, which was then purified and used for sequencing. The 16S rRNA sequence (1443 bp) from the strain PAP04 was compared to that from other bacterial species and shown to exhibit a high similarity (99%) with B. laterosporus LMG15441 by phylogenetic tree analysis (Figure 1). Organic solvent-stable protease production Culture conditions were optimized to enhance the level of protease production. The selected strain PAP04 was cultured on protease production medium and demonstrated enzyme activity up to 72 h, with maximum protease production occurring at 48 h of incubation (Figure 2A). The effect of various carbon sources on solvent-stable protease production is shown in Figure 2B: glucose, glycerol, and sucrose yielded low levels of protease, while maltose was able to increase protease production significantly; lactose yielded an intermediate effect. The effect of different nitrogen sources was also investigated to optimize protease production in media containing maltose as the carbon source. Skim milk was determined to be the most effective nitrogen source to improve the level of protease production in these conditions (Figure 3A), while other nitrogen sources, such as casein and gelatin, yielded only moderate levels of protease, and corn steep liquor and peptone inhibited protease production. Of the various pH values tested, significant protease production was observed from pH 6 to pH 8, with the highest enzyme activity observed at pH 7.0 (Figure 3B). Protease activity decreased as pH levels increased above 8.0 (Figure 3B); at highly alkaline pH (11), enzyme activity decreased by approximately 90%. Among the various temperatures tested, the highest enzyme activity was observed at 40°C (Figure 4); enzyme activity was significantly reduced when the temperature was increased above this level. Stability of crude protease in various organic solvents Protease stability was assessed in the presence of various organic solvents (at a 50% concentration) with log Pow values ranging from -0.24 to 3.6. Residual activity was maintained at a 100% level (compared to that of media lacking solvent) or greater following incubation in all tested organic solvents (Figure 5), except ethanol, which otherwise maintained a significant level of stability (97.1%). Several solvents (acetone, benzene, chloroform, DMSO, hexane, methanol, isopropanol, and toluene) increased enzyme activity, in particular benzene, acetone, and chloroform, which yielded 74, 63, and 54.8% increases in activity, respectively (Figure 5). Enzyme stability was also confirmed by plate assay and zymogram analysis using the same samples. For plate assays, solid medium was prepared with skim milk as the substrate; clear zones indicating hydrolysis of this substrate were observed for all solvent-treated samples to a level similar to or greater than the control (Figure 6). Zymography analysis confirmed that protease stability was maintained in organic solvents; two clear protease bands present in the control lane were also present in all lanes bearing samples treated with organic solvents (Figure 7).
These results confirmed that the protease produced by strain PAP04 demonstrated induced activity and enzyme stability in the presence of hydrophobic and hydrophilic solvents. Discussion To isolate solvent-stable bacterial strains, soil samples were cultured in media containing solvents such as toluene and benzene, which selected for solvent-tolerant bacteria. Protease-producing strains were screened on the basis of the hydrolysis of substrate (skim milk) on agar plates. Among the 22 isolated strains, strain PAP04 showed the largest clear zone around colonies. This potent strain was further selected on the basis of organic solvent stability, as it was found to be significantly stable in the presence of toluene and benzene. Strain PAP04 was identified as B. laterosporus by the 16S rRNA sequencing method. Protease production is generally influenced by nutritional factors (carbon and nitrogen sources) and environmental conditions (pH, temperature, and incubation periods). There have been several reports on protease production by various bacteria (18-21), including those that have investigated the production of organic solvent-stable proteases (4,6). In our study, peak protease production was observed at 48 h of incubation, after which activity likely decreased owing to nutrient depletion. In other studies, bacteria produced a high level of protease from 48 to 72 h of incubation (6,13,15). Culture conditions, particularly carbon and nitrogen sources, play an important role in stimulating the synthesis of organic solvent-stable proteases. The requirement for specific carbon sources differs from strain to strain, and in this study, maltose was found to increase protease production. The presence of maltose as a carbon source in culture media was also shown to enhance protease production by Pseudomonas aeruginosa PseA (22). In addition, PAP04 protease activity was increased approximately 2.2-fold in the presence of skim milk, reflecting the positive effects of this nitrogen source that have also been shown in Pseudoalteromonas arctica PAMC 21717 (23) and Bacillus sp. N.40 (24). These results confirm that the specific nitrogen source is also critical to improve protease production. Temperature is another important factor in enzyme synthesis. Most studies have reported that organic solvent-stable bacterial growth and protease production are optimal at temperatures less than 30°C, while some studies have utilized high temperatures to increase both the rate of biotransformation reactions and the solubility of otherwise water-immiscible substrates (25). The low level of protease production observed at high temperatures in our study is likely due to the thermolability of the protease. Our results were similar to those of Gupta and Khare (4), who reported an optimum pH of 7.0 for protease production and decreased enzyme synthesis with increasing alkalinity. After optimization of media and culturing conditions, the yield of protease was increased approximately 5.91-fold compared to standard conditions. Most bacteria are not stable in the presence of organic solvents, as these chemicals enter bacteria and destroy the cell membrane, causing cell lysis (26). However, some gram-positive bacteria possess mechanisms to tolerate organic solvents, such as induction of general stress regulation, production of organic solvent-deactivating enzymes, and formation of endospores (27,28). In the present study, the enzyme produced by B.
laterosporus PAP04 was stable in all tested hydrophilic and hydrophobic solvents, which were assayed at 50% concentrations. Of note, the solvents benzene, chloroform, and acetone actually increased enzyme activity. Organic solvent-stable enzymes are more attractive for many industrial applications, and most proteases are stable only in some hydrophilic or hydrophobic solvents (6,11). The log Pow value is defined as the logarithm of a solvent's partition coefficient in a standard n-octanol/water biphasic system (29). Interestingly, benzene and chloroform have the same log Pow values, and similar levels of induced protease activity were observed with these two solvents in our assays. In contrast, some research has reported completely different stabilities in these solvents for proteases produced by Pseudomonas species (10,13). Feng et al. (30) reported that enzyme activity can be induced at log Pow values above 4.0, while approximately 58-65% of activity is lost at log Pow values less than 1.0; a similar effect was also observed with an organic solvent-stable protease produced by P. aeruginosa (11). Biocatalysis in non-aqueous media has several advantages, such as high solubility of hydrophobic substrates, reduced microbial contamination, and reusability (6). Natural organic solvent-stable enzymes are useful for various applications employing organic solvents as reaction media, as they can be used without any modifications to stabilize the enzymes (5). Several studies have reported enzyme stability in the presence of low concentrations of solvents (4,15,30). However, high concentrations of solvents (above 50%) are required to reduce unwanted hydrolysis during synthesis of peptides and esters (31). Plate assays and zymography analysis also confirmed the stability of the PAP04 protease in multiple organic solvents. These results indicate that the protease from B. laterosporus PAP04 is highly suitable for various industrial applications, mainly for peptide synthesis in non-aqueous media.
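To make the log Pow reasoning above concrete, here is a small R sketch that classifies some of the tested solvents against the thresholds cited from the literature (large activity loss expected below log Pow of about 1.0, induction more likely at higher values). The log Pow figures are approximate literature values used purely for illustration, not measurements from this study.

# Approximate literature log Pow values for a subset of the tested solvents
solvents <- data.frame(
  name    = c("ethanol", "acetone", "benzene", "chloroform", "toluene", "hexane"),
  log_pow = c(-0.31, -0.24, 2.0, 2.0, 2.5, 3.5)
)

# Thresholds paraphrased from the generalisations cited in the discussion
solvents$expected <- cut(solvents$log_pow,
                         breaks = c(-Inf, 1.0, 4.0, Inf),
                         labels = c("large activity loss expected",
                                    "toxic range; outcome strain-specific",
                                    "activity induction more likely"))
print(solvents)

Against these general trends, the PAP04 protease retained or even gained activity at log Pow values below 1.0, which is what makes its behavior notable.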
2017-08-15T15:51:03.315Z
2016-03-18T00:00:00.000
{ "year": 2016, "sha1": "21e8fa7fbead7485346f81a3768e391a3d7dba53", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/bjmbr/v49n4/1414-431X-bjmbr-1414-431X20155178.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "21e8fa7fbead7485346f81a3768e391a3d7dba53", "s2fieldsofstudy": [ "Biology", "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
221521498
pes2o/s2orc
v3-fos-license
Properly initialized Bayesian Network for decision making leveraging random forest INTRODUCTION In the era of TV and radio, keywords for promoting a product were determined unilaterally by its maker. SNS has now become popular, and consumers spread impressions and evaluations of various products in their own keywords across the Internet. Therefore, it is necessary for makers to understand which keywords should be used in the era of SNS. For this purpose, several useful studies have been proposed [1-3]. In this study the authors focus on applying a Bayesian network to select appropriate keywords. As described in Section 2, the authors pick up an existing example of a beer maker and its representative product, which are noted "the maker" and "the product" respectively in this paper. If a consumer posts a keyword related to beer, such as "sharpness", on SNS, the maker needs to know how often this keyword occurs with the product, in order to determine whether they should include "sharpness" in their marketing messages on SNS. If the keyword occurs more often on SNS together with the product, it is considered to be engaged with and closely related to the product. It is called an "engaged keyword" in this paper. On the other hand, a keyword that occurs less often with the product is called a "non-engaged keyword". The business success of the maker depends on how to select engaged keywords effectively. For extracting engaged keywords, several methods, such as leveraging co-occurrence network analysis [4,5] or Word2Vec [6-8], have already been proposed. Even after applying these methods, there is still the issue of how to select the keywords actually to be used from those obtained. As the budget for marketing and advertising is often limited in business, the maker needs to pick up especially effective engaged keywords among them. For this purpose, the authors apply Bayesian network analysis, which has the advantage that relationships between the product and engaged keywords are visualized in the form of a DAG (directed acyclic graph) [9,10]. As described in Section 2, in the Bayesian network analysis of this study, two sets of tweet records are retrieved from Twitter. One is "engaged tweets", which contain the product. The other is "non-engaged tweets", which only contain the ordinary word "beer". These two sets are combined into a dataset. Each record of the dataset has columns indicating whether it contains each engaged keyword or not, and also has an engaged/non-engaged flag as the last column. By applying Bayesian network analysis, inferences can be performed such as: tweets with keyword A are likely to be engaged tweets, but in combination with keyword B they are not. However, as mentioned in Section 2, a situation can occur in which, even when the inference indicates that tweets with the combination of keyword A and keyword B are likely to be engaged tweets, the actual search result on Twitter shows the opposite. The cause of such disagreement would reside in a characteristic of Bayesian networks. In the example above, the engaged/non-engaged flag is a kind of target node (explained variable), which has a different role from the nodes of engaged keywords (explanatory variables). But a Bayesian network usually handles all these nodes equally. Therefore, some adjustment will be required to apply a Bayesian network to a decision-making task like this one. On the other hand, Random forest is a proven method for analyzing the influence of explanatory variables upon an explained variable.
[11,12] As described in Section 3, the authors propose to configure the initial state of the Bayesian network leveraging the result of Random forest analysis. The initial state consists of a few nodes around the target node and several edges between these nodes and the target. The former are called "initial nodes" and the latter are called "initial edges" in this paper. Initial nodes are extracted by measuring the mean decrease of the Gini coefficient calculated over the decision trees of the Random forest, because explanatory variables with much influence on the explained variable show a significant decrease of the coefficient. Directions of edges correspond to conditional probabilities among the nodes connected with those edges. Therefore, directions of initial edges are designated based on likelihood, measured by the similarity of the conditional probability distributions between the actual data and the predicted result of the Random forest. The similarity is calculated with the Wasserstein metric. Initial nodes and initial edges are given as an initial state for the construction of the Bayesian network. As confirmed in Section 4, the inference result of the Bayesian network with this initial state coincides well with the actual search result on Twitter. Configuring the initial state leveraging the result of Random forest analysis is considered to be a kind of adjustment of the Bayesian network to perform decision making with explained/explanatory variables as nodes. Section 5 summarizes this study. Finally, in Section 6, the authors discuss future work such as the case of multiple explained variables. As a tool for retrieving and analyzing data, the representative statistical computing language and environment R is used in this study. Example case and current issue For clarification and demonstration, the authors pick up an existing example of a beer maker and its representative product. In this paper they are called "the maker" and "the product" respectively. Prior to this study, the authors extracted 18 engaged keywords for the product leveraging Word2Vec [13]. At first, two sets of tweets are searched and retrieved on Twitter. One is the set of tweets which include the product. The other consists of tweets including an ordinary keyword of the business domain, such as "beer" in this case. Then Word2Vec analysis is applied to the mixture of the two sets. If a keyword shows a closer direction to the product than to the ordinary keyword in the vector space obtained by Word2Vec, the keyword is considered to be more closely related to the product than other keywords. With this procedure, engaged keywords for the product are obtained as shown in Table 1. The purpose is to pick up engaged keywords more related to the product from Table 1 with Bayesian network analysis. For performing Bayesian network analysis, a dataset is retrieved on Twitter. As the tweet retrieving tool, the "rtweet" library of R is used in this study. The dataset consists of two types of tweets. One is "engaged tweets", which contain the product. The other is "non-engaged tweets", which only contain the ordinary word "beer". Each record of the dataset has columns indicating whether it contains each engaged keyword (contain = 1/not contain = 0) and also has an engaged/non-engaged flag as the last column by the name of "engaged" (engaged = 1/not engaged = 0). The total number of tweets is 1046 (engaged: 357, non-engaged: 689). The structure of the dataset is shown in Figure 1. Current issue The obtained Bayesian network is shown in Figure 2.
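As a minimal sketch of the workflow just described, the following R code (the paper itself uses R with the "bnlearn" library) builds a synthetic stand-in for the 1046-tweet dataset, learns a structure with Tabu search, and runs the kind of conditional query discussed below. The synthetic data and its dependence of "engaged" on Word8 are assumptions made purely for illustration, since the authors' dataset is not public.

library(bnlearn)

# Synthetic stand-in: 18 binary keyword indicators plus the engaged flag,
# all coded as factors as bnlearn expects for discrete networks
set.seed(1)
n <- 1046
words <- as.data.frame(lapply(1:18, function(i)
  factor(rbinom(n, 1, 0.3), levels = c(0, 1))))
names(words) <- paste0("Word", 1:18)
# make the flag loosely depend on Word8 so the query below is non-trivial
p_eng <- ifelse(words$Word8 == "1", 0.6, 0.3)
df <- cbind(words, engaged = factor(rbinom(n, 1, p_eng), levels = c(0, 1)))

# Structure learning with the Tabu search algorithm, as in the paper
dag <- tabu(df)

# Fit conditional probability tables and run Table 2-style queries, e.g.
# P(engaged = 1 | Word8 = 1) vs P(engaged = 1 | Word8 = 1, Word18 = 1)
fit <- bn.fit(dag, df)
p1 <- cpquery(fit, event = (engaged == "1"), evidence = (Word8 == "1"))
p2 <- cpquery(fit, event = (engaged == "1"),
              evidence = (Word8 == "1" & Word18 == "1"))
c(case1 = p1, case2 = p2)

Note that cpquery estimates these probabilities by sampling, so repeated runs will vary slightly; that is expected and does not affect the qualitative comparison made in the paper.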
The Tabu search algorithm [14,15] in the "bnlearn" library of R is used for learning the structure of the Bayesian network. The target node ("engaged") is marked as gray. It is directly connected with engaged keywords such as Word4, Word5, Word7, Word8, Word9, Word17 and Word18. The network shows several visual insights. For example, the connection of the three nodes "engaged", "Word4" and "Word8" includes a tail-to-tail relationship (there are two edges from "Word4" to both "engaged" and "Word8"), which is one of the three basic connections composing a Bayesian network. In a tail-to-tail connection, after "Word4" is determined, "engaged" is conditionally independent of "Word8". But the actual network in Figure 2 is much more complicated; for example, "Word8" has an edge directly connected to "engaged". A few examples of probabilistic inference for those three nodes are shown in Table 2. Table 2 shows that "engaged" is strongly affected by "Word8", but not when the condition "Word4" = 1 is given. The result of the inference indicates that "Word8" is not preferable for use with "Word4" in making marketing messages effectively engaged with the product. As "Word4" has similar relationships with other nodes, the network suggests that the maker should pay attention when using "Word4". However, some of the probabilistic inferences with the network in Figure 2 do not coincide with the actual search results on Twitter. For example, as "Word18" ("taste") is a commonly used word, its combination with other taste-related words, such as "Word8" ("dry"), would be a candidate for appealing the product on SNS. Actually, the result of the inference is shown in Table 3. Table 3. A case of the inference not corresponding with the actual data. Table 3 shows that tweets are likely to be engaged with the product when "Word8" is used with "Word18". On the other hand, an actual search on Twitter is performed under these two search conditions: Condition 1 (corresponding to Case 1): the product and Word8; Condition 2 (corresponding to Case 2): the product, Word8 and Word18. The emerging ratio of tweets that match Condition 1 is 58.0%, while for Condition 2 it is 9.0%. The actual search result on Twitter indicates that "Word18" should not be used with "Word8" in order to make "Word8" engaged with the product. The cause of this disagreement would reside in a characteristic of Bayesian networks. In this case, the node "engaged" is a kind of explained variable, which has a different role from other nodes acting as explanatory variables. But a Bayesian network usually handles all these nodes equally. Therefore, some adjustment will be required. PROPOSED METHOD To resolve the issue in Section 2, the authors propose to optimize the Bayesian network by configuring an initial state which reflects and emphasizes the relationships between the target node and the others. The authors propose to configure the initial state of the Bayesian network leveraging the result of Random forest, as it is a proven method for analyzing the influence of explanatory variables upon an explained variable. The initial state consists of a few nodes around the target node and several edges between these nodes and the target. The former are called "initial nodes" and the latter are called "initial edges" in this paper. In this section, the detailed steps of the proposed method are described. The outline of the proposed method is shown in Figure 3. Applying Random forest At first, Random forest analysis is applied to the dataset, in which "engaged" is the explained variable and the 18 engaged keywords ("Word1"-"Word18") are the explanatory variables.
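A hedged sketch of this step in R, reusing the synthetic data frame df from the earlier sketch: fit the Random forest with the paper's parameters (mtry = 4, ntree = 500, 80/20 split) and rank explanatory variables by mean decrease of the Gini coefficient to pick the four initial nodes. The seed and split indices are illustrative assumptions.

library(randomForest)

# 80/20 split as described in Section 3.1 (the held-back 20% is used later)
set.seed(2)
idx   <- sample(nrow(df), 0.8 * nrow(df))
train <- df[idx, ]
test  <- df[-idx, ]

# Classification forest with the paper's parameter choices
rf <- randomForest(engaged ~ ., data = train, mtry = 4, ntree = 500)

# Rank variables by mean decrease of the Gini coefficient and keep the top 4
gini <- importance(rf)[, "MeanDecreaseGini"]
initial_nodes <- names(sort(gini, decreasing = TRUE))[1:4]
initial_nodes  # in the paper these came out as Word4, Word5, Word8, Word9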
As an analyzing tool, the "randomForest" library of R is used in this study. Random forest is an advanced algorithm based on decision trees [17], in which many trees are generated with randomly selected explanatory variables and the result is obtained as the majority vote of those trees. Therefore, these two parameters should be given properly in advance. Parameter 1: the number of explanatory variables selected while generating trees. Parameter 2: the total number of trees generated. Along with the result of a grid search approach [18], parameter 1 is set to 4 and parameter 2 is set to 500 in this study. The dataset is split into training data (80% of the 1046 records) and the remainder is left for the out-of-bag check. The estimated error ratio in the out-of-bag check is 29.2%, which is higher than in usual decision-making tasks, because it is not decisively determined whether a tweet is engaged or not in this example. Extracting initial nodes via the decrease of the Gini coefficient The second step of the proposed method is to extract a few explanatory variables as initial nodes. Initial nodes should be explanatory variables which have more influence on the target (explained variable). While processing the Random forest, the Gini coefficient [19] as defined in Equation 1 is calculated for a node in each tree: Gini(i) = 1 - Σ_k p(i, engaged = k)^2 (1), where Gini(i) is the Gini coefficient of node i and p(i, engaged = k) is the frequency ratio of records within node i of which the value of "engaged" is k (= 0 or 1). Furthermore, the Gini decrease according to an explanatory variable x is defined as in Equation 2: GiniDec(x) = Σ_i [Gini(i) - w_L·Gini(i_L) - w_R·Gini(i_R)] (2), summed over the nodes i that split on x and averaged over all trees, where i_L and i_R are the child nodes of i and w_L and w_R are the proportions of records sent to each child. As highlighted in Figure 4, Word4, Word5, Word8 and Word9 have relatively more influence on the target. Therefore, these four explanatory variables are selected as initial nodes. Designating initial edges according to the Wasserstein metric The latter half of configuring the initial state is to designate initial edges between the initial nodes and the target. As shown in Figure 5, in a Bayesian network, an edge from Node A to Node B represents that the conditional probability of Node B given Node A is defined, and vice versa. In order to properly reflect the influence of the initial nodes on the target, initial edges should also be defined leveraging the result of the Random forest analysis. If one direction is more appropriate than the other in terms of the stochastic model, its conditional probability is also more similar to the distribution of the dataset than the other. As described in Section 3.1, 20% of the dataset is retained for the out-of-bag check. By using this retained data, the predicted result of the target node can be obtained. Then two values of the conditional probability for the initial nodes are obtained: one from the retained data (true distribution) and the other from the predicted data (learned distribution). The purpose here is to determine which of the conditional probabilities, that on the left of Figure 5 or that on the right, is closer when the true distribution and the learned distribution are compared. Kullback-Leibler divergence [20] is often used to compare probability distributions, but it cannot be applied in this case, because it is a measure for bringing a learned distribution closer to a specific true distribution and does not provide a distance between two different distributions. The Wasserstein metric [21], which is one of the methods in the area of optimal transport, is commonly used for defining the distance between two different probability distributions. Given two probability distributions A and B, the Wasserstein metric of p-th order is described as in Equation 3.
WS(A, B) = inf E(d(A, B)^p) (3), where d is the distance defined on the domain of the two probability distributions, E denotes the expected value over the probability distributions, and inf is the operation calculating the infimum. Figure 5. Relationship between direction of edge and conditional probability. As the domain of the probability distributions consists of node values, which are 0 or 1 in this case, it is sufficient to calculate the absolute difference between the random variables. Therefore, the order of the Wasserstein metric is configured as 1. For calculating the Wasserstein metric, the "wasserstein1d" library of R is used in this study. Here, let WS*(nodeX, nodeY) be the Wasserstein metric between the true distribution and the learned distribution in terms of the conditional probability corresponding to an edge from nodeX to nodeY. A smaller value of WS(A, B) means the two probability distributions A and B are closer. Therefore, the directions of the edges between the target ("engaged") and the initial nodes ("Word4", "Word5", "Word8", "Word9") are determined by comparing WS*(engaged, Word*) and WS*(Word*, engaged). The edge direction with the smaller WS* value is designated as the initial edge. The result is shown in Table 4. Inference with the adjusted Bayesian network By the procedure in Section 3.3, the initial nodes and initial edges are obtained. In the final step, these two conditions are set as the initial state of the Bayesian network and structure learning is performed in the same way as in Section 2. The result of the Bayesian network analysis with the initial state is shown in Figure 6. The target node ("engaged") is directly connected with engaged keywords such as Word4, Word5, Word7, Word8, Word9 and Word17. Different from the original Bayesian network in Section 2, the edge between Word4 and the target is reversed and the edge between Word18 and the target is eliminated. The initial state leveraging the result of the Random forest analysis makes this difference. As described in the next section, the Bayesian network with the initial state coincides well with the actual search results on Twitter. Therefore, the Bayesian network in Figure 6 is considered to be adjusted for decision making with respect to the target node. RESULT IN EXAMPLE CASE For confirming the effectiveness of the adjusted Bayesian network in Figure 6, probabilistic inference for "Word8" and "Word18" is performed in the same way as in Section 2. The result is shown in Table 5. Figure 6. Result of Bayesian network with initial state. As described in Section 2, the actual search result on Twitter shows that "Word18" should not be used with "Word8". The original network falsely suggests that a tweet is more likely to be engaged with the product when "Word8" and "Word18" are used together. The adjusted network does not recommend this combination for improving the engaged ratio. Although it would be best if the value of Case 2 were much lower than that of Case 1 in the adjusted network, the adjusted one provides a better result than the original. In the same way, for the nodes which are directly connected to the target ("engaged") with the same directions, namely "Word5", "Word7", "Word8", "Word9" and "Word17", the comparison of the inference between the original and adjusted networks and the emerging ratio in actual searches on Twitter are shown in Table 6. The search conditions used for the "Emerging ratio on Twitter search" column are as follows, where "Word*" is "Word5", "Word7", "Word8", "Word9" or "Word17".
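The following R sketch gives one hedged reading of how the WS* comparison and the whitelist-based "initial state" could be realized, reusing rf, test and df from the previous sketches. wasserstein1d() is taken from the transport package (an assumption about packaging; the paper simply refers to the "wasserstein1d" library), and the edge directions in the whitelist below are placeholders, not the paper's actual Table 4 result.

library(transport)  # provides wasserstein1d()

# Predicted (learned) values of the target on the retained 20% of the data
pred <- predict(rf, test)

ws_star <- function(cond_var) {
  # Conditional distribution of "engaged" given cond_var = 1:
  # observed (true) vs predicted (learned), compared by 1st-order Wasserstein
  sel  <- test[[cond_var]] == "1"
  true <- as.numeric(as.character(test$engaged[sel]))
  lrnd <- as.numeric(as.character(pred[sel]))
  wasserstein1d(true, lrnd, p = 1)
}
ws_star("Word4")  # compare with the analogous score for the opposite direction

# Force the chosen initial edges via bnlearn's whitelist so that structure
# learning starts from the adjusted initial state (directions are placeholders)
library(bnlearn)
whitelist <- data.frame(from = c("Word4", "Word9", "engaged", "engaged"),
                        to   = c("engaged", "engaged", "Word5", "Word8"))
adjusted <- tabu(df, whitelist = whitelist)

bnlearn's whitelist argument forces the listed directed edges to be present in every candidate structure, which is one natural way to encode the paper's "initial state" within Tabu search.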
Condition 1, corresponding to the item "without Word18": the product and Word*. Condition 2, corresponding to the item "with Word18": the product, Word* and Word18. The actual search result shows that "Word18" is not appropriate for combined use with other words when making marketing messages engaged with the product. The Diff column in Table 6 for the adjusted network shows smaller values than that for the original network. That means the adjusted network recommends the use of "Word18" with other words less than the original one does. In this way, a Bayesian network gives suggestions on the combined use of multiple keywords through probabilistic inference. Furthermore, the adjusted Bayesian network leveraging the result of Random forest matches the actual data better than the original one in terms of decision making concerning a particular target node. CONCLUSION SNS has now become popular, and consumers spread impressions and evaluations of various products in their own keywords across the Internet. Therefore, it is necessary for makers to understand which keywords should be used on SNS for promoting their products. For extracting these keywords, several methods such as leveraging co-occurrence network analysis or Word2Vec have already been proposed. Even after applying these methods, there is still the issue of how to select the keywords actually to be used. As the budget for marketing and advertising is often limited in business, makers need to pick up especially effective keywords. For this purpose, the authors apply Bayesian network analysis. Bayesian network analysis has the advantage of visualizing relationships among keywords, which is useful for considering combinations of several keywords when promoting products. However, it sometimes occurs that, even when the network suggests keyword A and keyword B should be used together, the actual search result on SNS shows the opposite. In an example of this study, the network suggests that the probability of a tweet being related to the product becomes 80.8% when the keyword "dry" is used with "taste", increasing from 78.1% when "dry" is used alone. But the emerging ratio of tweets including both "dry" and "taste" on Twitter is 9.0%, much smaller than 58.0%, which is the case of solitary use of "dry". The result of the inference does not coincide with the actual data on SNS. This case includes particular relationships between the explained variable (a node which indicates whether a tweet is related to the product) and the explanatory variables (keywords such as "dry" or "taste"). But a Bayesian network usually handles all these nodes equally. The cause of the disagreement above would reside in this difference. The authors leverage Random forest to solve this issue. Random forest is a proven method for analyzing the influence of explanatory variables upon an explained variable. The authors propose to configure an initial state which consists of a few nodes around the explained variable and several edges between them. These nodes are extracted by measuring the mean decrease of the Gini coefficient calculated with the decision trees of the Random forest, because explanatory variables with much influence on the explained variable show a significant decrease of the coefficient. On the other hand, edges are designated based on likelihood, measured by the similarity of the conditional probability distributions between the actual data and the predicted result of the Random forest. The similarity is calculated with the Wasserstein metric. The initial state is given as a starting point for the construction of the Bayesian network. This is called the adjusted Bayesian network in this study.
As for the case including the keywords "dry" and "taste" above, the probability of a tweet being related to the product does not increase when they are combined, when applying the inference of the adjusted Bayesian network. That means the result of inference with the adjusted Bayesian network coincides well with the actual data on SNS. The authors compare other results between the original and adjusted networks, and the effectiveness of the adjusted network is confirmed. DISCUSSION FOR FUTURE WORK In this study there is only one target node to be considered. But in real business scenes, there are many situations in which several target nodes exist. For example, in product development management, several KGIs such as annual sales, market share and the number of patents should be considered simultaneously [21-24]. Further experiments are expected to apply the proposed method to cases with multiple target nodes. There are three patterns in the relationship between two target nodes, as shown in Figure 7. Pattern 1 in Figure 7 is the case in which the two targets are directly connected. In this case, one practical solution would be to merge these two targets into one node and apply the method in this study. Contrary to Pattern 1, the two nodes are separated by sufficiently many nodes in Pattern 2. In this case, the proposed method can be applied to each target node separately, because there is no concern that the initial conditions for each node affect each other. But in Pattern 3, in which the two targets share a few nodes as neighbors, some problems might occur. For example, in Pattern 3 of Figure 7, if the result of the Random forest analysis for target 1 suggests opposite directions for "NodeA" and "NodeE", a cyclic path including "NodeE", "target1", "NodeA", "NodeB", and "target2" is generated. In other cases, such as when there are edges from NodeE to both target1 and target2, the same situation might occur. In a Bayesian network, cycles are not allowed. Therefore, further adjustment will be required.
2020-09-03T09:09:46.242Z
2020-08-27T00:00:00.000
{ "year": 2020, "sha1": "1cec80fb7a88d391a6c1ec8ef60034131ae7c70a", "oa_license": null, "oa_url": "http://www.sciedupress.com/journal/index.php/air/article/download/18011/11527", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1a087ca3e7ea0d63acfe326c7407f1f2f0e11d1b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
253501157
pes2o/s2orc
v3-fos-license
The evolution of regional entrepreneurship policies: "no one size fits all" In the last two decades, entrepreneurship policies have gone through a radical transformation in many parts of the world. New theoretical and empirical approaches have helped to identify better the drivers of entrepreneurial creation, the main actors in the process, and the significant contribution of entrepreneurship to socio-economic prosperity. One of the main conclusions of these new theoretical and empirical approaches is that the drivers and outcomes of entrepreneurship are heavily shaped by place. There is no single ideal entrepreneurship policy formula because entrepreneurial mechanisms take a different form depending on different places. However, concepts such as path dependency, industrial ecology and heritage, connectivity, culture, and intra- and interregional knowledge spillovers are all linked in different ways with regional entrepreneurship in general and the Entrepreneurial Ecosystems literature. This paper discusses the impacts of these different influences on the evolution of modern entrepreneurship policies, examines what the current evidence points to, and identifies areas for further consideration. Examples will be drawn from different countries and regions. On the basis of the evidence reviewed, the paper contends that both conceptual and policy-thinking regarding the relationships between entrepreneurship and place are increasingly shifting to the challenges facing less successful regions, even though the current approaches are heavily based on the insights of successful places. Introduction In recent decades, entrepreneurship policies have gone through a radical transformation across the world. From traditional perspectives based on subsidising startups, the emphasis has shifted in current perspectives towards creating, maintaining, and improving established worldwide value chains (McCann and Ortega-Argilés 2016a; McCann and Ortega-Argilés 2019). New theoretical and empirical approaches have helped identify better the drivers of entrepreneurial creation, the main actors in the process and their significant contributions to socio-economic prosperity. One of the main conclusions of these new theoretical and empirical approaches is that entrepreneurship is heavily determined by place (Acs et al. 2015; Audretsch 2015; Fritsch and Mueller 2004). A single ideal entrepreneurship formula does not exist because entrepreneurship takes a different shape depending on the regional basis. Regions differ in many aspects. Some differences can be found in whether they are rural, peri-urban or urban; natural resources-based or not; in terms of their degrees of fiscal decentralisation; whether they are lagging or innovation-driven; and also on the way that regions respond to changes and their capacity for resilience when confronting grand challenges such as globalisation, automation or ageing societies (Barca et al. 2012; Iammarino et al. 2019; Prenzel et al. 2018; Terzidis and Ortega-Argiles 2021). Concepts such as path dependency, embeddedness, industrial ecology and heritage, connectivity, culture and local norms, and interregional knowledge spillovers all play an essential role in how we think about entrepreneurship in its local context and how we can best construct meaningful policy frameworks for industrial upgrading and resilience (Rocchetta et al. 2021; Neffke 2011).
The importance of thinking about the drivers and inhibitors of entrepreneurship at the regional level is highlighted by the nature and scale of the fundamental changes in the global economy. During the last three decades, the world economy has changed almost out of recognition, and new opportunities and challenges have emerged and affected regions differently (Barca et al. 2012; Iammarino et al. 2019; Terzidis and Ortega-Argiles 2021). Technological changes such as advances in telecommunications, automation, 3D printing and the 'internet of things' have reduced the need for prior scale economies in many industries, thereby potentially opening up new opportunities for new and smaller firms. While this is true in manufacturing, this is also the case in service industries, as witnessed by the rise in innovative business models such as Uber and Airbnb. At the same time, the globalisation and fragmentation of production processes have created opportunities for new forms of local economic specialisation to respond to consumer demand. In particular, these changes may make it easier for groups of related small and medium-sized enterprises (SMEs) to respond to modern consumer demands for more personalised goods and services in situations which may be unattractive for large corporations. In particular, increasing disposable incomes have created demands for high-quality products, and there may be new opportunities for startup SMEs to capitalise on new niche markets. In order to best help firms respond to these opportunities, governments in many countries have implemented new policy measures and instruments based on which they hope to increase economic activities and hold on to comparative advantages. In particular, given the importance of entrepreneurship for local economic growth and development, governments increasingly opt to use public policy to help make places more entrepreneurial (Reynolds 1999; Zacharakis et al. 2000; Murdock 2012). However, this policy shift also reflects more fundamental shifts in both analytical and empirical approaches to entrepreneurship which have taken place in recent years, and it is not without criticism (Parker 2007; Shane 2009). Therefore, understanding these shifts is essential if we are to make sense of the changing policy landscape. This paper will discuss how our understanding of the importance of the local context in shaping entrepreneurship has evolved and how our growing understanding has reshaped the entrepreneurship policy frameworks that are being developed. We will examine different policy approaches emerging in different parts of the world and identify the core elements which shape these various approaches. The rest of the paper is structured as follows. The following section discusses the relationship between entrepreneurship and regional development in general and in the context of regional entrepreneurship ecosystems. The third section discusses the traditional and evolving basis for entrepreneurship policy, including market failures and governance issues. The fourth section provides a brief discussion of the experience of entrepreneurship policy in the US and the EU, and the fifth section concludes. Entrepreneurship and regional development Entrepreneurship is the process of establishing and expanding a new business. Entrepreneurship is a process composed of different activities and social phenomena emerging within a broader society (Lundström and Stevenson 2005). Following Joseph A.
Schumpeter, who conceived of the entrepreneurial venture as "the fundamental engine that sets and keeps the capitalist engine in motion" by creating new goods, inventing new production methods, devising new business models, and opening new markets (Schumpeter 1942, 83). Since the late 1990s, a wide-ranging literature has considered entrepreneurship to be the driver of prosperity (Birch 1987; Brock and Evans 1989; Carree et al. 2002; Carree and Thurik 2003; Harper 2003; Coyne and Leeson 2004; Audretsch 2006; Audretsch et al. 2006; Gilbert et al. 2006; Baumol and Strom 2007) and a key factor of economic development (Holcombe 2007; Naudé 2010; Brown and Thornton 2013; Valliere 2016; Jia 2018). Robust entrepreneurial societies will be characterised by low unemployment rates, as more businesses are created to take advantage of potential profit opportunities. In the long run, the same societies will generally have a more developed economy, with more (and more complex combinations of) physical capital and higher levels of investment in human capital, with a population that typically will be richer than otherwise (Lucas et al. 2018). Within the literature on regional development, there has been a clear focus on determining the mechanisms underlying the promotion of local entrepreneurial capital and its potential knowledge spillover processes (Audretsch and Keilbach 2004; Acs et al. 2009; Welter 2011; Varga et al. 2018). The evidence suggests that these impacts are dependent variously on the levels of economic development (Van Stel et al. 2006); the interregional disparities within countries (Verheul et al. 2001; Porter 2003); the national institutional arrangements and the social payoff structure (Baumol 1990); and the ability of regions to turn knowledge into regional growth through the creation and dissemination of knowledge (Audretsch and Keilbach 2004; Acs et al. 2009). Figure 1 demonstrates that all countries display interregional differences in the survival rate of new firm startups. In particular, countries with higher rates of new firm survival also tend to display higher interregional differences in the same rates. These differences are vast in some countries, whereas in other countries they are minimal. Specifically, interregional differences in startup survival rates are very high in the UK, followed by Romania. A group of countries including Austria, Bulgaria, Czech Republic, Denmark, France and Italy displays interregional differences in the survival rates of firm startups which are less than one-third of those displayed by the UK. In contrast, the interregional differences in Spain, Finland, Hungary, South Korea, Norway, Portugal, and Slovakia are less than one-quarter of those evident in the UK. We know that entrepreneurship has an increasing role in explaining economic growth, productivity, employment and competitiveness (Carree and Thurik 2003; Acs and Armington 2006; Braunerhjelm et al. 2010; Audretsch et al. 2015; Varga et al. 2018); the creation of employment and wealth (Fritsch and Mueller 2004; Mueller et al. 2006; Malchow-Møller et al. 2011; Varga et al. 2018); and economic dynamism and the innovation landscape of locations (Acs and Audretsch 2005; OECD 2013). In terms of regional development, the available evidence strongly indicates that entrepreneurship potentially has short-term and, more importantly, medium- and long-term consequences for regions.
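Returning to the interregional survival-rate differences shown in Figure 1, the following is a minimal sketch of the kind of dispersion comparison that underlies that figure. The miniature dataset and its column names are invented for illustration and are not the figure's actual data.

```python
# Summarise interregional differences in new-firm survival rates per country
# as the spread (max - min) across its regions. All values below are made up.
import pandas as pd

rates = pd.DataFrame({
    "country": ["UK", "UK", "UK", "ES", "ES", "ES"],
    "region": ["London", "Wales", "North East", "Madrid", "Galicia", "Murcia"],
    "survival_rate": [0.52, 0.33, 0.30, 0.44, 0.41, 0.40],
})

spread = (rates.groupby("country")["survival_rate"]
          .agg(lambda s: s.max() - s.min())
          .sort_values(ascending=False))
print(spread)  # larger value = bigger interregional differences
```

On these toy numbers, the UK-style country shows a spread several times larger than the Spain-style one, which is the kind of contrast the text describes.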
From Figure 1, the differences in the scale of these interregional startup survival rates in some countries suggest that the medium- and long-term regional development trajectories may be very different in these cases. In contrast, in more equal countries, these differences will be minor. Entrepreneurship is so important in influencing each of these critical regional economic dimensions because of its role in reshaping the composition of the regional industrial base and the emergence of new economic structures (Acs and Varga 2005; Feldman and Audretsch 1999; Mueller 2004, 2007). New entrants seem to be important catalysts of technological innovation, even when they prove to be business failures, as they often do (Scherer 1992; Utterback 1994) because, as we know from Schumpeterian thinking, failure is also a key component of innovation. These new entrants help to reshape the existing patterns of industrial agglomeration and diversification, industrial relatedness, and regional market selection processes (Audretsch and Thurik 2001, 2004; van der Panne 2004; Frenken and Boschma 2007), in ways that are closer to current market trajectories. However, although entrepreneurship can foster local technological and structural change, the evidence suggests that the degree to which new entrants can prosper and effectively reshape and modernise regional economies varies significantly between places. In terms of new firm formation, the key differences here relate to three broad themes: location, human capital and innovation, and the state's role. Over the last thirty years, the rapid developments in economic geography and urban and regional economics, especially since the seminal works of Krugman (1991a, b), Porter (1990), Scott (1988) and Glaeser et al. (1992), have transformed our understanding of the role played by location in shaping economic development. In particular, the role of spatial externalities in explaining the entrepreneurial performance of places is linked with the concept of agglomeration economies. Due to their economic structure and history, certain types of places offer key advantages for entrepreneurs, such as access to finance, access to key knowledge networks and, in today's digital world, even better infrastructures (Welter et al. 2008; Goldfarb and Tucker 2019; Cusumano et al. 2019). Aspects related to population growth and density and the size and market potential of the region (Modrego et al. 2014) will determine the diffusion of externalities from entrepreneurship. This is especially so if these places also offer clustering advantages associated with job-matching and the sharing of inputs. Nevertheless, as well as scale, we also know that knowledge diffusion is heavily determined by industrial specialisation, with more structurally diverse regions tending to be more conducive to entrepreneurial startups. Diversified regions offer more varied sets of knowledge networks, thereby allowing entrepreneurs access to a broader array of knowledge domains and sources (Bishop 2012; Colombelli 2016; Guo et al. 2016; Tavassoli and Jienwatchamaramongkhol 2016; Basile et al. 2017; Fritsch and Kublina 2018; Content et al. 2019; Ejdemo and Ortqvist 2020). Also, such places tend to exhibit higher degrees of bridging social capital (Putnam 2000), such that the spillovers of specialist knowledge from one arena to another tend to be more easily facilitated in these types of contexts.
The result is that, in terms of the contribution of entrepreneurship to regional growth, differences in the role of sectoral composition tend to persist over time (Audretsch and Fritsch 2002; Fritsch and Schmude 2006). Another aspect of places that is seen as essential for fostering entrepreneurship is human capital, and especially the cultures of creativity and innovation that some places exhibit. As Florida (2002) and other authors have demonstrated, the attraction of human capital to a region and the facilitation of clustering and networking significantly contribute to the level of entrepreneurship in places, due to the potential knowledge spillovers that arise from their presence in local economies. Empirical analyses have typically proxied the role of human capital by indicators of educational achievement, allied with the presence of universities and research centres. There is strong evidence that higher education positively impacts local high-growth entrepreneurial activities (Autio 2005). Moreover, university expenditures on R&D typically have a noticeable impact on new firm formation in regions surrounding the universities and research institutions, especially for 'technology-oriented firms' (Kirchhoff et al. 2002). On the other hand, while higher education generally improves the performance of the self-employed, it can also reduce the quantity of self-employment (Reynolds et al. 1994; Burke et al. 2000). At the same time, in recent years there have also been significant efforts to develop more subtle and nuanced indicators of creativity and innovation-orientation, beyond only educational scores or institutional research expenditure, which better reflect the complex relationships between different types of human capital, research activities and highly creative entrepreneurship. Locations with substantial, diversified and high-quality human capital also tend, in general, to have a solid local financial sector conducive to entrepreneurship. However, the availability of startup and entrepreneurial credit tends to differ markedly between places. In particular, the presence of a buoyant local venture capital market will tend to directly affect the success of a local, regional system of entrepreneurship (Szerb et al. 2020) and of digital entrepreneurship (Giones and Brem 2017). In general, given the right institutional environment, entrepreneurship is seen as the fundamental force driving economic performance (Baumol 1990). Entrepreneurial attitudes, startups' characteristics and new firm performance are all influenced by the institutional and macroeconomic context, which itself tends to display a stickiness or persistence (Andersson and Koster 2011; Andersson et al. 2011; Wyrwich 2014, 2017; Fritsch et al. 2019). Young firms' post-entry performance differs between countries because of different market, institutional and regulatory mechanisms, and differences in labour and product markets (Aghion et al. 2005; Audretsch and Keilbach 2007, 2008; Welter 2011). Typically, more demanding regulatory environments tend to reduce the levels of entrepreneurship (Djankov et al. 2002; Capelleras et al. 2008; Van Stel et al. 2006). At the same time, in terms of the processes of firm creation, access to finance, the quality and quantity of human capital, and proximity to scientific and technological infrastructures all play longstanding and fundamental roles (Boschma and Lambooy 1999; Okamuro and Kobayashi 2005).
Governance issues are also nowadays increasingly understood as critical. Nevertheless, the nature and quality of the governance arrangements conducive to both entrepreneurship and good entrepreneurship policies may differ between contexts, depending on the overall national governance arrangements. Environmental characteristics such as institutions and their evolution over time (Scott 1995; Kostova 1997) and the ability to maximise regional competitive potential by producing suitable institutional capacity are all argued to be critical for regional development (Amin 1999). In the entrepreneurship arena, the role of public policy is influenced variously by the size and structure of the governance and institutional system, and the quality of multi-level government approaches to entrepreneurship. In terms of regional entrepreneurship policy, what is possible depends on the sub-national governance structure. In general, more devolved and decentralised governance systems will allow for more locally tailored entrepreneurial policy approaches. However, the performance of sub-national devolution processes is also conditional on the types of regulations, bureaucracy and administrative procedures in place, and on the levels of transparency, accountability or corruption in a region or country. Regions differ both within as well as between countries. Therefore, when discussing either entrepreneurship or entrepreneurship policy at the sub-national level, this interregional heterogeneity makes the discussion more complex than similar discussions purely at the national level. Both entrepreneurship and entrepreneurship policy are heavily influenced simultaneously by multiple interacting factors of a local, national, or even international nature. Therefore, a holistic and systemic understanding of these factors and interactions is required to build up a comprehensive and detailed contextual picture of the regional drivers and inhibitors of entrepreneurship and the likely optimal entrepreneurship policy responses. A growing focus for policymakers in emerging and developing economies is the promotion of Entrepreneurial Ecosystems (EE) (Isenberg 2010; Mason and Brown 2014; Hechavarria and Ingram 2014; Kenney and Von Burg 1999; Audretsch and Belitski 2017; Brown and Mason 2017; Roundy et al. 2018; Stam 2015). Many scholars have contributed to explaining the development of entrepreneurship in a geography and its interactions; more recently, however, a new sub-discipline in the literature has developed around the concept, functioning and evolution of EEs. This literature strand also contributes to positioning the concept in a much wider 'Geography of Entrepreneurship' literature. EEs are understood as the interrelated set of actors, organisations, resources and values that generate and support local or regional entrepreneurial activities (Roundy and Fayard 2019). Entrepreneurial activity is not developed in isolation; on the contrary, it depends on its historical, temporal, institutional, spatial and social contexts (Welter 2011), including aspects such as infrastructures, social and cultural values and norms, a system of providers and customers, human capital, learning opportunities, as well as policies and institutions, including financial ones (business angels, seed and venture capital, stock markets and crowdfunding).
This strand of the literature is based on concepts such as the ecological perspective on entrepreneurship (Aldrich 1979; Hannan and Freeman 1977), entrepreneurial embeddedness and local environment dependence (Aldrich and Martinez 2001; Smith and Stevens 2010), and entrepreneurial dynamic capabilities (Roundy and Fayard 2019). Authors also place an equally important emphasis on the earlier literature on the roles of the environment (Dubini 1989) and "infrastructure" (Van de Ven 1993) in supporting entrepreneurship. EEs have been subject to criticism (Stam 2015; Roundy et al. 2018) because their illustrations have typically tended to focus on thriving and high-profile places such as Silicon Valley (Kenney and Von Burg 1999) or Edinburgh (Spigel 2016), sometimes treating places in isolation (Welter et al. 2017; Roundy 2019) or providing static approaches that do not consider their evolution. New contributions have focused on these gaps, addressing theoretical approaches (Roundy and Fayard 2019), life cycles (Mack and Mayer 2016), ecosystem performance (Stam 2018) and evaluation (Szerb et al. 2020; Varga et al. 2020), and other types of places (Roundy 2019). EEs have also been adapted to the context of the digital economy and the 4th Industrial Revolution, with new contributions extending the scope and actions of EEs by concentrating on the concept of Digital Entrepreneurship Ecosystems (Spigel 2016; Nambisan et al. 2017; Sussan and Acs 2017; Du et al. 2018; Elia et al. 2020). This research frames the DEE as a collective intelligence system, and thus a virtually global and context-independent system able to favour interaction between people and machines and the creation of digital startups, considering technology not only as an "input" (Giones and Brem 2017) but also as an "enabling" factor (Sussan and Acs 2017). Some approaches, considering the role of emerging digital technologies or automation, such as the Internet of Things or 5G, question the role of place and its influence on the nature of, and interactions amongst, entrepreneurial acts and digital agents. However, evidence shows that these digital startups tend to co-locate geographically in a limited number of places, attracted by the availability of talent and other inputs, thereby creating potential situations of digital dependency and digital exclusion for other locations, which may eventually create even higher intra-regional and interregional differences, strengthening the call for place-based measures (Goldfarb and Trefler 2018; Klinger et al. 2018). Previous waves of technological breakthroughs have shown that new technologies do not spread evenly across space and result in a variety of outcomes across regions. A common lesson from past industrial revolutions is that preparations to benefit from new trends need to start early, because regions with a more educated and skilled workforce are those best placed to reap the benefits of new opportunities (OECD 2020). Therefore, understanding the requirements of digitally centred entrepreneurship ecosystems and their comparison and interaction with place-based ones seems important for addressing issues of inclusiveness and resilience when thinking about the deployment of infrastructures or services to support entrepreneurship (Elia et al. 2020). Such a systemic and holistic picture of EEs is provided, for example, by the Regional Entrepreneurial Development Index (REDI) methodology (Szerb et al.
2014, 2020), which integrates multiple economic, psychological, social and institutional influences on local entrepreneurship, and ranks these influences in terms of their importance in shaping the local entrepreneurial context. The REDI approach captures the dynamics of the overall regional entrepreneurial ecosystem and shows how the different factors and influences on entrepreneurship affect each other. All regions are shown to be different, with different drivers and inhibitors dominating in different contexts (Szerb et al. 2014, 2020; Acs et al. 2015). However, although the REDI index has a holistic and systemic analytical and empirical approach, the critical issue for this approach is identifying the systemic weaknesses or bottlenecks. A system is only as strong as its weakest link. An example of the application of this methodology is the case of Spanish regions and their differing entrepreneurial capabilities. Based on the sub-national application of the Global Entrepreneurship and Development Index (GEDI) for Spain (Acs et al. 2015), Table 1 illustrates the grouping of regions in Spain according to their conditions for entrepreneurship. Three main groups can be drawn. The group of Leader Entrepreneurial Spanish regions includes Madrid, Cataluña, País Vasco, Asturias and Navarra. This group comprises mainly urbanised regions with high levels of income, high innovation ratios and higher shares of educated population compared with average Spanish regional levels. The group of Average Entrepreneurial Spanish regions contains the regions of Aragon, La Rioja, Comunidad Valenciana, Galicia, Castilla Leon and Canarias. Average Entrepreneurial regions are characterised by entrepreneurial environments which are similar to the overall Spanish national averages. Finally, the group of Lagging or Low Entrepreneurial regions contains the rest: Andalucía, Baleares, Cantabria, Murcia, Castilla La Mancha and Extremadura. These regions display lower income levels than the Spanish average, with high unemployment rates, especially amongst the youth, with economies primarily based on agriculture and a lower rate of innovative companies or patents. These regions also have lower shares of their workforce with higher education. These Spanish regions are each seen to display different strengths and weaknesses, not only between the broad groupings but also within them, in terms of their attitudes towards entrepreneurship, industrial structure, quality of governance, knowledge and research base and labour market skills, amongst others. These types of data are crucial for helping policymakers decide their priorities and optimal responses to their local challenges. Policymakers may decide to strengthen existing assets and areas of strength, or alternatively, they may decide to reduce weaknesses in certain key areas. However, the systemic approach championed by the REDI holds that the regional entrepreneurial ecosystem is only as strong as its weakest link. Therefore, this holistic, systemic approach shines a spotlight on the critical areas for improvement, which, if appropriately tailored policy interventions are successful, will most improve the whole local entrepreneurial ecosystem.
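To illustrate the weakest-link logic behind REDI-style indices, here is a minimal sketch of a penalty-for-bottleneck adjustment of normalised pillar scores. The exponential form below is one variant published in the GEDI/REDI literature; the pillar values are invented, and this is a sketch of the idea rather than the official REDI computation.

```python
# Penalty-for-bottleneck: every pillar score is pulled down towards the
# weakest pillar, so the overall index rewards balanced ecosystems.
import numpy as np

def penalty_for_bottleneck(pillars: np.ndarray) -> np.ndarray:
    """Adjust normalised pillar scores (in [0, 1]) towards the weakest one."""
    y_min = pillars.min()
    # Pillars equal to the minimum stay unchanged; stronger pillars are
    # compressed towards the bottleneck rather than counted at face value.
    return y_min + (1.0 - np.exp(-(pillars - y_min)))

unbalanced = np.array([0.9, 0.8, 0.2])   # strong on two pillars, one bottleneck
balanced = np.array([0.63, 0.63, 0.64])  # same raw average, no bottleneck

print(penalty_for_bottleneck(unbalanced).mean())  # ~0.52, dragged down
print(penalty_for_bottleneck(balanced).mean())    # ~0.63, nearly unchanged
```

The point of the adjustment is visible in the output: although both toy regions have the same raw average score, the unbalanced region's adjusted index is dragged down towards its bottleneck pillar, while the balanced region is left essentially unchanged.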
Indeed, the holistic REDI type of methodology reflects a general shift in entrepreneurship research, which increasingly emphasises the multidimensional nature of entrepreneurship and increasingly involves research that is both quantitative and qualitative and at the intersections between different methodologies. This broad methodological basis also provides for much richer ways of contemporary thinking about entrepreneurship policies than previously.

Entrepreneurship policy

Entrepreneurship policy is a concept and a phrase whose time seems to have come. Although it was rarely used in the past, it has begun to achieve importance, particularly in Europe. Nevertheless, entrepreneurship policy has evolved gradually out of industrial policy, as modern thinking about these issues has evolved (previous reviews can be found in Gilbert et al. 2004; Stenberg 2009; Thurik et al. 2013). From the 1980s onwards, traditional industrial policy, which at times had a protectionist flavour to it, increasingly aimed to promote competition while balancing the interests of the market and producers via regulatory measures. However, the entrepreneurial sector remained largely excluded from this arena (Minniti 2008). As the economy shifted towards a more knowledge-based and service-oriented composition, the smaller and more flexible entrepreneurial firms gained new importance (Audretsch and Thurik 2001, 2004). Since the 1990s, industrial policies have therefore undergone a radical change. As a result, a new set of interventions designed to promote entrepreneurial activities has emerged, focussing on improving the business environment for risk-taking (Link 2007; Lundstrom and Stevenson 2001, 2005; Minniti 2008). The types of intervention priorities associated with new entrepreneurship policies are outlined in Table 2 and, in each case, are compared with the respective managed economy priorities in traditional pre-1990s industrial policies.

Table 2 Managed economy priorities of traditional industrial policy versus the priorities of entrepreneurship policy:
- Regulation (antitrust, competition policy, regulation and public ownership) versus stimulation: regulating or constraining the activities of the existing large, powerful corporations and providing protection for workers, versus shifting towards more knowledge-based activities and industries by creating a stimulating environment that supports the activities of newer and smaller firms.
- Targeting output (creating higher demand for existing products) versus targeting input (mainly knowledge-based inputs): realising specific outputs for known markets to maintain a comparative advantage, and picking winners (selected industries or firms targeted and supported as national priorities), versus growth policies targeted at creating inputs for value creation, especially the creation and commercialisation of new knowledge and its externalities as a source of competitive advantage for new firms.
- National policy versus local policy: motivated by the desire to prevent special interests from having undue influence on the national economic agenda, versus policy initiatives developed at the local level, influenced by local conditions and needs, that should result in policies that better support the creation and exploitation of opportunities.
- Low-risk capital versus risk capital: easy liquidity to existing companies with investment in tangible assets, versus venture capital, private equity, startup finance and angel capital.

Yet, entrepreneurship ventures are not the same as small businesses. Even within the possible portfolio of new industrial policies, entrepreneurship policy is quite distinct from small business policy or innovation policy. More specifically, entrepreneurship policy tries to encourage entrepreneurs to rearrange economic resources into what they perceive will be more valuable and more productive uses, so as to contribute to a more dynamic, creative and growing economy (Baumol 1990). The result is that nowadays, various areas of innovation policy in many countries are increasingly moving their emphasis away from the support of SMEs towards the support of entrepreneurship (Henrekson and Stenkula 2009). Entrepreneurship policy tries to foster a socially optimal level of business venturing by raising the level of entrepreneurship both amongst actual entrepreneurs and amongst the 'nascent' entrepreneurs who are seriously considering starting a firm (Reynolds et al. 2000). However, the notion that entrepreneurship and business venturing should be an explicit focus of policy design, choice, and implementation (McCann and Ortega-Argilés 2016a, b) is a relatively recent policy development, termed 'the entrepreneurial turn' (Cox and Rigby 2012). Fundamentally, entrepreneurship policy should be aimed at solving market and systemic failures, and most of the entrepreneurial market failure theories centre on correcting for informational asymmetries and externalities (Tuszynski and Stansel 2018). Informational asymmetries can result in adverse selection, which disincentivises risk-taking, while the existence of positive externalities means that entrepreneurs cannot capture the full benefits generated by their risk-taking. Entrepreneurship policy scholars appear to have broadly reached a consensus that these sources of market failure are a pervasive phenomenon, thereby underpinning the case for government intervention in areas such as venture capital markets, knowledge commercialisation, R&D and skill-upgrading efforts, and clustering (Audretsch et al. 2007). Entrepreneurial policies have often been classified as either 'hard' or 'soft' (Storey 2005). Hard policies usually assist in the form of finance (loans and grants). Soft measures, meanwhile, include counselling activities for entrepreneurs before business startup, counselling at the startup phase, facilitating financial assistance, enhancing technology and access to technology, and improving access to physical infrastructure or advice after the start. As seen in Table 3, entrepreneurship policy results from a series of policy interventions to solve market failure situations, mainly around information and coordination externalities. In order to tackle these information or knowledge externalities, entrepreneurship policy interventions increasingly centre on the rewarding of entrepreneurs who discover new domains and the provision of incentives for non-traditional sectors, such as prizes for inventions, fiscal incentives or innovation vouchers. Other standard policy initiatives include creating platforms and mechanisms for facilitating intra-regional and interregional interactions, creating SME support organisations, demonstration projects, technology extension services, cluster creation programmes or technology banks.
These initiatives have in common the need to coordinate the use of knowledge-related investments between knowledge-related actors and their associated decision-making processes. Even if increasing in popularity, entrepreneurship policy has also been criticised as becoming excessively biased towards high-growth or high-tech firms, and its flagship examples have been accused of being impossible to replicate elsewhere (Parker 2007; Shane 2009). In some cases, entrepreneurship policy focuses on actors that need relatively more stimulation or support for business venturing, such as female, migrant or senior entrepreneurs, to support a broader inclusive economic growth agenda (Dutz et al. 2000). Entrepreneurship policy is also sometimes targeted at improving enterprise rates in disadvantaged areas with low rates of entrepreneurship and at groups underrepresented in business ownership (Dutz et al. 2000). These targeted policies involve a range of 'soft' interventions aimed at facilitating access to essential business support.

Table 3 Entrepreneurship policy: market failures and policy interventions (source: own elaboration adapted from OECD (2013) and Rodrik (2014)), with columns for the market failure, the policy intervention, and examples of existing and new policies/initiatives:
- Information externalities (low "self-discovery" activity; low information exchange flows; lack of intra- and interregional interactions that restrict knowledge spillovers). Interventions: incentives to reward entrepreneurs who discover new domains; incentives to involve non-traditional actors; creation of platforms and mechanisms to facilitate intra- and interregional interactions; public policies can assist this process further by providing key infrastructures (e.g. information about emerging technologies and commercial opportunities and constraints, product and process safety standards for domestic and export markets, and external sources of finance). Examples: prizes for inventions and discoveries, fiscal incentives, IPRs; incentives for public sector innovation (e.g. public procurement); public web consultations; regional workshops; innovation vouchers; internationalisation support services.
- Coordination externalities (low "self-discovery" activity due to the high fixed costs and large-scale investments required by some projects; prevention of emerging trends for regional economic growth).

Table 4 provides examples of these policy interventions that enhance entrepreneurship amongst disadvantaged people and places.

Case studies of entrepreneurship policy

It is possible to get a good sense of how entrepreneurship policies are being developed and delivered by looking at the US and the European Union cases. In the US, policymakers alone have devoted billions of dollars to targeted entrepreneurship policies over the past half-century (Lucas et al. 2018). Yet the context in which this takes place is fundamentally shaped by three initiatives, namely the Bayh-Dole Act (1980), the Small Business Innovation Research Programme (SBIR) (1982), and the State Small Business Credit Initiative (SSBCI) under the Small Business Jobs Act (2010). The 1980 Bayh-Dole Act authorises the Department of Commerce to create standard patent rights clauses to be included in federal funding agreements with non-profit organisations, including universities and small businesses (Mowery 2005). The key change over the previous set-up introduced by Bayh-Dole concerned the ownership of inventions made with federal funding.
Before the US Bayh-Dole Act, federal research funding contracts and grants obligated inventors, wherever they worked, to assign inventions and intellectual property made using federal funding to the federal government, whereas Bayh-Dole permits a university, small business, or non-profit institution to elect to pursue ownership of an invention in preference to the government. As a result, the Bayh-Dole Act significantly increased the incentives for universities, small businesses and non-profit institutions to engage in research with commercial potential. Soon after, in 1982, the US Small Business Innovation Research Programme (SBIR; https://www.sbir.gov/about/about-sbir) was established by the US Small Business Innovation Development Act of 1982 (Public Law 97-219). The SBIR programme mandate, as stated in the 1982 Act, was to: (1) promote technological innovation; (2) enhance the commercialisation of new ideas emanating from scientific research; (3) increase the role of small business in meeting the needs of federal research and development; and (4) foster and encourage participation by minority and disadvantaged persons in innovative activity. When the 1982 Act was reauthorised in 1992 through the Small Business Research and Development Enactment Act (Public Law 102-564), the language of purpose (4) was modified and broadened to focus on women as well as disadvantaged persons: "to provide for enhanced outreach efforts to increase the participation of socially and economically disadvantaged small business concerns, and the participation of small businesses that are 51 percent owned and controlled by women" (Audretsch et al. 2019). Each government agency with an SBIR programme is currently required to set aside and allocate 3.2 per cent of its extramural research budget to US small firms with fewer than 500 employees. The US State Small Business Credit Initiative (SSBCI; https://www.treasury.gov/resource-center/sb-programs/Pages/ssbci.aspx) was created via the Small Business Jobs Act of 2010. This initiative was funded by the US government with $1.5 billion to strengthen state programmes that support the financing of small businesses in places. The US Treasury awarded funding to all but three US states and territories, and to municipalities in three states, based on their proportion of unemployed persons as a percentage of the national total. Participating states were required to fund new or existing state programmes under the categories: Capital Access Programme (CAP), Collateral Support Programme, Loan Guarantee Programme, Loan Participation Programme, or Venture Capital Programme. In terms of investment returns, the broad remit of the policy was that actions should be initiated where states and territories had a reasonable expectation of a tenfold leveraging of new business financing. These various initiatives have changed the overall US entrepreneurial climate. Evaluations of the SBIR programme have been broadly positive (Cooper 2003; Wessner 2008; Audretsch et al. 2019), in that it has facilitated entrepreneurship, innovation and employment growth and contributed to the economic performance of cities, states and regions. In particular, firms in receipt of SBIR funding tend to exhibit more innovative activity and stronger growth and survival. Recently, Audretsch et al.
(2019) show that the impact of the SBIR programme goes beyond simply providing financial resources for R&D to entrepreneurs and small firms, in that the programme has significantly contributed to strengthening the relationships between the private sector and the academic sector. In contrast, recent evaluations of the US State Small Business Credit Initiative tend to show rather mixed results regarding its public venture capital programmes (Brander et al. 2015; Tuszynski and Stansel 2018). However, given that new technology innovations often take at least a decade to be developed, it may be that it is still too early to identify the effectiveness of this policy. In the case of European Union policy, entrepreneurship as a policy priority began to seriously emerge as part of the Lisbon Agenda (European Council 2000), while the links between entrepreneurship and regional development policy were articulated in the reforms to EU Cohesion Policy 2014-2020, as part of the Europe 2020 agenda. The Entrepreneurship 2020 Action Plan was devised, new financial instruments JEREMIE and JESSICA were articulated through the European Investment Bank, and the 'smart specialisation' agenda, which is a central plank of the reforms, was enshrined in the EU Cohesion Policy programming regulations. The Entrepreneurship 2020 Action Plan (European Economic and Social Committee 2020) is built on three main pillars, namely: entrepreneurial education and training; strengthening framework conditions for entrepreneurs by removing existing structural barriers and supporting them at different stages of their business lifecycle; and dynamising the culture of entrepreneurship in Europe by nurturing the new generation of entrepreneurs, including reaching out to specific groups whose entrepreneurial potential is not being tapped to its fullest extent. In the case of entrepreneurship and regional development, all entrepreneurship and SME-related actions and interventions arising specifically from Cohesion Policy operate under the Thematic Objective of the Cohesion Policy Operational Programmes 2014-2020, "Enhancing the Competitiveness of Small and Medium Enterprises (SMEs)". The place-based logic underlying the EU smart specialisation policy prioritisation framework (Foray et al. 2015; McCann and Ortega-Argilés 2015) and how it fits into the reforms of EU Cohesion Policy (Barca et al. 2012) have been discussed in detail elsewhere (McCann and Ortega-Argilés 2016a, b). These reforms were the result of a series of publications in 2009 and 2010 about regional development policy intervention by the World Bank (2009), the European Commission (Barca 2009), the OECD (2009a, b) and the Corporación Andina de Fomento (CAF 2010), together with the Sapir et al. (2004) report. These reports called for a change and adaptation of development policies due to significant changes in city and regional performance as a result of the divergent effects of globalisation, bringing back the importance of aspects such as human capital and innovation (endogenous growth theories), agglomeration and distance (new economic geography), and institutions (institutional economics) and, in sum, the role of space. Globalisation has made localities and their interactions more important for their economic growth and prosperity (Garcilazo et al. 2010; Rodríguez-Pose 2011); therefore, place-based and place-sensitive approaches are argued to be the way forward to adapt places to their new realities (Iammarino et al. 2019).
Place-based policies also call for an essential role for policy adaptation and experimentation (Rodrik 2014). For our purposes, what is important is that enhancing entrepreneurial search and entrepreneurial discovery processes (Hausmann and Rodrik 2003) is central to the smart specialisation approach (Szerb et al. 2020). Smart Specialisation provides a collection of tools and concepts to help regions identify relevant domains at the right level of granularity and implement an action plan within each domain (Foray et al. 2015; McCann and Ortega-Argilés 2016b). Many of these processes are based on upgrading the value chain of activities embedded in the region by diversifying into technologically related sectors and strengthening regional capabilities while boosting innovation-led growth (McCann and Ortega-Argilés 2015). Importantly, Smart Specialisation links closely to the wider developments in entrepreneurship policy being advocated in many countries. In recent years, major contributions to the agenda have been made around the role of regional branching (Boschma and Gianelle 2014), as well as the development of indicators to evaluate the smart specialisation agenda (Boschma 2017; Colombelli et al. 2014; Montresor and Quatraro 2017; Santoalha 2019). Smart Specialisation has been described as a check-and-update, test-and-recast exercise, with a clear emphasis on monitoring and evaluation (Kyriakou 2017; McCann and Ortega-Argilés 2013a, 2013b), and the early evidence suggests that the results are very promising in many regions, including regions with low to medium levels of prosperity. In contrast, the weakest regions with poor governance and institutional arrangements may struggle to realise any benefits from the policy. Smart Specialisation has also been subject to critique in recent years and, similar to the US SSBCI programme, has not yet been seen to deliver its expected results (Gianelle et al. 2020); however, it can be conducive to promoting sustainability and industrial resilience (Crescenzi et al. 2020; Montresor and Quatraro 2020; Szerb et al. 2020). Gianelle et al. (2020) examined evidence based on 39 regional and national Smart Specialisation strategies in Italy and Poland, and 285 calls for proposals published in the period 2014-16 in Poland, Italy, Portugal, Czechia, Hungary, Lithuania and Slovenia, and the analysis sheds light on whether and how the Smart Specialisation approach has been translated into strategic decisions and policy interventions. The research finds that the regions examined tend to identify large sets of narrowly defined priorities, contradicting the Smart Specialisation principle of prioritisation. Moreover, while most interventions contain specific priority-alignment mechanisms, they are not generally customised to the needs and specificities of each priority area, as a result of lobbying activities, higher political returns from public support measures, policymakers' risk-averse attitudes and a lack of capacity. However, this may also suggest that Cohesion Policy legislation has embedded an ill-defined incentive structure, which did not support the intervention logic of Smart Specialisation. The US and EU cases discussed here illustrate the contrasting evolution of, and approaches to, entrepreneurship policy in different geographical and administrative contexts. To begin with, both the geographies and the administrative and governance arrangements are significantly different.
The US is composed of 50 federal states that constitutionally have authority over broad areas of socio-economic service delivery; in contrast, the EU is composed of 28 member states that differ markedly in terms of their governance and institutional systems, ranging from large federal or quasi-federal states (Germany and Spain) to large centralised states (UK and Poland), through to small centralised states (Estonia and Ireland) along with small decentralised states (Austria). Furthermore, the US has a much longer tradition of designing and implementing national entrepreneurship and SME initiatives. In contrast, the EU has only started to implement entrepreneurial strategies coherently across its member states in the last few decades. Moreover, US initiatives tend to be based on top-down approaches in terms of their design, helped by state-based implementation; in the case of EU initiatives, an explicit fragmentation due to the multi-level governance system (EU, national, regional and local) seems to facilitate a bottom-up approach to entrepreneurship initiatives, from design to execution. Finally, as pointed out by other authors (Stough et al. 2018), while the US does not have a long history of government experimentation, the EU has been characterised by the implementation of new and innovative initiatives at the sub-national, national and supra-national governmental levels. As we survey the entrepreneurship literature in the context of regions and local economic development policy, we see that in recent decades there has been something of a shift away from a focus on the entrepreneurial dynamics of primarily successful regions, and towards the challenges associated with economically weaker regions. From the late 1980s onwards, modern thinking about entrepreneurship and regions took great inspiration from the experiences of dynamic and prosperous regions such as Silicon Valley, Route 128 in Boston, Cambridge in England, Sophia-Antipolis in France, Emilia-Romagna in Italy, and North Brabant in the Netherlands. These places were driving both the development and the exploitation of the new generations of information and communications technologies, which were transforming the global economy. Interest in new modes of financing, such as angel investors and venture capitalists, along with observations of fast-growing and scale-up companies, heavily influenced the research agenda, as did insights about university-industry spillovers and the formation of allied clusters. However, in the years following the 2008 global financial crisis, entrepreneurship research increasingly started to ask questions about other types of places: either those which have suffered adverse shocks or those which had failed to generate steady growth over recent years. This raises the fundamental conceptual and observational question of the extent to which these types of approaches to entrepreneurship and place, which were so heavily influenced by the experiences of dynamic and prosperous places, are also fit for purpose when discussing economically weaker and more vulnerable regions. At this stage, the evidence from the different policy settings reviewed here suggests that, to some extent, the jury is still out.
The US and EU experiences have shown some progress and success in this regard, although this is rather patchy, and it may be the case that such policy frameworks are only realistically effective above certain thresholds of development at the local or regional level. At present, entrepreneurship theory has little to say on these matters, and progress in the field is largely reliant on inferences from observations. In this regard, the weakest-link approach of the REDI breaks new ground, turning on its head many of the approaches which emphasised strengths, since a system-wide framing of the problem treats the correction of weaknesses as essential.

Conclusions

Over more than thirty years of development, both entrepreneurship studies and entrepreneurship policy have gradually and increasingly acknowledged the role of the local and regional context. Nowadays, there is an increasing shift in many different countries towards a greater place-based understanding of entrepreneurship and a greater place-based emphasis on entrepreneurship policy. In turn, the regional development field has increasingly acknowledged the crucial role of entrepreneurship as a growth driver and has increasingly initiated policies to promote local entrepreneurship. Both literatures have become increasingly intertwined and nowadays share many common concepts and analytical frameworks, including systems perspectives. Economic agents and institutions interact amongst themselves and with their environment, and these interactions explain differential local economic performances (Acs et al. 2017). As part of these shifts, a mix of hard and soft policy interventions is becoming increasingly common, and promoting entrepreneurship has become a significant element of regional policy in many places. Nevertheless, limitations in the implementation of regional entrepreneurship policies can still be found in addressing important disparities between places, in adapting to new environments, and in evaluation. For example, the EE perspective is still largely untested outside the large urban Global North (Spigel 2018; Tsvetkova et al. 2020 offer some exceptions) or in a digital context (Kenney and Zysman 2016; Nambisan et al. 2017; Cusumano et al. 2019; Goldfarb and Tucker 2019; Elia et al. 2020), and there are still only limited attempts at sub-national entrepreneurship policy evaluation and optimisation (Szerb et al. 2020; Varga et al. 2020). As seen in this paper, our understanding of these evolving approaches has increased significantly in recent years, but probably more so than our understanding of their effectiveness. Weakest-link approaches offer a new framing of the problems associated with entrepreneurship, both conceptually and in terms of policy implementation. Research on these topics is an ongoing and unfinished process which is likely to continue well into the future. In particular, there is a need to better understand what role 'place' plays in shaping entrepreneurial activities and effective policy approaches, especially in non-superstar cities and regions. The challenges of fostering entrepreneurship in economically weak places are much greater, both conceptually and operationally, than in already prosperous places.
Inflammatory Factors as Potential Markers of Early Neurological Deterioration in Acute Ischemic Stroke Patients Receiving Endovascular Therapy – The AISRNA Study Background and Purpose This study aimed to explore several peripheral blood-based markers related to the inflammatory response in a total of 210 patients with acute ischemic stroke (AIS) caused by large artery occlusion in the anterior circulation who received endovascular therapy (EVT), drawn from an observational study of the clinical significance of circulating non-coding RNA in acute ischemic stroke (AISRNA). Methods We collected baseline characteristics of 210 AIS patients participating in an observational acute stroke cohort: the AISRNA study. The following inflammatory factors were measured in these participants: interleukin-2 [IL-2], IL-4, IL-6, IL-10, tumor necrosis factor-α [TNF-α], and interferon-γ [IFN-γ]. An increase in the National Institutes of Health Stroke Scale score of ≥4 within 24 hours after EVT was defined as early neurological deterioration (END). Results Compared with patients without END, patients with END had a higher incidence of atrial fibrillation (P=0.012) and also had higher levels of IL-6 and IL-10 (P<0.01). Furthermore, we found that the areas under the curve (AUCs) of IL-6 and IL-10 for predicting END were 0.768 (0.697–0.829) and 0.647 (0.570–0.719), respectively. Adjusting for age, sex, and atrial fibrillation, the odds ratios (ORs; 95% confidence interval) for incident END for IL-6 and IL-10 were 1.98 (1.05–6.69) and 1.18 (1.04–1.33), respectively. Additionally, we found significant changes over time in the expression levels of IL-4, IL-6, and IL-10 in patients with END compared with patients without END (P<0.05). Conclusion IL-6 and IL-10 levels at admission may be potential markers of END after EVT, and the time course of IL-4, IL-6, and IL-10 is correlated with stroke progression. Further larger studies are needed to confirm the current findings. Trial Registration ClinicalTrials.gov NCT04175691. Registered November 21, 2019, https://www.clinicaltrials.gov/ct2/show/NCT04175691 Introduction Stroke is commonly considered to be a major cause of disability and mortality in adults. Although the early advent of endovascular therapy (EVT) has improved the clinical outcomes of acute ischemic stroke (AIS), over 50% of patients still suffer from disabilities and deficits, which may be the result of neurological and medical complications.1 The precise prediction of clinical outcomes during the acute phase of ischemic stroke after EVT remains challenging. While our previous studies have found that demographics and several clinical characteristics are associated with prognosis after acute ischemic stroke,2-4 the accuracy of prediction remains limited, especially during the acute phase of AIS.5 Therefore, the development of precise scores to predict prognosis in the acute stage may benefit from the identification of individual biomarkers. A correlation between stroke and the acute inflammatory response can be observed at various stages, including during acute infection. Accumulating evidence has indicated that stroke induces a rapid immunodepression through the autonomic nervous system.6,7 Immune factors, including cytokines, are indicative of stroke-associated infection and are associated with clinical outcome after AIS,8,9 which may be attributed to dynamic changes in the secretion of inflammatory cytokines in the central and peripheral immune responses affecting the progression of AIS.10
These inflammatory cytokines are mostly produced by activated peripheral immune cells and resident microglial cells.11 Mounting evidence has shown that Th2-type cytokines, including interleukin-10 (IL-10) and IL-4, are involved in the repair of brain injury and the inhibition of stroke-associated inflammation.12,13 Unexpectedly, IL-6, well known as a proinflammatory Th1-type cytokine, also exhibits neurotrophic and regenerative capabilities after cerebral infarction.14 Therefore, these inflammatory cytokines may be associated with stroke pathogenesis and outcome. Previous studies have revealed the functions of inflammatory cytokines in the acute phase of ischemic stroke.9,15 However, the link between inflammatory cytokines and stroke progression remains unclear in AIS patients receiving EVT. In the present study, we enrolled a total of 210 patients who received EVT from the Clinical Significance of Circulating Non-coding RNA in Acute Ischemic Stroke (AISRNA) study to investigate the relationship between a range of inflammatory factors (interleukin-2 [IL-2], IL-4, IL-6, IL-10, tumor necrosis factor-α [TNF-α], and interferon-γ [IFN-γ]) and early neurological deterioration (END) after stroke caused by large artery occlusion in the anterior circulation. Additionally, the influence of END after EVT on the time course of inflammation-related biomarkers was further explored. Study Population All subjects provided informed consent. In the present study, we enrolled consecutive AIS patients with large artery occlusion in the anterior circulation who received EVT. Data from a total of 210 AIS patients who received EVT were prospectively collected from the observational AISRNA study (www.clinicaltrials.gov, NCT04175691) to investigate the association of inflammatory factors at admission, and their dynamic changes over 7 days, with END. The clinical characteristics of the patients are summarized in Table 1. All the patients were from Nanjing First Hospital, Nanjing Medical University, which is a stroke center affiliated with the China Stroke Association. Eligible patients were enrolled in the present study if they met the following inclusion criteria: (1) AIS patients who received EVT and (2) patients with anterior circulation occlusion. Patients were excluded from the study if they met any of the following exclusion criteria: (1) patients with intracranial hemorrhage; (2) patients with infection on admission; (3) patients treated with antibiotics or immunosuppressive medical therapy within the past 4 weeks; (4) patients under 18 years of age; (5) patients with a history of a malignant tumor; and (6) patients who died within 7 days. Clinical outcomes (modified Rankin Scale [mRS]) were further investigated after 3 months. Eligible participants were followed up after 3 months via telephone and contact in an outpatient clinic. Our study protocol was approved by the Nanjing Medical University Ethics Committee and complied with the Declaration of Helsinki. Clinical Data The data collected included demographics, medical history, stroke severity at admission and 24 h later, stroke etiology, laboratory parameters, modified Thrombolysis in Cerebral Infarction (mTICI) score, and times from onset to door, groin puncture, and reperfusion.
Several data definitions were used in the present study: the National Institutes of Health Stroke Scale (NIHSS) was used to assess stroke severity; the delta NIHSS (delta NIHSS 0–24 h) was defined as the NIHSS at 24 h minus the NIHSS on admission; early neurological deterioration (END) was defined as ΔNIHSS ≥4 points;16 and stroke etiology was determined according to the Trial of Org 10172 in Acute Stroke Treatment (TOAST) criteria.17 Results Of the 1236 patients screened from the AISRNA study between November 2019 and February 2021, a total of 210 participants (79 female; mean age, 67.98±12.1 years) met the inclusion criteria (Figure 1). A total of 1026 patients were excluded according to the following criteria: lack of EVT (n=960), posterior circulation occlusion (n=15), death within 7 days (n=8), preexisting dysphagia (n=7), intracranial hemorrhage (n=8), infection on admission (n=10), antibiotic or immunosuppressive therapy within the past 4 weeks (n=6), and no blood samples (n=12). The baseline characteristics of the eligible patients are summarized in Table 1. Of the 210 patients included, 88 (41.9%) received intravenous thrombolysis. A total of 23 (11.0%) patients suffered END after EVT. The association between inflammatory factors at admission and the incidence of END is shown in Table 1. The NIHSS on admission was not significantly different between the groups (P=0.074). However, compared with non-END patients, the NIHSS after 24 h and the 90-day mRS were significantly higher in patients with END (P<0.001). We found that END was associated with increased IL-6 and IL-10 levels, and that patients with END had a higher proportion of atrial fibrillation (P=0.012). As the delta NIHSS was used to define END, we performed a correlation analysis to investigate the correlation between inflammatory factors and delta NIHSS 0–24 h. The results showed that the delta NIHSS was correlated with the expression of IL-6 (r=0.260, P=0.016) and IL-10 (r=−0.238, P=0.028) (Figure S1B and S1C), but a correlation was not observed with the expression of IL-2, IL-4, IFN-γ, and TNF-α (P>0.05, Figure S1A and S1D–F). Furthermore, we also performed a correlation analysis to explore the association between inflammatory factors and the 90-day mRS. The findings suggested that the serum concentration of IL-6 was significantly associated with the 90-day mRS after EVT (r=0.171, P=0.025, Figure S2B), but not IL-2, IL-4, IL-10, IFN-γ, or TNF-α (P>0.05, Figure S2A and S2C–F). Additionally, we performed an ROC curve analysis to explore the predictive powers of these factors. We observed that the AUCs of IL-6 and IL-10 for predicting END were 0.768 (0.697–0.829) and 0.647 (0.570–0.719), respectively (Figure 2), revealing that IL-6 outperformed IL-10 in predicting END (P<0.05). Given the association of inflammatory factors with END, we conducted a binary logistic regression to analyze the impact of inflammatory factors on END. The results of the univariate analyses showed that the IL-6 and IL-10 levels were prognostic factors for END, and the findings remained stable after adjustment (Table 2). To further study the clinical utility of inflammatory factors after EVT, we analyzed the time courses of the inflammatory factors within 7 days for patients with and without END (Figure 3). A total of 97 blood samples were collected at days 1, 2, 3, and 7 among these patients.
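Before turning to the time-course results, the following is a minimal sketch in Python of the kind of cross-sectional analyses described above: the ΔNIHSS-based END flag, ROC/AUC for discrimination, and a logistic regression adjusted for age, sex, and atrial fibrillation. The file and column names (nihss_admit, nihss_24h, il6, il10, age, sex, af) are hypothetical placeholders, not the study's actual variables.

```python
# Sketch of the END analyses: END flag, ROC/AUC per cytokine, adjusted ORs.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aisrna_baseline.csv")  # one row per patient (hypothetical file)

# END: increase of >=4 NIHSS points within 24 h of EVT
df["delta_nihss"] = df["nihss_24h"] - df["nihss_admit"]
df["end"] = (df["delta_nihss"] >= 4).astype(int)

# Discriminative ability of admission cytokine levels for END
for marker in ["il6", "il10"]:
    print(marker, "AUC =", round(roc_auc_score(df["end"], df[marker]), 3))

# Logistic regression adjusted for age, sex, and atrial fibrillation
X = sm.add_constant(df[["il6", "il10", "age", "sex", "af"]])
fit = sm.Logit(df["end"], X).fit(disp=0)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1])}).round(2))
```

Exponentiating the fitted coefficients and the bounds of their confidence intervals yields odds ratios with 95% CIs of the form reported in the abstract.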
We found significant changes over time in the expression levels of IL-4, IL-6, and IL-10 in patients with END compared with patients without END (Figure 3A-C, P<0.05), and the IL-6 levels increased markedly from day 1 to day 7 after EVT (Figure 3B, P<0.05). Furthermore, we observed that IL-4 and IL-10 peaked at day 2 and then rapidly decreased by day 3 (Figure 3A and C). However, no significant change over time was found in the expression levels of IL-2 (Figure 3D), TNF-α (Figure 3E), or IFN-γ (Figure 3F) between patients with END and patients without END (P>0.05).

Discussion

This study of EVT participants from the AISRNA study showed a strong association of IL-6 and IL-10 levels at admission with the risk of END. Our study suggests that patients with increased IL-6 and IL-10 levels had a higher risk of developing post-EVT END. By contrast, there was no correlation of baseline levels of IL-2, IL-4, IFN-γ, and TNF-α with END. The predictive power of IL-6 for END was superior to that of the other inflammatory factors.

Inflammation is a hallmark of stroke etiology and progression. Poststroke inflammation is considered a requisite pathological process involved in ischemic brain injury. 18 A series of detrimental complications occurs, and the blood-brain barrier (BBB), damaged after the initial brain injury, is crossed by activated peripheral immune cells, including monocytes and T cells. 19 Furthermore, the poststroke inflammatory response is associated with stroke severity at admission as determined by the NIHSS. 20 Additionally, patients who suffered END were more susceptible to a poor functional outcome after 90 days. 21 However, the association between the poststroke inflammatory response and END remains uncertain. Therefore, we performed a prospective study within the AISRNA study to further investigate the influence of inflammatory factors on END after stroke.

The biological function of IL-6 in AIS remains controversial. 22 IL-6 is considered multipotent, with functions in both brain damage and nerve regeneration. 23 First, IL-6 enhances brain injury and weakens the proliferation of neural stem cells in the acute phase of ischemic stroke. Second, IL-6 can inhibit collagen deposition to protect glial cells and repair brain injury in the subacute phase of cerebral ischemia. 24 A previous study demonstrated that IL-6 levels are increased in peripheral blood samples during the first 7 days after stroke onset. 25 For example, increased IL-6 is associated with infarct size and stroke severity at admission 12,26 as well as with the risk of incident stroke, 27 but another study reported the opposite: early levels of IL-6, as a neuroprotective factor, are inversely correlated with lesion size and functional outcome. 28 Our findings also suggest that increased IL-6 was associated with the risk of END after EVT. Thus, this controversial phenomenon needs to be investigated further. However, there is less agreement on the time point of the peak of IL-6 levels. Some studies reported IL-6 peak levels at day 3, 29,30 which is similar to our findings. Others describe high levels of IL-6 ranging from a few hours until one day or a week after stroke. 28,31,32

IL-10 is a major anti-inflammatory cytokine that is secreted by monocytes and T cells, suggesting its participation in a plethora of immunomodulatory functions. High expression of IL-10 promotes glial and neuronal cell survival and weakens inflammatory responses in the brain.
33 In a permanent middle cerebral artery occlusion model, IL-10 suppresses proinflammatory molecules and reduces infarct volume. 34 Several investigators have also reported that IL-10 may serve a neuroprotective role and predict clinical outcome after stroke. 12,35,36 However, other studies have suggested that IL-10 is a marker of incident stroke. 37,38 Our previous studies in experimental rats and human stroke patients observed that IL-10 is positively associated with stroke risk. 39,40 In the present study, we also report a positive correlation between IL-10 and END after EVT, suggesting that increased IL-10 levels at admission were associated with END after EVT. We observed IL-10 peak levels at day 2, and patients with END had decreased IL-10 levels compared with those without END at 7 days. This phenomenon may be due to a stress response of IL-10 to END, which leads to high levels of IL-10 that subsequently decrease after day 2. Therefore, IL-10 may be involved in the process of END, and its molecular mechanisms remain to be explored in further studies.

IL-4, an anti-inflammatory cytokine, can drive the differentiation of Th2 cells, which play beneficial roles in inhibiting poststroke inflammation, repairing damaged brain tissue, and inducing neurotrophic factors in astrocytes. 41,42 A previous study demonstrated that IL-4 is significantly correlated with stroke severity and functional outcome in 4404 AIS patients. 12 However, our results showed no association of IL-4 with END after EVT, although IL-4 levels were increased in patients with END at day 2 and then rapidly decreased at day 3. At present, we are also investigating the molecular mechanisms underlying the dynamic changes of IL-4 and IL-10 during stroke progression.

Several limitations should be acknowledged in this study. First, the study included a small number of patients from a single center. We are seeking subcenters to complete the AISRNA study. Second, we did not analyze the association between the initial infarct burden and END after EVT. Third, although the incidence of atrial fibrillation in END patients was higher than in non-END patients, the effect of inflammatory factors on END remained stable after adjustment for atrial fibrillation. Therefore, a randomized controlled study is urgently needed to confirm this finding. Finally, the molecular mechanisms underlying the dynamic changes of these inflammatory factors should be further explored in patients undergoing EVT.

In summary, this study illustrates the correlation between inflammatory factors and END, and their time course after EVT, among AIS patients with large artery occlusion in the anterior circulation. We found that the serum concentrations of IL-6 and IL-10 at admission may be potential markers of END after EVT, and that the time course of these factors is correlated with END. Additionally, larger studies are needed to confirm the current findings.

Data Sharing Statement

All data supporting our results are available from the corresponding authors upon reasonable request.

Funding

This work was supported by the National Natural Science Foundation of China [No. 81901215 (to Qi-Wen Deng)
Involvement in decisions about intravenous treatment for nursing home patients: nursing homes versus hospital wards

Many of the elderly in nursing homes are very ill and have a reduced quality of life. Life expectancy is often hard to predict. Decisions about life-prolonging treatment should be based on a professional assessment of the patient's best interest, assessment of capacity to consent, and on the patient's own wishes. The purpose of this study was to investigate and compare how these types of decisions were made in nursing homes and in hospital wards. Using a questionnaire, we studied the decision-making process for 299 nursing home patients who were treated for dehydration using intravenous fluids, or for bacterial infections using intravenous antibiotics. We compared the 215 (72%) patients treated in nursing homes to the 84 (28%) nursing home patients treated in the hospital. The patients' capacity to consent was considered prior to treatment in 197 (92%) of the patients treated in nursing homes and 56 (67%) of the patients treated in hospitals (p < 0.001). The answers indicate that capacity to consent can be difficult to assess. Patients who were considered capable of consenting were more often involved in the decision-making in nursing homes than in hospital (90% vs. 52%). Next of kin and other health personnel were also more rarely involved when the nursing home patient was treated in hospital. Whether advance care planning had been carried out was more often unknown in the hospital (69% vs. 17% in nursing homes). Hospital doctors expressed more doubt about the decision to admit the patient to the hospital than about the treatment itself. This study indicates a potential for improvement in decision-making processes in general, and in particular when nursing home patients are treated in a hospital ward. The findings corroborate that nursing home patients should be treated locally if adequate health care and treatment are available. The communication between the different levels of health care, when hospitalization is necessary, must be improved. ClinicalTrials.gov NCT01023763 (12/1/09) [The registration was delayed one month after study onset due to practical reasons].

Background

Decisions about life-prolonging treatment should be based on a clinical evaluation of the patient's best interest and on the patient's own wishes [1]. Ethics and law should secure sound decision-making processes and ensure that the patient's voice is heard, also regarding the right to abstain from treatment. Approximately 40,000 elderly people live in nursing homes in the last stage of their lives in Norway. About 80% suffer from cognitive impairments of different kinds and degrees. Nursing home patients live with an average of 5-7 diagnoses [2]. Nearly half of all deaths in Norway occur in residential care facilities [3]. A permanent resident in a Norwegian nursing home lives there for an average of 1.5-2 years before death, but there are large variations [4]. When a nursing home patient suddenly gets worse, it is necessary to determine whether or not to give the patient life-prolonging treatment or palliative care, and whether or not to admit the patient to hospital. Life-prolonging treatment is defined as all treatment and interventions that can delay a patient's death [1]. Examples in nursing homes are intravenous fluids and antibiotic treatment.
When a patient can no longer express his or her own opinions, for instance due to cognitive impairment or confusion resulting from acute illness, the need to secure sound decision-making processes is even greater. When the patient lacks the capacity to consent, Norwegian health law demands greater involvement of next of kin and that other qualified health personnel are consulted [5]. The next of kin should be asked about their knowledge of what the patient would have wished if competent, for example previously expressed wishes on future treatment. It is important to clarify the patient's preferences and values related to the intensity of life-prolonging treatment before it is too late, for instance through advance care planning (ACP) [1]. Wishes expressed in ACP or advance directives are not legally binding in Norway, but when the patient lacks capacity to consent, the professionals should only provide treatment that is in the patient's best interest and when it is likely that the patient would have consented to it [5]. That is, relevant previously expressed wishes about limiting life-prolonging treatment should, as a main rule, be respected [1].

According to Norwegian law, it is the responsible professional who assesses the patient's capacity to consent and who makes the final decision in cases where the patient's capacity to consent is lacking. The assessment of the patient's capacity is by and large entrusted to the responsible professional, but it should be assessed in relation to the concrete decision that has to be made and in accordance with justifiable professional standards. In questions about medical or life-prolonging treatment, the responsible professional will be a physician. The Norwegian health law on informed consent, assessment of capacity, and involvement of next of kin went into effect in 2001. Similar laws have been passed in many other countries in recent decades and reflect ethical and political discussions on the balance between paternalism and autonomy. The increased emphasis on patient autonomy is also reflected in professionals' codes of ethics, for example in the Norwegian associations for physicians and nurses. Many countries have also passed laws on advance directives and durable power of attorney. This is not yet the case in Norway. Thus, the role of the next of kin in Norway is not to consent on behalf of the patient in situations where capacity to consent is lacking, but rather to inform the decision that the responsible professional should make. Appropriate involvement of the next of kin and the patient still presupposes that the patient's capacity to consent is assessed.

Patients, next of kin, and health care staff may have different opinions on treatment intensity, which in turn may complicate decision-making. The question of treatment intensity is especially pressing for very ill patients who have reduced quality of life and short life expectancy, and when there is an increased risk of side effects from the treatment. The risk of confusion and side effects will often increase when frail nursing home patients are sent to the hospital [6]. In order to ensure the right level of treatment and treatment intensity, medical and ethical competence is necessary for nursing home staff. In addition, sufficient physician presence in the nursing home and good cooperation between the levels of health care are needed.
Many nursing home residents need to be admitted to hospital in order to get intravenous fluids or antibiotics, because the nursing home lacks the skills or capacity to give this sort of treatment. This study is part of the 3IV study: a large clinical intervention trial in which all nursing homes in one administrative district were trained to administer intravenous treatment [7]. Because decisions about intravenous fluids or antibiotics for nursing home patients bring up many ethical dilemmas, research on the decision-making processes became a subproject. Earlier studies [8-14] indicate suboptimal decision-making processes, but relatively little research has been done on how decisions about life-prolonging treatment for nursing home patients are made. This study is, as far as we know, unique in containing a large number of concrete patient trajectories. The purpose of this study was to elucidate the decision-making process when life-prolonging treatment was given, here in the form of intravenous fluids or antibiotics, to nursing home patients, and in particular to compare how these decisions are made in nursing homes and in hospitals. We especially wanted to know more about who is involved in the decision-making, whether capacity to consent is considered, the influence of capacity evaluations on patient involvement, how often ACP is carried out, and whether there were doubts about the treatment.

Study setting

All 34 nursing homes in a county were invited to participate in the study. Four declined to participate: two because the nursing home managers perceived little need for intravenous treatment among their residents, and two because they used the hospital in the neighboring county. The 30 participating nursing homes had 12-124 beds, from one to eight wards, and either only one kind of ward or a combination of different kinds: for rehabilitation, short- and long-term care, palliative care, and separate wards for dementia. The nursing homes had 1 to 6 doctors employed (mean 1.9, median 1.5). Seven of the nursing homes (including the two largest) had nursing home doctors employed in full-time or half-time positions; 21 used general practitioners (GPs) working 20% in the nursing homes (the majority of these split their time into 40% presence at the nursing home and 60% availability for telephone consultations); two of the large nursing homes had a combination of the two. The mean number of man-years for nurses in the nursing homes was 14.1 (range 3.5-40.2); the mean number of man-years for nursing assistants was 26.2 (range 5 to 105). All hospital admissions included in this study were to inpatient wards in the Department of Medicine at the local hospital, which is the only hospital in this county.

Study design

The study is based on a subset of data from the 3IV study [7], which used a modified stepped-wedge cluster-randomized design with randomization at the nursing home level, each nursing home representing one cluster [15]. The intervention, a structured training program in intravenous treatment of dehydration and infections in nursing homes, was carried out in the 30 nursing homes following the randomization plan, from November 2009 to December 2011, with patient inclusion and data collection in the same period. In nursing homes that had not received the training, the patients were admitted to the hospital for intravenous treatment.
In nursing homes that had received the training, the patients were treated locally, provided the staff had satisfactory skills and capacity; otherwise, they were hospitalized. Some of the nursing homes had the capacity to provide intravenous treatment before the project started.

Study population

We included patients who were treated with intravenous fluids and/or intravenous antibiotics for pneumonia, upper urinary tract infections, or deep skin infections, either at the nursing home or at the hospital. Patients receiving intravenous antibiotics were defined as "intravenous antibiotics patients," whether or not they were receiving intravenous fluids in addition. Patients who were only receiving intravenous fluids were defined as "intravenous fluid patients," though several of them were also given peroral antibiotics. Patients with sepsis, or patients who were admitted for accessory symptoms like anemia or other comorbidity, were not included in the study. The total material of the 3IV study consists of 330 patients; 108 patients were provided intravenous treatment in the hospital, and 222 patients were provided treatment in the nursing homes. In this article, we present the results of the questionnaire about the decision-making process, filled out for 299 (91%) of the patients: 24 (22%) of the patients who were treated in the hospital and 7 (3%) of those treated in the nursing home were excluded from the analysis due to incomplete answers.

Data collection

In every nursing home, and on the wards of the Department of Medicine, there were one or two coordinating nurses, who were responsible for inclusion of patients at the onset of intravenous treatment and for registration of data in standardized questionnaires (Additional files 1, 2, 3, 4 and 5). Clinical data were registered on day one and on specified days throughout the treatment course, such as diagnosis, vital signs (blood pressure, pulse, temperature, respiratory rate), and delirium (assessed using the Confusion Assessment Method (CAM)). For the patients treated in the nursing homes, the Barthel Index of Activities of Daily Living 14 days before disease onset was also recorded. The focus of this paper is a separate section of the questionnaire that consisted of questions to the treating physician about the decision-making process before initiation of intravenous treatment: assessment of capacity to consent; involvement of the patient, next of kin, or other health personnel; and ACP. We also asked whether they had doubts about the treatment and, if so, the reasons for their doubt. The hospital doctors were also asked whether they were in doubt that hospital admission had been the right course of action. The specific questions are listed in Table 2. The forms were filled out by the attending physician, a dedicated study nurse, or a nurse involved in the treatment of the patient. The questions were developed based on earlier studies and questionnaires, and created to be as similar as possible for both the hospital and nursing home settings. The response alternatives were "Yes", "No" and "I don't know".

Analysis

The IBM SPSS statistics program, version 22, was used for statistical analysis. For the purpose of this study, the analysis was conducted as in cross-sectional studies. In the comparisons between nursing homes and hospitals, and between nursing home doctors and GPs with part-time positions, we used the chi-squared test with significance level p < 0.05, and Fisher's exact test (two-sided) when the expected cell count was < 5.
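As an illustration of this decision rule, the sketch below applies the chi-squared test and falls back to a two-sided Fisher's exact test when any expected cell count is below 5. The 2x2 table uses the patient-involvement counts reported in the Results; the use of Python/scipy is an assumption for illustration only, since the analyses themselves were run in SPSS.

```python
# Sketch of the test-selection rule described above; not the study's code.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: nursing home vs. hospital; columns: treatment discussed with
# the patient, yes vs. no (counts taken from the Results section).
table = np.array([[133, 82],
                  [19, 65]])

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Fall back to Fisher's exact test for small expected counts
    _, p = fisher_exact(table, alternative="two-sided")
    print(f"Fisher's exact test (two-sided): p = {p:.4f}")
else:
    print(f"Chi-squared test: chi2 = {chi2:.2f}, p = {p:.4f}")
```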
Answers to open questions in the questionnaire were analyzed using qualitative content analysis [16].

Results

Of the 299 included patients, 215 (72%) received intravenous treatment in the nursing home; of these, 107 (50%) were treated with intravenous antibiotics. Among the 84 patients who received intravenous treatment in the hospital, 66 (79%) received intravenous antibiotics. Age, gender, and course of disease among the patients who received treatment in the nursing homes and the patients treated in hospital are presented in Table 1. Patient characteristics for the patients included in the 3IV study are presented elsewhere [7].

Involving the patient, next of kin and other health professionals

Overall, the respondents in the nursing homes reported more often than the respondents in the hospital that the treatment was discussed with the patient (133 (62%) vs. 19 (23%), p < 0.001), next of kin (113 (53%) vs. 21 (25%), p < 0.001), and other health professionals who knew the patient (191 (89%) vs. 18 (21%), p < 0.001). The responses from the hospital were more often "I don't know" than those from the nursing homes. The patient's capacity to consent was reported as considered in 197 (92%) of the patients treated locally and in 56 (67%) of the patients treated in the hospital (p < 0.001). Among these 253 patients, the treatment was discussed with 126 (83%) of the patients who had the capacity to consent before treatment commenced, and with 14 (14%) of the patients without capacity to consent (p < 0.001). Patients with capacity to consent were more often involved in treatment decisions in the nursing homes: 112 (90%) of patients treated in the nursing homes versus 14 (52%) of patients treated in the hospital (two-sided Fisher's exact test, p < 0.001) (Table 2). There was no significant difference in discussing treatment with next of kin between the patients with and without capacity to consent (67 (44%) vs. 50 (49%), p = NS). Among the patients lacking capacity to consent, treatment was discussed with next of kin more often in the nursing homes than in the hospital (47 (64%) vs. 3 (10%), two-sided Fisher's exact test, p < 0.001) (Table 2). The treatment was more often discussed with other health professionals for patients with, compared to patients without, capacity to consent (123 (82%) vs. 66 (65%), p < 0.05). Discussions among other health personnel were more common in the nursing homes than in the hospital, regarding both patients with and without capacity to consent (Table 2). The reasons given for not discussing the treatment with patients who had the capacity to consent, and the reasons given for not discussing treatment with the next of kin of patients without capacity to consent, are summed up in Table 3.

Advance care planning (ACP)

It was reported that ACP had been carried out for 108 (50%) of the patients treated locally versus 14 (17%) of the admitted patients (p < 0.001) (Table 2). The hospital responded "I don't know" in 58 (69%) of the cases.

Doubt

The respondents reported doubt about whether intravenous treatment was right for 69 (23%) of the patients. For 45 (15%) of the patients, doubt was reported about whether the treatment would have an effect; for 38 (13%), whether the treatment was in the patient's best interest; and for 25 (8%), about the patient's preferences. There were no significant differences in doubt between the nursing homes and the hospital (55 of 215 (26%) vs. 14 of 84 (17%), p = 0.1).
Further, there were no significant differences in doubt between cases where the treatment had been discussed with the patients and cases where the patient was not involved (31 of 152 (20%) vs. 32 of 111 (29%), p = 0.11). There was more doubt reported in cases where treatment was discussed with next of kin than when next of kin were not involved (40 of 134 (30%) vs. 21 of 112 (19%), p < 0.05). There was no correlation between reported doubt and involvement of other health personnel. No doubt about treatment was reported for the 34 patients treated in nursing homes with a Barthel Index score of > 11 of 20, and there was less doubt about the 101 patients who after 30 days returned to their former level of functioning than about the 196 patients with reduced health status or death (16% vs. 27%, p > 0.05). The hospital respondents reported doubt about whether the admission was right for 30 (36%) of the 84 patients admitted. Among the patients treated at the hospital, the reasons for doubt were: the patient could have been treated in the nursing home (21 (25%)); uncertainty about whether the benefits of admission outweighed the disadvantages (11 (13%)); and doubts about whether life-prolonging treatment was right for the patient (9 (11%)).

The influence of the nursing home doctors' position

The influence of the nursing home doctors' position on the decision process was explored among patients treated in the nursing homes (Table 4). For patients from nursing homes with full- or half-time nursing home doctors, it was reported more often that the treatment was discussed with the patients than for patients from nursing homes with GPs in 20% positions. Conversely, for patients from nursing homes with GPs, it was more often reported that the treatment was discussed with the next of kin. We did not find significant differences in reported ACP or in doubt about treatment.

Discussion

This study shows that nursing home patients are, at least sometimes, given intravenous treatment without the patient, next of kin, or other health personnel being involved in the decision-making process, as the law and ethical guidelines mandate they should be. There is also a relatively large proportion of patients who have not been given the opportunity for ACP [17]. Due to demographic changes and an intensified effort in community care of the elderly [18], residents in European nursing homes have over the past decades become increasingly frail, often with multiple active diagnoses [19], and the situation is similar in Norway. The quality, quantity, and nature of care in nursing homes vary across nations [20,21]. Nevertheless, they share a common need for sound and legal decisions regarding health interventions for nursing home patients. Quality of care in nursing homes is considered a complex and multidimensional phenomenon, influenced by resident, staffing, and ward characteristics [22]. International studies indicate that, for the most part, higher total staffing levels and a higher educational level among staff are positively associated with quality of care [23]. Norwegian nursing homes have high levels of nurses and physicians compared to many other countries [20,24], and are thus in a position to emphasize and improve both medical and ethical aspects of life-prolonging treatment.
According to this study, involvement of the nursing home patient, next of kin, and other health professionals, as well as ACP and assessment of capacity to consent, more commonly follows legal and ethical guidelines in nursing homes than in the hospital. Our findings suggest that nursing homes that have a physician present most of the time more often involve the patient in the decision-making processes. This coincides with an earlier qualitative study from Norway [17]. The responses also show that when a nursing home patient is treated in the hospital, the professionals know less about how the different parties have been involved, whether the patient's capacity to consent has been assessed, and whether or not ACP has been carried out. This can mean that even if ACP was carried out in the nursing home, the relevant information does not follow the patient when admitted to the hospital; or that if such conversations were carried out earlier in the hospital, during this or previous admissions, the information is not easily accessible as relevant support in the decision-making processes. From earlier studies, we know that many health professionals find it hard to assess capacity to consent [25].

Table 4. Questions that give insight into the decision-making process for patients treated in nursing homes with nursing home doctors on staff, versus in nursing homes with general practitioners (GPs) in 20% positions.

  Question | Nursing homes with full- or part-time physicians (n = 125) | Nursing homes with GPs (n = 68) | p-value
  The patient's capacity to consent was assessed | 112 (92%) | 60 (91%) | NS
  The treatment was discussed with the patient before commencement | 82 (67%) | 31 (47%) | < 0.05
  The treatment was discussed with the next of kin before commencement | 54 (44%) | 41 (62%) | < 0.05
  Was the treatment discussed with health personnel who knew the patient? | - | - | -

In our study, it seems that capacity was assessed for a large majority of the patients. However, the answers to why patients who had the capacity to consent were not involved in treatment decisions, for instance that the "patient suddenly got worse and was not possible to get through to/communicate with", indicate that some of the capacity assessments are deficient. A possible reason for this is that training in how to assess capacity has been given limited attention in medical training in Norway. The patient's capacity to consent seems to have little impact on whether next of kin and other health professionals are involved, in contrast to what the law and ethical guidelines mandate. Seen in connection with the fact that patients and next of kin are rarely included, and the reasons given for not including them, one possible interpretation is that the medical assessment of what is necessary is what steers the treatment of nursing home patients. The patient's individual preferences have a more peripheral role and are in many cases left out of decision-making regarding intravenous treatment. In this study, a relatively large proportion (28% in total) of the patients died within 30 days, while doubt about intravenous treatment being right was reported more rarely (23% in total). One may ask whether doubt, in the treatment of very frail and sick, and sometimes dying, nursing home patients, is expressed or admitted too rarely. We know that both under- and over-treatment are problems in health care for the elderly [26]. In this study, we have only included situations where the patient received treatment.
However, the topics we have studied, e.g., doubt, are equally relevant in situations where treatment is withheld or withdrawn. In the hospital, there was more often doubt about admissions than about the treatment itself. This may mean that the doctors feel the patient should be treated, but at a lower level of care, and that once a patient is admitted, treatment is the right course of action. Combined with our findings that imply a lesser degree of inclusion of the involved parties in decision-making processes, and challenges regarding the flow of information between the levels of health care [27], this is an important reason to treat as many patients as possible in the nursing home, if the nursing home is able to provide diagnostics and treatment of similar quality to the hospitals. The results from the 3IV study [7] show that the nursing homes are able to do this when more extensive or advanced diagnostics or treatment are not needed. Our findings also reveal a need for better communication between the levels of health services; development of better documentation systems, routines, and competence regarding capacity assessment and the use of such assessments; ACP; and, finally, better inclusion of the patient, next of kin, and other health personnel in decision-making regarding life-prolonging treatment in general.

Strengths and weaknesses

A decision-making process is complex and consists of clinical aspects in addition to the formal and legal guidelines for how to carry it out. A questionnaire is suited to mapping out a situation, but can result in over-reporting because the respondents intuitively know what the "right" answer is. The high number reporting ACP in this study, compared to other studies, may indicate this kind of bias [13,28]. An earlier study carried out by von Hofacker et al. pulled information from patient charts [29]; using that method may result in under-reporting. Not all questionnaires were answered by the treating physician as we requested, but instead by a nurse. However, nurses are often precise, and access to the physician's chart was necessary in order to answer other questions in the study. In those cases where a nurse answered the questionnaire, we called afterwards to talk to them. They usually said that the questions were answered after consulting the doctor. With a questionnaire, there is a risk of differing interpretations of the questions. "To discuss" is a phrase that may be associated with different things, such as disagreement or conflict. Yet, since so many responded in the positive to the question of discussing with the patient prior to treatment, we can assume that most of them understood this to mean a conversation with the patient. Although we aimed, through the inclusion criteria, to ensure comparability between the patients treated in nursing homes and the patients admitted to hospital, the two groups are not identical. We assume that in the study, as well as in clinical practice, there is a trend towards more seriously ill patients being hospitalized [7]. However, among patients given intravenous treatment locally, there will be some who are provided intravenous treatment as palliative care in a terminal phase and who would not have been hospitalized for the same treatment. How this affects the decision processes, and thereby the outcomes of the study, is difficult to assess.
A second difference between the groups is that, among the patients treated in the hospital, the decision about hospitalization has already been made, which may lead the hospital doctors to think that a decision to provide life-prolonging treatment has also already been made. This is often not true; the reasons for hospitalization of these patients are multifactorial and not necessarily based on the patient's will or on what is best for the patient [29]. In this study, hospital doctors expressed more doubt about the decision to admit the patient to the hospital than about the treatment itself. In a qualitative sub-study of the 3IV project, we showed that both nursing home and hospital doctors were concerned about unnecessary hospitalizations and overtreatment in the hospital [30]. Thus, ethically and legally sound decision processes are equally important when providing treatment to elderly patients in the hospital as in the nursing home. An advantage of our study design is that we get comparable data from many concrete patient treatment trajectories and many different types of nursing homes, nearly all in one administrative district. The 3IV study also contains a number of clinical as well as qualitative data [7,30,31], which have been used to assist in the interpretation of the data from the questionnaires about decision-making. The current study included the vast majority of public and private nursing homes in one relatively big county, i.e., the whole spectrum of resident and ward characteristics from both rural and urban areas. The number of physician hours per resident per week in the county was 0.62 in 2016; the mean for Norway was 0.55 (0.38 to 0.75) [24]. Although we cannot claim that the presented results are fully representative of the situation in Norway, the nursing homes in this study are probably without major differences from Norwegian nursing homes in general.

Conclusions

In our study, we find better decision-making processes, and better access to information relevant to such decisions (for example, the patient's preferences and assessments of capacity to consent), in nursing homes than in hospitals. The results point to a potential for better involvement of nursing home patients and their next of kin before decisions are made about life-prolonging treatment, both in hospitals and in nursing homes. Our findings also indicate that the patient's capacity to consent is not always considered, and that ACP is often not carried out. Patient preferences expressed through ACP can be difficult to interpret when a situation arises. Still, it is important that capacity to consent is assessed, that patients are involved when able to consent, and that a decision about life-prolonging treatment is made by a doctor in collaboration with someone who knows the patient well if the patient lacks the capacity to consent. Only then can the professionals decide whether the health intervention is in line with the patient's interest. Determining the right thing to do for a severely ill nursing home patient certainly requires biomedical expertise, but it is also to a large degree a value question, where the patient's wishes and values need to be central [1]. Adequate decision-making processes probably require sufficient training, appropriate routines and documentation systems, and that the professionals have time for involvement of the patient and next of kin, for documentation, and to discuss the ethical dilemmas that arise.
Augmented Lagrangian for treatment of hanging nodes in hexahedral meshes

The surge of activity in the resolution of fine scale features in the field of earth sciences over the past decade necessitates the development of robust yet simple algorithms that can tackle the various drawbacks of in silico models developed hitherto. One such drawback is the restrictive computational cost of the finite element method in resolving fine scale features while at the same time keeping the modeled domain sufficiently large. We propose the use of the augmented Lagrangian method, commonly used in the treatment of hanging nodes in contact mechanics, to tackle this drawback. An interface is introduced in a typical finite element mesh across which an aggressive coarsening of the finite elements is possible. The method is based upon minimizing an augmented potential energy which factors in the constraint that exists at the hanging nodes on that interface. This allows for a significant reduction in the number of finite elements comprising the mesh, with a concomitant reduction in the computational expense.

Introduction

The quantum of work devoted to modeling of fine scale features in the subsurface in the recent decade has spawned a need for simple yet powerful algorithms to simulate the same in silico at low computational cost. The main barrier to these simulations lies in the restrictively fine mesh that needs to be invoked to resolve the finer features of the corresponding physics while at the same time keeping the domain under consideration sufficiently large. The most logical approach to this problem is to allow a fine mesh to exist in the regions which need one and a coarse mesh in regions which do not. The authors previously developed a method to simulate subsurface flow on a fine mesh and subsurface mechanics on a coarse mesh while allowing for the coupling between the physics of flow and mechanics via a staggered solution algorithm [1]. The aforementioned work, though, is restrictive in the sense that the mesh for the mechanics domain needs to be uniformly coarser than the mesh for the flow domain, as shown in Figure 1. This makes the algorithm infeasible for problems involving fine scale features for the mechanics. With that in mind, we propose an addendum to the algorithm of [1] by invoking the concept of hanging nodes in finite elements [2-16] and the augmented Lagrangian method [17-20] for the treatment of hanging nodes. A depiction of a geomechanics mesh with hanging nodes is given in Figure 1.

Figure 1: The method in [1] allows a coarse grid for geomechanics coupled with a fine grid for flow, as shown on the left. The presence of hanging nodes in the geomechanics grid, as shown on the right, allows fine scale geomechanical features to be captured. The hanging nodes are represented by black dots to the right.

The problem is looked upon as the minimization of a functional $C$ subject to a constraint $g = 0$ which dictates the geometry of the interface of the hanging nodes. The penalty formulation is

$$\tilde{C}_{P} = C + \frac{\epsilon}{2}\, g^{2}$$

where $\epsilon$ is a penalty parameter. A large enough $\epsilon$ lends more accuracy, while at the same time leading to a highly ill-conditioned stiffness matrix in the eventual system of equations obtained at the discrete level. As a result, the choice of $\epsilon$ is a compromise between solution accuracy and solution stability. The Lagrangian formulation is

$$\tilde{C}_{L} = C + \lambda\, g$$

where $\lambda$ is the force conjugate to the constraint and is referred to as the Lagrange multiplier. Although this method allows for the exact satisfaction of the constraint, the increase in the number of degrees of freedom of the original system by the number of Lagrange multipliers makes the augmentation computationally expensive. The perturbed Lagrangian formulation is

$$\tilde{C}_{PL} = C + \lambda\, g - \frac{1}{2\epsilon}\, \lambda^{2}$$

This allows the Lagrange multiplier to be posed in terms of the constraint, thus negating the need to solve for the multiplier as an additional degree of freedom. This method, though, suffers from the same problem as the original penalty method, i.e., a careful compromise between accuracy and stability must be made in the choice of the penalty parameter. The augmented Lagrangian formulation is

$$\tilde{C}_{AL} = C + \lambda_{k}\, g + \frac{\epsilon}{2}\, g^{2}$$

where $\lambda_{k}$ is the Lagrange multiplier evaluated at the $k$-th iteration. As is evident from the formulation, the Lagrange multiplier is evaluated iteratively until it reaches an asymptotic value. The Lagrange multiplier is not an additional degree of freedom, and hence the system size does not increase compared to the original minimization problem. The biggest advantage of this method is that the solution stability is not a function of the penalty parameter; furthermore, the Lagrange multiplier iterative process reaches the true asymptotic value regardless of the value of the penalty parameter.
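A minimal numerical sketch of this iteration is given below for a quadratic energy with a single linear hanging-node-type constraint. The matrices are small illustrative placeholders rather than a real mesh, and the update rule shown is the standard Uzawa-type multiplier update implied by the formulation above.

```python
# Augmented Lagrangian (Uzawa) sketch for min C(u) = 0.5 u^T K u - f^T u
# subject to g(u) = B u = 0. K, f, B are illustrative placeholders.
import numpy as np

K = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
f = np.array([1.0, 0.0, 2.0])
B = np.array([[1.0, -0.5, -0.5]])   # one hanging-node constraint row
eps = 10.0                          # penalty parameter
lam = np.zeros(B.shape[0])          # Lagrange multiplier estimate

for k in range(50):
    # Stationarity of C + lam^T (B u) + (eps/2) |B u|^2 with respect to u
    u = np.linalg.solve(K + eps * B.T @ B, f - B.T @ lam)
    g = B @ u                       # constraint residual
    lam = lam + eps * g             # multiplier update toward its asymptote
    if np.linalg.norm(g) < 1e-10:   # converged: constraint satisfied
        break

# The constraint is met to tolerance even for modest eps, which is the
# advantage over the pure penalty method claimed above.
print(k, u, lam)
```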
Formulation

As shown in Figure 2, the presence of hanging nodes essentially means that there is an interface in the mesh across which an aggressive refinement is possible, thus allowing for fine elements on one side of the interface and coarser elements on the other side. The fine and coarse elements are referred to as the 'slave element' and 'master element' respectively, while the faces of the slave and master elements making up the interface are referred to as the 'slave surface' and 'master surface' respectively. Let $u_s$ and $u_m$ represent the displacement fields evaluated at $\Gamma_s$ and $\Gamma_m$ respectively. Then the problem statement is the minimization of the augmented functional $\tilde{C}$, where $C$ is the strain energy in the absence of hanging nodes, $g$ is referred to as the penetration function, $\int_{\Gamma_s} \lambda\, g \, d\Gamma$ is the Lagrange multiplier term with $\lambda$ being the Lagrange multiplier, and $\frac{\epsilon}{2}\int_{\Gamma_s} g^{2} \, d\Gamma$ is the penalty term with $\epsilon$ being the penalty parameter. Let $t_s$ and $t_m$ be the force conjugates to the constraint $g = 0$ at $\Gamma_s$ and $\Gamma_m$ respectively. Then the Lagrange multiplier is the force conjugate to the constraint $g = 0$ introduced in a mean sense. For the sake of clarity, we rewrite $\tilde{C}$ accordingly. Minimization of (2) implies equating the first variation to zero, in which the interface contribution $C$ appears as a surface integral over the slave surfaces. The contribution to $C$ over every $\Gamma_s$ is evaluated as a sum of the integrand $\hat{C}$ evaluated at each of the four Gauss points $g \in G$ shown in Figure 3, multiplied by the determinant $J_{\Gamma_s}$ of the Jacobian of the mapping $\hat{\Gamma} \rightarrow \Gamma_s$, as follows:

$$C := \sum_{g \in G} \hat{C}(\xi_g, \eta_g)\, J_{\Gamma_s}(\xi_g, \eta_g)$$
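The quadrature just described can be sketched as follows; `integrand` and `jacobian_det` are illustrative stand-ins for the integrand $\hat{C}$ and the Jacobian determinant $J_{\Gamma_s}$ of a particular slave face.

```python
# Sketch of the 2x2 Gauss rule described above: the interface integral over
# a slave face is the integrand at four Gauss points (weights equal to one)
# scaled by the Jacobian determinant of the map from the reference face.
import numpy as np

gp = 1.0 / np.sqrt(3.0)
gauss_points = [(-gp, -gp), (gp, -gp), (gp, gp), (-gp, gp)]

def integrate_over_slave_face(integrand, jacobian_det):
    # Approximates the integral of C-hat over Gamma_s
    return sum(integrand(xi, eta) * jacobian_det(xi, eta)
               for xi, eta in gauss_points)

# Example: constant integrand and unit Jacobian recover the reference area
area = integrate_over_slave_face(lambda xi, eta: 1.0, lambda xi, eta: 1.0)
print(area)  # 4.0, the area of the [-1, 1] x [-1, 1] reference square
```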
System of equations

As shown in Figure 3, corresponding to each Gauss point $(\xi_s, \eta_s, -1)$ on $\hat{\Gamma}$, there is an actual physical point $x_s$ on $\Gamma_s$ given by

$$x_s = \sum_{i=1}^{8} N^{i}(\xi_s, \eta_s)\, X_s^{i}$$

where $X_s^{i}$, $i = 1, \dots, 8$ are the coordinates of the nodes of $E_s$ and $N^{i}(\xi, \eta)$, $i = 1, \dots, 8$ represent the shape functions. Let $x_m$ be the orthogonal projection of $x_s$ onto the corresponding master surface, with corresponding location $\chi \equiv (\xi_m, \eta_m, 1)$ on $\hat{E}$, such that

$$x_m = \sum_{i=1}^{8} N^{i}(\xi_m, \eta_m)\, X_m^{i}$$

where $X_m^{i}$, $i = 1, \dots, 8$ are the coordinates of the nodes of $E_m$. We know $x_s$ but need to evaluate $x_m$.

Evaluating x_m given x_s

The orthogonality condition (8) requires the vector joining $x_s$ and $x_m$ to be orthogonal to the tangents at $x_m$, where the components $e_1$, $e_2$ and $e_3$ of the tangent at $x_m$ with respect to the local axes of the master surface are computed from the derivatives of the shape functions (9). Substituting (9), (6) and (7) into (8), we obtain a nonlinear system (10) in $\chi$. The solution to (10) is obtained iteratively, the $(k+1)$-th iterate $\chi^{k+1}$ being computed from the $k$-th by the update (11). The stopping criterion (12) compares successive iterates against a pre-specified tolerance $TOL$. Once this criterion is satisfied, we set $\chi = \chi^{k+1}$ and then obtain $x_m$ using (7).

Evaluating u_s, δu_s, u_m and δu_m; t_s, δt_s, t_m and δt_m

Let $U$ represent the vector of nodal displacement degrees of freedom, and let $U|_E$ represent the restriction of $U$ to any element $E$; the displacements and their variations then follow by interpolation with the shape functions (11)-(13). The force conjugate to the constraint evaluated at $x_s$ is given by

$$t_s = \begin{bmatrix} \sigma_1 & \sigma_4 & \sigma_6 \\ \sigma_4 & \sigma_2 & \sigma_5 \\ \sigma_6 & \sigma_5 & \sigma_3 \end{bmatrix}_{x_s} \begin{Bmatrix} n_1 \\ n_2 \\ n_3 \end{Bmatrix}_{x_s} \equiv \begin{bmatrix} n_1 & 0 & 0 & n_2 & 0 & n_3 \\ 0 & n_2 & 0 & n_1 & n_3 & 0 \\ 0 & 0 & n_3 & 0 & n_2 & n_1 \end{bmatrix}_{x_s} \begin{Bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \sigma_4 \\ \sigma_5 \\ \sigma_6 \end{Bmatrix}_{x_s}$$

Evaluating the surface integral

Let $\mathcal{E}_s$ be the collection of all slave elements. In lieu of Equations (11)-(13), the surface integral (5) is evaluated element by element and can be written in terms of $U_s$ and $U_m$, the collections of displacement degrees of freedom corresponding to the nodes of the slave elements and master elements respectively. The system of equations is eventually written in block form in terms of $U_s$, $U_m$ and $U_r$, where $U_r$ is the collection of displacement degrees of freedom corresponding to the nodes of all elements which are neither slave nor master elements, and $K_{ss}$, $K_{sm}$, $K_{ms}$ and $K_{mm}$ are the interface coupling blocks given in Equation (15).

Procedural framework

The steps to be followed for the treatment of hanging nodes in hexahedral meshes are:

1. Identify the elements sharing the interface.
2. Identify the elements on the fine mesh side as slave elements and the elements on the coarse mesh side as master elements.
3. Identify the faces of the slave elements on the interface as slave surfaces and the faces of the master elements on the interface as master surfaces.
4. Use singular value decompositions [1] to obtain the equations of the slave and master surfaces.
5. In the numerical integration module, map the slave and master surfaces to 2D reference elements.
6. For every Gauss point on the reference element onto which each slave surface has been mapped, identify the corresponding point on the slave surface.

Denote the Jacobian matrix of the element map $F_E$ by $DF_E$ and let $J_E = \det(DF_E)$, defining $r_{ij} \equiv r_i - r_j$. Denote the inverse mapping by $F_E^{-1}$, its Jacobian matrix by $DF_E^{-1}$, and let $J_{F_E^{-1}} = \det(DF_E^{-1})$. Let $\phi(x)$ be any function defined on $E$ and $\hat{\phi}(\hat{x})$ be its corresponding definition on $\hat{E}$.

The equation of a surface through the four vertices of a face is sought in the trilinear form

$$S(x) = c_1 x + c_2 y + c_3 z + c_4 xy + c_5 yz + c_6 xz + c_7 xyz + c_8$$

where $c_{8\times 1}$ is the vector of coefficients to be determined. Since the equation $S(x) = 0$ is satisfied at each of the four vertices defining the face, we get the system of equations $M_{4\times 8}\, c = 0$ with

$$M = \begin{bmatrix} x_1 & y_1 & z_1 & x_1 y_1 & y_1 z_1 & x_1 z_1 & x_1 y_1 z_1 & 1 \\ x_2 & y_2 & z_2 & x_2 y_2 & y_2 z_2 & x_2 z_2 & x_2 y_2 z_2 & 1 \\ x_3 & y_3 & z_3 & x_3 y_3 & y_3 z_3 & x_3 z_3 & x_3 y_3 z_3 & 1 \\ x_4 & y_4 & z_4 & x_4 y_4 & y_4 z_4 & x_4 z_4 & x_4 y_4 z_4 & 1 \end{bmatrix}$$

for $c$. The objective is to determine $c \in \mathrm{Null}(M)$. First, we get the SVD of $M$ as $M = U \Sigma V^{T}$ and express $c$ as a linear combination, with coefficient vector $\kappa$, of the $8 - r$ right singular vectors associated with the zero singular values, where $r$ is the rank of $M$. The objective now is to determine $\kappa$, using the conditions at the vertices $\hat{v}_i$, $i = 1, 2, 3, 4$ on $\hat{e} \in \hat{E}$, which are the corresponding definitions of the vertices $v_i$, $i = 1, 2, 3, 4$ on $e \in E$. The solution $\kappa$ of (22) is substituted into (19) to obtain $c$, which is then substituted into (17) to obtain the polynomial expression of $S(x)$.
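The null-space computation at the heart of this surface fit can be sketched with a few lines of linear algebra; the vertex coordinates below are illustrative, and the tolerance used to estimate the rank is an assumption.

```python
# Sketch of the SVD-based surface fit described above: the trilinear surface
# S(x) = c1*x + c2*y + c3*z + c4*xy + c5*yz + c6*xz + c7*xyz + c8 through the
# four vertices of a face is found from the null space of the 4x8 matrix M.
import numpy as np

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.1],   # slightly warped face, illustrative
                  [0.0, 1.0, 0.0]])

def row(v):
    x, y, z = v
    return [x, y, z, x*y, y*z, x*z, x*y*z, 1.0]

M = np.array([row(v) for v in verts])     # 4x8 system M c = 0
U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-12 * s[0]))         # numerical rank of M (assumed tol)
null_basis = Vt[r:]                       # rows span Null(M)

c = null_basis[0]                         # one admissible coefficient vector
print(np.allclose(M @ c, 0.0))            # S(x) vanishes at all four vertices
```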
Screen-Printed Graphene/Carbon Electrodes on Paper Substrates as Impedance Sensors for Detection of Coronavirus in Nasopharyngeal Fluid Samples

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of the global pandemic, which has resulted in more than one million deaths and tens of millions of reported cases, requires a fast, accurate, and portable testing mechanism operable in the field environment. Electrochemical sensors, based on paper substrates and read out with portable electrochemical devices, can prove an excellent alternative in mitigating the economic and public health effects of the disease. Herein, we present an impedance biosensor for the detection of the SARS-CoV-2 spike protein utilizing the IgG anti-SARS-CoV-2 spike antibody. This label-free platform utilizing screen-printed electrodes works on the principle of the redox reaction impedance of a probe and can detect antigen spikes directly in nasopharyngeal fluid as well as in virus samples collected in the universal transport medium (UTM). High conductivity graphene/carbon ink is used for this purpose so as to have a small background impedance, which leads to a wider dynamic range of detection. Antibody immobilization onto the electrode surface was conducted through either a chemical entity or a biological entity to compare their effects; biological immobilization can enhance the antibody loading and thereby the sensitivity. In both cases, we were able to achieve a very low limit of quantification (i.e., 0.25 fg/mL); however, the linear range was 3 orders of magnitude wider for the biological entity-based immobilization. The specificity of the sensor was also tested against high concentrations of H1N1 flu antigens with no appreciable response. The most optimized sensors were used to identify negative and positive COVID-19 samples with great accuracy and precision.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly contagious virus, transmissible from human to human [1], which has been classified as the cause of a global pandemic by the World Health Organization [2]. Despite the rollout of many vaccine candidates in the last few months [3], much of the world population is still affected by outbreaks, as new variants are identified and low-income communities have no access to vaccines [4]. Even where these vaccines are available, economic activities are still hindered, with schools and colleges most affected by apprehensions of outbreaks. Rapid, low-cost, and accurate testing, particularly in the early days of infection [5], when the small viral load can easily cause false negatives, is of high significance [6-8]. Therefore, batch-fabricated mass diagnostic devices or biosensors are needed among these communities to reduce the number of undetected cases [9]. Such an early and prompt diagnosis can play a crucial role in informed decisions concerning the isolation of infected patients. This, again, has an economic impact if a false positive person is isolated, while the spread of this infectious disease is slowed by isolating the true positives [10-12]. After the genetic code of the virus was discovered, polymerase chain reaction (PCR)-based tests became the gold standard around the world [13,14]. However, these have their own limitations, the most important of which is a relatively long detection time; this has been shortened considerably by many scientific developments in recent months, yet still without a point-of-care possibility.
Moreover, sample transport to specialized labs and the complicated sample pretreatment steps are a major hurdle to quick identification [11]. Such centralized diagnostic services provided by skilled personnel are not recommendable for rapid screening in public places and are only suitable for final verification. This has led to screening by body temperature, which on the one hand is not a true indicator and, more importantly, is absent in the case of asymptomatic infections. Hence, a sensitive and inexpensive immunological tool is an essential part of our fight against the virus, preventing and controlling outbreaks, and re-opening the essential components of life [10]. Biosensors [15], with their large array of diagnostic applications and fast, easy, reliable detection, are a handy tool in this regard [16,17]. Several kinds of biosensors have been reported in the past for the diagnosis of different viruses. The mechanisms used for those detections include surface plasmon resonance [18], electrochemical transduction [19,20], including impedance-based analyses [21,22], and colorimetric [23] lateral flow assays (LFAs) [24], which mostly target a specific antigen-antibody interaction. Some of these mechanisms have already been employed in biosensors [25] for the detection of SARS-CoV-2. Specific examples include a graphene-based field-effect transistor (FET) biosensor [26], a plasmonic photothermal biosensor [27], a fluorescence-based microfluidic immunoassay [28], and a small variety of electrochemical and impedance biosensors using commercial screen-printed electrodes. In one of these reports, the carbon electrodes were functionalized with Cu2O nanocubes to enhance the surface area of the electrode, improving the functionality of the impedance-based mechanism [29]. That sensor explored the same COVID-19 spike protein-spike antibody interaction through another protein-mediated immobilization of the antibody. In another effort, electrochemical immunosensing was explored, targeting the nucleocapsid protein and its complementary antigen of COVID-19 [30], where the authors utilized cotton swabs on the tip of the commercial electrode to improve direct sampling. A label-free, commercially available impedance sensing platform, using specialized well plates with integrated sensing electrodes from ACEA Biosciences, was also developed for the detection of COVID-19 antibodies [31]. A magnetic bead-based electrochemical assay, targeting the spike protein and nucleocapsid protein using a labeled mechanism, was also demonstrated; this mechanism detected the enzymatic by-product using screen-printed electrodes modified with carbon black nanomaterial [22]. Another example, a cobalt-functionalized TiO2 nanotube sensor, was presented for the rapid detection of the virus through antibody-antigen interactions [32]. A paper-based approach has also been demonstrated [12], in which the authors used well-established EDC/NHS chemistry to graft antigens onto graphene oxide electrodes and showed the detection of the complementary antibodies, both IgG and IgM. These solitary examples based on commercial screen-printed electrodes indicate that there is a wide gap in fabrication protocols, the cost of testing, and the implementation of the resulting sensors.
If some of these antigen-antibody interactions can be efficiently transferred onto paper or other low-cost substrates, with the measurements done through portable devices, it would truly transform testing and isolation in the real world in real time. With these objectives, we started to explore paper substrates for detection purposes. Among the many sensing mechanisms, electrochemical and impedance transduction is highly suitable due to its good sensitivity, low cost, low response time, and small sample requirement [22,29,31]. More importantly, its amenability to miniaturization, and subsequent portability, through already available devices gives it immense potential for screening and point-of-care testing. For the electrode materials, carbon and graphene nanostructures are considered ideally suited because of their large surface area, stability, and ease of functionalization [33]. In this study, graphene/carbon electrodes were screen-printed onto cellulose-based paper pad substrates in a batch fabrication mode using different masks. A commercially available ink with high conductivity was used for this purpose. Such high conductivity of the electrodes (the immobilizing matrix) in the presence of probe molecules (the electron transport system) provides a very low background signal, which results in high sensitivity of the measurement. The ease of functionalization of these graphene materials also provides flexibility in immobilizing antibodies onto the electrode surface. Moreover, this whole arrangement provided a stable deposit of the electrode material, which is required to withstand the buffering conditions during the measurements. The choice of antibody immobilization method is another point of consideration in the fabrication, as well as in the performance of the sensor in its final form [34]. Some immobilizations can be random, and some can graft the antibodies in an oriented assembly. A defined orientation can enhance the antibody loading and interaction capacity, thereby increasing the functionality of the sensor [35,36]. Furthermore, the antibodies can be immobilized through chemical or biological mediators. We used two of these strategies to immobilize IgG antibodies against the SARS-CoV-2 spike protein and evaluated the performance of the paper-based sensors. For measurement purposes, we used portable devices from which the data were directly collected onto a mobile device via Bluetooth. Finally, the optimized sensors were used to screen the nasopharyngeal fluid of healthy and infected patients in order to compare with the results of PCR tests.

Fabrication of Sensing Electrodes

Single-strip sensing electrodes were fabricated on cellulose fiber-based paper pads from Millipore, Germany (CFSP001700). For this purpose, the sensing strips were cut out of rolls into the appropriate size and dried in an oven at 140 °C. The design of the sensing area, as well as of the electrode structure, was made using Adobe Illustrator software. First, a pattern of the testing area was drawn by impregnation of paraffin film (Sigma-Aldrich (Germany), cat. no. 327204) using a hot metallic pattern and pressure transfer, resulting in a testing channel. A screen-printing and batch fabrication procedure was adopted to obtain the sensing devices, as schematically outlined in Figure 1, involving different masks and inks. High conductivity graphene/carbon hybrid ink (Dycotec Materials, UK, DM-GRA-9101S) was used to print the working electrode.
Ag/AgCl ink (Creative Materials (Ayer, MA, USA) 119-10) was used to print the reference electrode. Carbon ink (Dycotec Materials, UK, DM-CAP-4311S) was used to print the counter electrode. Between each printing of the electrodes, the strips were temperature-cured at 100-140 °C for 30 min each. The connecting pads and the leading wires were also screen-printed using conductive silver ink (Dycotec Materials, UK, DM-SIP-3060S) and temperature-cured. The non-exposed area of the strip was then covered with a UV-curable insulator ink (Dycotec Materials, UK, DM-IN-7010S). After UV-curing, these paper-based sensing strips were ready for the immobilization of biosensing elements. However, before the modifications, the reproducibility of the fabrication protocol was tested. In one batch, ten electrodes were fabricated, and the repeatability and functioning of the testing area were examined by cyclic voltammetry in the presence of 5 mM K3[Fe(CN)6]. The relative standard deviation (RSD) for the anodic peak current, calculated from these measurements, was less than 10% in each case.

Figure 1. Schematic for the fabrication protocol of the paper-based impedance sensor and the measurement device with the sensors installed, where: (a) is the paper strip cut into appropriate size and paraffin-impregnated within the defined test area; (b) is mask 1 for the connecting wires and connectivity pads; (c) is the screen-printed connectivity pads and lead wires using the silver conductive paste; (d) is mask 2 for the working electrode; (e) is the screen-printed working electrode using the high conductivity graphene/carbon hybrid ink; (f) is mask 3 for the counter electrode; (g) is the screen-printed counter electrode with carbon ink; (h) is mask 4 for the reference electrode; and (i) is the screen-printed reference electrode using the Ag/AgCl ink.

Immobilization of the Sensing Elements

For the direct immobilization of COVID-19 antibodies onto the electrode surface, 1-pyrenebutanoic acid succinimidyl ester (PBASE), an interface coupling agent, was used as a probe linker. For this purpose, the fabricated paper electrode was impregnated with 10 µL of PBASE (10 mg/mL) in methanol for 1 h at room temperature. After washing with PBS, the functionalized sensor was then incubated with 10 µL of IgG solution (30 mg/mL) in PBS (pH 7.4) for 1 h. The free sites on the electrode surface, which could otherwise cause non-specific interactions, were blocked with a solution of 0.1% BSA in PBS buffer. In another approach, a more oriented immobilization of the spike S1 IgG antibody was achieved using 10 µL of ProtA solution (10 mg/mL) in PBS (pH 7.4), impregnated onto the surface of the graphene electrode for 1 h. The IgG antibody was then immobilized by incubating the electrode with 10 µL of antibody solution (30 mg/mL) in PBS for 0.5 h. Again, the active sites were blocked against unspecific interactions by dropping 10 µL of BSA (0.1%) solution onto the modified electrode and washing with PBS. For both the direct immobilization of antibodies and the ProtA-mediated one, the test strips were then stored in dry conditions at 4 °C in a refrigerator until immediately before use. After modification, the surface was checked by SEM analysis; this analysis was only intended to examine the orientation status of the immobilization. For that purpose, a field-emission SEM instrument (Lyra 3, Tescan, Czech Republic) was used.
Electrochemical Measurements

All the electrochemical data was collected using a battery-powered portable potentiostat/impedance analyzer (PalmSens4, PalmSens BV, The Netherlands) and transferred to a laptop and, in some cases, to a mobile device using Bluetooth connectivity. To demonstrate the portability of the sensing devices, the optimized sensing electrodes were also validated using SensIT BT devices (PalmSens BV, The Netherlands). Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) were used to study the different steps of immobilization, whereas the quantitative measurements were done using EIS only. For each measurement, the required volume of the test solution was impregnated onto the electrode surface at room temperature and the data was collected after a fixed interval of time. For each exposure, the given sample quantity was dropped onto the sensor surface and the sensor response was obtained after 5 min. This time duration was kept constant throughout the study for all the experiments.

Treatment of the Samples

Both types of antibody-modified strips were tested against the RBD antigen in artificial sample matrices, i.e., nasal fluid from a healthy volunteer collected in PCR tubes and spiked with antigen solutions. For the analysis of real samples, five nasopharyngeal swab samples were collected from a local hospital, which were identified as positive (three cases) and negative (two cases) by RT-PCR. These samples were collected in universal transport media (UTM), inactivated by heating at 100 °C for 10 min, and stored in a freezer at −80 °C until tested with the fabricated strips. All the real sample testing was done using the portable devices in a bio-safe environment, and the analytical team was blinded to the clinical information of the patients and the samples.

Sensor Fabrication and Characterization

The SARS-CoV-2 spike S1 IgG antibody, as a biosensing element, requires strong immobilization on the electrode surface, with a good homogeneous orientation, in order for the sensor to work efficiently towards COVID-19 detection. Moreover, the electrode material, i.e., graphene in this case, must be stable enough to withstand the buffering conditions during the different electrochemical processes. For impedance-based sensors, as proposed in this study, the conductivity of the sensing electrode needs to be high so as to extend the detection range by providing a sharp, low baseline impedance. To achieve this combination, a high conductivity graphene hybrid ink (sheet resistance = 10 Ω/sq at 25 µm thickness) was used, and two different strategies were applied and tested for their final outcome. In the first case, PBASE, an interface coupling agent typically used as a probe linker that binds to IgG, was immobilized on the hydrophilic cellulose paper pads. The graphene/carbon film was tightly embedded in the porous cellulose network in the working zone of the sensor due to the presence of hydroxyl groups on both the paper and the graphene. Physisorption, as well as hydrogen bonding between the two structures, can lead to an integrated structure in such a scenario, thereby preventing the film from peeling off once the surface is rehydrated by the buffer systems. Thus, the stability and reproducibility of the fabricated sensor can be significantly enhanced. The activated carboxylic terminals of the graphene can then be used to incorporate PBASE and, later, for the attachment of IgG antibodies.
Such interactions can also occur between the graphene and the pyrene groups of the PBASE, supporting this immobilization mechanism. At the same time, a protein-mediated IgG immobilization strategy using the same paper substrate was also applied. ProtA, a well-established immunological tool, was used in this regard; it binds strongly to the Fc region of IgG antibodies, leading to well-defined orientations of the antibody elements. In an earlier report [29], this strategy was tested on Cu2O-modified commercial electrodes with good recognition ability for the spike proteins. In this study, we applied the same strategy to graphene-on-paper electrodes. Immobilized antibodies with such a defined orientation have been shown to increase the antigen-binding capacity of the films, thus improving the function of the detection system. A schematic illustration of the two strategies is provided in Figure 2.

Figure 2. A schematic illustration of the strategies adopted in this work for immunosensing of the coronavirus in nasopharyngeal samples using paper substrates and a graphene electrode. The difference between the two strategies is also shown: the top frame depicts the PBASE-mediated immobilization and the bottom frame the protein-mediated one. The sensor signals were transmitted to a mobile device using Bluetooth.

The electrochemical profile at each fabrication step was studied by two complementary techniques (i.e., EIS and CV) in ferro/ferricyanide solution. EIS is regarded as a method that can obtain electrical information over a broad frequency range in order to monitor the electrode modification steps. Herein, the EIS data was fitted to a Randles equivalent circuit model, and the real and imaginary components of the impedance were plotted as a Nyquist plot. The straight diagonal line in the Nyquist plot at lower frequencies indicates the typical behavior of planar electrodes for diffusion-controlled redox reactions, while the semicircular part at higher frequencies represents the electron transfer efficiency between the redox couple and the electrode surface, known as the charge transfer resistance (Rct). The Rct values were used as the sensor signal for the different measurements. CV, on the other hand, provides indications in the form of oxidation and reduction peak heights that increase or decrease with changes in the charge transfer kinetics. Figure 3 illustrates the state of each step of the sensor fabrication in terms of both CV and EIS. Starting with two different electrodes, the peak currents and Rct were measured simultaneously for PBASE- and ProtA-mediated binding. Both of the chosen electrodes had almost the same values for the oxidation peak current (~50 µA) and Rct (~2 kΩ). The peak potentials for reduction and oxidation are sufficiently close to indicate a reversible reaction and minimal obstruction of the redox conversion. When the graphene sensor surfaces were modified with PBASE, which is a small molecular entity, the redox peak current, as well as the Rct, shifted only slightly for the ferro/ferricyanide couple (Figure 3A,C). On the other hand, when a similar electrode was modified with ProtA, which is a biological entity of larger size and thus more insulating, the redox peaks, as well as the Rct, shifted to a larger extent (Figure 3B,D). These observations suggest that the added reagents behaved as insulators, thereby impeding electron transfer at the interface.
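To make the Rct extraction concrete, the sketch below fits a synthetic impedance spectrum to a simplified Randles circuit (solution resistance Rs in series with the parallel combination of double-layer capacitance Cdl and a faradaic branch of Rct plus a Warburg element W). This is a minimal illustration under assumed parameter values, not the authors' fitting routine; commercial potentiostat software typically performs this step.

```python
import numpy as np
from scipy.optimize import least_squares

def randles_z(params, freq):
    """Simplified Randles circuit: Rs + (Rct + Warburg) || Cdl."""
    rs, rct, cdl, w = params
    omega = 2 * np.pi * freq
    z_warburg = w / np.sqrt(omega) * (1 - 1j)      # semi-infinite diffusion
    z_faradaic = rct + z_warburg
    return rs + 1.0 / (1.0 / z_faradaic + 1j * omega * cdl)

def residuals(params, freq, z_meas):
    diff = randles_z(params, freq) - z_meas
    return np.concatenate([diff.real, diff.imag])  # fit both components

# Synthetic spectrum standing in for potentiostat data (assumed values)
freq = np.logspace(5, -1, 60)                      # 100 kHz down to 0.1 Hz
z_meas = randles_z((150.0, 2000.0, 1e-6, 300.0), freq)

fit = least_squares(residuals, x0=(100.0, 1000.0, 1e-7, 100.0),
                    args=(freq, z_meas), bounds=(0.0, np.inf))
rs, rct, cdl, w = fit.x
print(f"Fitted Rct = {rct / 1000:.2f} kOhm")       # used as the sensor signal
```

Plotting -Z.imag against Z.real for `z_meas` reproduces the familiar Nyquist shape described above: a high-frequency semicircle of diameter Rct followed by a 45-degree diffusion tail.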
However, this insulating behavior was somewhat reversed in the case of antibody immobilization. Overall, a continuous enlargement of the Rct was still observed, implying that the immobilization of the spike protein antibody was accomplished. For antibody immobilization over PBASE, which is considered a relatively random process, the Rct shift was significantly higher compared to ProtA-mediated binding, which is a relatively organized and oriented process. Similar voltammetric and impedimetric behavior was observed when the non-active sites were blocked with BSA in both cases, leaving sensor surfaces with an appreciably low impedance baseline for further quantitative tests.

Quantitative Detection of RBD in Nasopharyngeal Samples

The performance of the fabricated sensor was quantified against a series of concentrations of the RBD antigen (spike protein) using EIS. For greater practical applicability, a nasopharyngeal sample from a healthy donor was obtained and diluted. Aliquots of this sample were spiked with different concentrations of the antigen. Figure 4A shows the EIS responses for increasing concentrations of RBD in the nasopharyngeal samples when the antibody-immobilized sensor, prepared through the PBASE mechanism, was exposed to concentrations ranging from 0.25 fg/mL to 1 × 10^8 fg/mL. A gradual increase in the Rct semicircle in the Nyquist plots was observed with increasing concentration of the spike protein. This indicates the inhibiting effect of the RBD on electron transfer between the electrode surface and the redox couple. Even small changes in concentration lead to altered interfacial processes, demonstrating the specific binding of the protein to the antibodies immobilized on the surface and giving an idea of the sensitivity of the sensor probe. The response of the sensor (Rct) was plotted against the log of the concentrations spiked into the nasopharyngeal samples (Figure 4B). This relationship was found to be linear in the range of 0.25 fg/mL to 1 ng/mL, with a regression equation of Rct = 11.45 log C + 22.85. Beyond this linear range, the saturation level of the sensor was reached, so there was no further increase in the Rct values with increasing RBD concentration. Still, the concentration range and the limit of detection were better than those of typically available ELISA platforms, which can be attributed to the high conductivity of the graphene electrodes fabricated in this study. A similar set of experiments was performed with the sensor strips modified with antibodies through the ProtA mechanism, which are expected to have a much more aligned antibody structure. The data is provided in Figure 4C. The response obtained from the spiked nasopharyngeal samples was similar to that of the PBASE-modified antibody sensor, i.e., the EIS impedance increased with increasing RBD concentration, and that response was linearly proportional to the log of the concentration. However, the linear range was extended by three orders of magnitude (i.e., up to 1 µg/mL of antigen in the samples), as shown in Figure 4D. As the electrode surface was the same graphene, this enhancement of the sensor signal can be attributed to the ordered orientation of the antibody through the ProtA modification. Despite this wider dynamic range of measurement, the limit of quantification is of greater significance here.
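As a worked example, the sketch below inverts the reported calibration line for the PBASE-modified sensor (Rct = 11.45 log C + 22.85) to estimate an antigen concentration from a measured Rct. The units of Rct are assumed to be those plotted in Figure 4B (the text does not state them explicitly), so the numbers are purely illustrative.

```python
import numpy as np

# Reported calibration for the PBASE-modified sensor:
# Rct = 11.45 * log10(C) + 22.85, linear from 0.25 fg/mL to 1 ng/mL (1e6 fg/mL).
SLOPE, INTERCEPT = 11.45, 22.85
LINEAR_RANGE_FG_ML = (0.25, 1e6)

def rbd_concentration(rct):
    """Invert the calibration line to estimate the RBD concentration (fg/mL)."""
    conc = 10.0 ** ((rct - INTERCEPT) / SLOPE)
    low, high = LINEAR_RANGE_FG_ML
    if not (low <= conc <= high):
        raise ValueError("estimate falls outside the validated linear range")
    return conc

print(f"Estimated RBD: {rbd_concentration(45.0):.1f} fg/mL")  # Rct = 45 -> ~86 fg/mL
```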
The limit of quantification matters because the sensors have to be used for the detection of real samples containing the virus itself. The virus load in a patient's biological fluids increases exponentially with the number of days after infection. This can lead to false negatives in the early days after infection if the detection mechanism has a high limit of quantification, as the virus load is very low from day 1 to day 4 of the infection. This is the window in which a patient can be identified as infected and isolated, limiting further infections, as the patient is relatively less contagious in this timeframe. Both the EIS sensors presented here are capable of quantifying concentrations of 0.25 fg/mL, which is sufficiently low to enable early identification and isolation of the infected person. However, this depends on many factors, such as the number of days the patient has been infected or even the inherent immunity of the patient. When the performance of this sensor is compared against recently published sensor reports, the sensitivity was comparable to an FET sensor (1 fg/mL), a monoclonal antibody sensor (0.1 µg/mL), an electrochemical sensor based on Cu2O nanocubes (0.25 fg/mL-1 µg/mL), an electrochemical sensor based on gold nanoparticles (1 pg/mL), a cotton-tipped electrochemical sensor (0.8 pg/mL), a molecularly imprinted electrochemical sensor (50 fM), and a plasmonic photothermal biosensor (0.22 pM). However, the measurement and sampling conditions were different in all these cases; therefore, a true comparison is neither possible nor justifiable. Furthermore, the sensor can only have practical value if the detection mechanism is selective, which is critical for detections in which a false positive means a person is isolated from most of his/her activities for a period of at least two weeks. To confirm this selectivity, the oriented-antibody sensor was exposed to potential interfering agents: a fabricated strip was subjected to EIS measurements against the Flu A antigen at concentrations of up to 10 pg/mL, again in nasopharyngeal samples. Figure 5 shows the EIS curves, with an inset showing a bar graph of the responses. For the H1N1 antigen, the sensor showed a negligible increase in impedance, as there should be no binding of this antigen to the antibody sites. After PBS washing, the same sensor was exposed to 1.0 pg of RBD, and the sensor showed a significant signal, again indicating specific binding. This binding remains intact even if the sensor is exposed to a mixture of the RBD with 10 times the concentration of the possible interferent. These results indicate the high selectivity of the proposed EIS immunosensors.

Real Sample Testing

Next, we investigated the EIS sensor's ability to detect the SARS-CoV-2 virus in real samples. To this end, nasopharyngeal swab specimens, including COVID-19-negative (n = 2) and COVID-19-positive (n = 3) samples already confirmed by gold-standard PCR tests, were collected and stored in UTM. Each sample was divided into three aliquots in order to also study the reproducibility of the data. When the disposable sensor strips were exposed to these sample aliquots, a clearly distinguishable response was observed for both the negative and positive samples (where 1 and 2 represent the negative samples and 3-5 represent the positive samples) in Figure 6.
Further, these responses were quite reproducible when different aliquots of the same sample were used, as shown by the standard deviation values. This shows great promise for a reliable detection platform for the fast indication of the disease directly in real samples.

Conclusions

In summary, we have presented a low-cost, batch fabrication protocol on a cellulose paper substrate, where a highly conductive formulation of graphene/carbon ink can be screen-printed as a working electrode and immobilized with antibodies in different formats. A PBASE-directed format and a ProtA-controlled format were tested and characterized by electrochemical techniques. Both tested formats generated appreciable EIS signals when the viral antigen (RBD) or the virus itself attached, impeding the redox reaction of [Fe(CN)6]3−/4−. The sensors could quantify concentrations as low as 0.25 fg/mL in spiked nasopharyngeal samples. Moreover, the sensor showed high selectivity and reproducibility, and the ProtA antibody immobilization resulted in a wider dynamic range. This sensor was therefore further applied to real samples to indicate positive and negative COVID-19 status, with correct identification as confirmed by standard PCR testing. It is important to note that the sensor data was collected using portable potentiostats, with the signals transferred to smart devices via Bluetooth. Thus, it can be concluded that the fabricated sensors show promise for direct, rapid, and low-cost diagnosis without sample pretreatment. Moreover, the sensor fabrication process can be automated, and such developments are underway in our lab. With the development of more stable and higher-affinity receptors, natural (monoclonal antibodies) or synthetic (artificial antibodies based on polymers), such sensors have the potential to be implemented in the day-to-day screening of COVID-19.
Prognostic role of C-reactive protein to albumin ratio in lung cancer: An updated systematic review and meta-analysis

Abstract

Background: C-reactive protein to albumin ratio (CRP/Alb ratio, CAR) has been suggested as a potential prognostic biomarker in lung cancer. This updated systematic review and meta-analysis aimed to assess the association between CAR and lung cancer prognosis in the current literature.

Methods: A systematic search of databases was conducted to identify relevant studies published up to April 2023. Pooled hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated to assess the association between CAR and overall survival (OS), progression-free survival (PFS), and recurrence-free survival (RFS) in lung cancer patients.

Results: This meta-analysis includes 16 studies with a total of 5337 patients, indicating a significant association between higher CAR and poorer OS, PFS, and RFS in lung cancer patients, with pooled HRs of 1.78 (95% CI = 1.60-1.99), 1.57 (95% CI = 1.36-1.80), and 1.97 (95% CI = 1.40-2.77), respectively.

Conclusions: This updated meta-analysis provides evidence for the potential prognostic role of CAR in lung cancer, suggesting its utility as an effective and noninvasive biomarker for identifying high-risk patients and informing treatment decisions in a cost-effective manner. However, further large-scale studies will be necessary to establish the optimal cut-off value for CAR in lung cancer and confirm the present findings.

Highlights
• Higher C-reactive protein to albumin ratio (CAR) is associated with poorer prognosis in lung cancer patients.
• CAR is a potentially useful prognostic biomarker for lung cancer as it is simple, inexpensive, and widely available.
• CAR may be used to identify high-risk patients who may benefit from more aggressive treatment strategies.
• Further studies are needed to investigate the potential use of CAR as a predictive biomarker for response to therapy and to establish optimal cut-off values for different stages of lung cancer.

Lung cancer is a malignant neoplasm that originates from lung tissue. It is broadly categorized into two types: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), of which NSCLC is the most prevalent. NSCLC can be further divided into lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), and lung large cell carcinoma. LUAD, the most frequently occurring subtype, constitutes 60%-70% of all NSCLCs. LUSC accounts for 20%-30% of cases, while large cell carcinoma represents a minority, at only 5%-10% [1]. Lung cancer causes the greatest number of cancer-related fatalities in the United States, with a mortality of approximately 350 individuals per day. This figure is 2.5 times greater than the number of fatalities attributed to colorectal cancer, the second most prevalent malignancy [2]. Despite notable progress in surgical, radiotherapeutic, chemotherapeutic, and immunotherapeutic interventions, challenges in the care and management of lung cancer persist. Regrettably, the prognosis for individuals diagnosed with lung cancer remains unfavorable [3], and the 5-year overall survival (OS) rate was only 18.2% [4]. Moreover, a retrospective study conducted in China has revealed that the median survival time of individuals diagnosed with lung cancer is roughly 1 year [4]. Patients suffering from advanced NSCLC who undergo chemotherapy have a 5-year survival rate of under 5%, and the risks of chemotherapy toxicity are on the rise in this population [5].
Although immunotherapy is associated with sustained improvements in 5-year OS and progression-free survival (PFS) in patients diagnosed with lung cancer as compared to conventional chemotherapy, the OS (median 47.5 vs. 29.1 months) and PFS (median 16.9 vs. 5.6 months) rates remained below the 50% threshold in that study cohort [6]. Thus, accurate prognostic assessment of lung cancer patients is crucial in guiding the selection of clinical treatment options. C-reactive protein (CRP) is an acute-phase protein synthesized by the liver. It can activate the complement system of the innate immune system [7] and serves as a reliable prognostic marker for monitoring a diverse range of malignant tumors, including pancreatic cancer [8], urinary system tumors [9], and hepatocellular carcinoma [10]. Albumin (Alb), also synthesized in the liver, is a plasma protein that plays a significant role in regulating fluid balance in the body by maintaining plasma osmolality and facilitating blood volume transport [11,13,14]. Likewise, serum albumin is deemed an essential prognostic factor for the survival of NSCLC patients [15]. Using the CRP-to-albumin ratio to evaluate both inflammatory response and nutritional status can provide a more inclusive evaluation of lung cancer prognosis [17,18]. Prior meta-analyses have suggested that pretreatment CAR is a potential prognostic marker for NSCLC, specifically excluding small cell lung cancer, with OS and RFS being the only outcomes studied and PFS being overlooked [19]. Given the lack of a comprehensive analysis of the reliability and extent of CAR's prognostic significance in lung cancer, this meta-analysis was conducted to further explore this association.

Search strategy and criteria

Relevant literature was collected through computer searches of databases including PubMed, Embase, Cochrane Library, and Web of Science, from the establishment of each database until April 25, 2023. English search terms included "lung," "pulmonary," "cancer," "tumor," "neoplasm," "carcinoma," "C-reactive protein/albumin ratio," "C-reactive protein to albumin ratio," "C-reactive protein in albumin ratio," "CRP/Alb ratio," "CAR," and so forth. The PubMed-specific search strategy was: (lung OR pulmonary) AND (cancer OR tumor OR neoplasm OR carcinoma) AND (C-reactive protein/albumin ratio OR C-reactive protein to albumin ratio OR C-reactive protein in albumin ratio OR CRP/Alb ratio OR CAR). The reference lists of included studies were also searched.

Study selection and exclusion

Exclusion criteria: (1) studies on non-primary lung cancers, such as metastatic tumors or recurrent cancers; (2) abstracts, letters, case reports, reviews, or nonclinical studies; (3) studies that did not provide sufficient data or hazard ratio (HR) values with 95% confidence intervals (CIs) for the calculation of OS; (4) Newcastle-Ottawa Scale (NOS) scores <7 [20]; (5) for duplicate or identical studies, only those with higher methodological quality were retained.
Data extraction

Following the search strategy described above, the databases were thoroughly searched and duplicate studies were removed. Articles that met the inclusion criteria were chosen based on their titles and abstracts. Subsequently, the full texts were read to further screen the remaining literature according to the inclusion and exclusion criteria. Articles with missing or duplicate data were excluded, and the remaining articles were included for data extraction. The data extraction process was completed independently by two reviewers, with cross-checking performed afterwards to make final decisions. Discrepancies were discussed and resolved by the two reviewers, and a third reviewer was consulted when necessary. The following information was extracted: first author, year of publication, study period, country, sample size (gender ratio), follow-up time, treatment regimen, age, cut-off value of CAR, pathological type, TNM stage, outcome measures, HR, and 95% CI.

Quality evaluation

All included literature was evaluated for quality using the NOS [20], which covers three aspects: the appropriateness of the selection of study cohorts, the comparability between study cohorts, and the evaluation of outcome events. Each included study was assessed and assigned a score based on these three aspects; studies with a score ≥7 were considered to be of high quality.

Statistical analysis

All statistical processing and analysis were performed using Stata 12.0 (64-bit) software. Meta-analysis was used to calculate the pooled HR and corresponding 95% CI for OS to explore the correlation between CAR and the prognosis of lung cancer patients, and a forest plot was generated. The Z-test was used to determine statistical significance, with p < 0.05 considered significant. Heterogeneity was evaluated using the I² statistic and Q-test [21]. When I² ≥ 50% and p < 0.05, significant heterogeneity was present and a random-effects model was used; otherwise, a fixed-effects model was used (p ≥ 0.05, I² < 50%) [22,23]. When significant heterogeneity was observed, sensitivity analysis and subgroup analysis were performed to explore its source and to assess the stability of the meta-analysis results. Begg's test [23], Egger's test [24], and funnel plots were used to detect publication bias; if the funnel plot was significantly asymmetric or the p-value of Egger's test was <0.05, significant publication bias was considered to be present. Where publication bias was detected, the Trim and Fill method was used to assess the robustness of the findings. The Trim and Fill method is a statistical approach for evaluating publication bias: it assesses and corrects the effect of such bias by trimming asymmetric studies from the analysis and filling imputed studies into the model, so that potential distortions in the results due to bias can be remedied.
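For readers unfamiliar with the mechanics of fixed-effect pooling, the sketch below shows how study-level HRs and 95% CIs combine into a pooled HR via inverse-variance weighting, together with Cochran's Q and I². The four input studies are illustrative values only, not the studies included in this review, and the code mirrors the standard textbook formulas rather than Stata's exact routine.

```python
import numpy as np

def fixed_effect_pool(hr, ci_low, ci_high):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    Standard errors are recovered from the 95% CIs on the log scale:
    se = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    log_hr = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_hr - pooled) ** 2)            # Cochran's Q
    i2 = max(0.0, 100.0 * (q - (len(hr) - 1)) / q)    # I^2 statistic (%)
    ci = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
    return np.exp(pooled), ci, i2

# Illustrative study-level inputs, not the 16 included studies
hr, ci, i2 = fixed_effect_pool(np.array([1.9, 1.6, 2.1, 1.5]),
                               np.array([1.3, 1.1, 1.4, 1.0]),
                               np.array([2.8, 2.3, 3.2, 2.2]))
print(f"Pooled HR = {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 = {i2:.0f}%")
```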
Characteristics of included studies and quality assessment

Following the search strategy described above, a thorough search of the databases was conducted, and 6158 preliminary research articles were obtained after removing duplicate studies. After preliminary screening based on titles and abstracts, a further 6120 articles that did not satisfy the inclusion criteria were excluded. Of the remaining 37 relevant articles, five were excluded as they were either reviews or meta-analyses. Finally, after reading the full texts and excluding articles with incomplete or duplicate data, a total of 16 articles [25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40] were included in this meta-analysis. The specific screening process is shown in Figure 1.

Correlation between CAR levels and the outcome of individuals diagnosed with lung cancer

A meta-analysis of 12 studies evaluating the association between CAR and OS, four studies assessing the correlation between CAR and PFS, and three studies assessing the correlation between CAR and RFS showed no significant statistical heterogeneity in patients with lung cancer. Using a fixed-effects model, the results indicate a notable correlation between high CAR and poor OS (HR = 1.78, 95% CI = 1.60-1.99, p < 0.001; Figure 2), PFS (HR = 1.57, 95% CI = 1.36-1.80, p < 0.001; Figure 3), and RFS (HR = 1.97, 95% CI = 1.40-2.77, p < 0.001; Figure 3). Subgroup analyses based on country, pathology type, TNM stage, and treatment modality revealed that pretreatment CAR is a reliable predictor of prognosis in lung cancer patients. There was no significant statistical heterogeneity among studies conducted in China (I² = 19.8%, p = 0.289), on stage III-IV disease (I² = 0, p = 0.964), on NSCLC (I² = 0, p = 0.994), or on surgery as a treatment modality (I² = 0, p = 0.986). Detailed results are presented in Table 1 and the Supporting Information.

Sensitivity analyses and publication bias

To investigate the main source of heterogeneity in the China regional subgroup (I² = 19.8%, p = 0.289) of the subgroup analysis, and in CAR versus PFS in lung cancer patients (I² = 31.0%, p = 0.226), sensitivity analysis was conducted by removing one study at a time and evaluating the change in the pooled values and heterogeneity. If removing any single study did not significantly affect the combined effect, the meta-analysis was considered to provide reliable results. The results are presented in Figure 4A,B. Funnel plots and Egger's plots were also used to qualitatively and quantitatively detect publication bias in the articles included in the OS, PFS, and RFS analyses (Figure S2A-C). Begg's method and Egger's method were used to investigate publication bias, with the following results: OS (Begg's test, z = 1.03, p = 0.304 > 0.05; Egger's test, t = 2.52, p = 0.03 < 0.05), PFS (Begg's test, z = 1.70, p = 0.089 > 0.05; Egger's test, t = 3.58, p = 0.07 > 0.05), and RFS (Begg's test, z = 0, p = 1 > 0.05; Egger's test, t = 4.25, p = 0.147 > 0.05). As a result, there was no evidence of publication bias among the studies included in the analysis of the secondary outcomes PFS and RFS. However, for the articles included for the primary outcome OS, despite the p > 0.05 obtained by Begg's method, it was necessary to use the Trim and Fill method to assess the stability of the pooled results because of the p < 0.05 obtained by Egger's method.
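Before moving on to the Trim and Fill analysis, note that the Egger statistics above come from a simple weighted regression; a minimal version is sketched below with illustrative inputs, not the study data. The intercept's t-test p-value is what is reported as the Egger p.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(log_hr, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (log HR / SE) on precision (1 / SE);
    an intercept significantly different from zero suggests small-study
    (publication) bias.
    """
    y = log_hr / se
    x = sm.add_constant(1.0 / se)
    fit = sm.OLS(y, x).fit()
    return fit.params[0], fit.tvalues[0], fit.pvalues[0]  # intercept, t, p

# Illustrative inputs only (log HRs and their standard errors)
log_hr = np.log(np.array([1.9, 1.6, 2.1, 1.5, 2.4, 1.3]))
se = np.array([0.19, 0.18, 0.21, 0.20, 0.35, 0.15])
b0, t, p = eggers_test(log_hr, se)
print(f"Egger intercept = {b0:.2f}, t = {t:.2f}, p = {p:.3f}")
```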
The Trim and Fill method was used to evaluate publication bias in the OS meta-analysis. First, the results from the fixed-effect and random-effect models were reported, including a heterogeneity test with Q = 7.431 and p = 0.763; the fixed-effect model was adopted, giving a combined effect of 0.579 with a 95% CI of (0.47-0.69) on the log-HR scale (equivalent to the pooled HR of 1.78, since exp(0.579) ≈ 1.78, exp(0.47) ≈ 1.60, and exp(0.69) ≈ 1.99). Then, the linear method was used to estimate the number of missing studies, which was calculated as six after four iterations. Finally, the data from the six imputed studies were added to the meta-analysis, and the overall results were reanalyzed. The heterogeneity test showed a Q value of 13.841 and p = 0.678, so the fixed-effect model was again employed, resulting in a combined effect of HR = 1.684 with a 95% CI of (1.53-1.86). The final result still indicated p < 0.05, but the addition of the six imputed studies shifted the estimate, suggesting there might be some instability in the OS estimate (Figure 5). Therefore, starting from the subgroup analysis, Begg's and Egger's tests were conducted to explore potential sources of publication bias in studies on OS in both Chinese and non-Chinese populations. The results showed no publication bias in the Chinese studies on OS (Begg's test, z = 0.24, p = 0.806 > 0.05; Egger's test, t = 1.15, p = 0.334 > 0.05). Similarly, there was no publication bias in the non-Chinese studies on OS (Begg's test, z = 0, p = 1 > 0.05; Egger's test, t = −0.12, p = 0.910 > 0.05). Thus, based on the initial OS results (Begg's test, z = 1.03, p = 0.304 > 0.05; Egger's test, t = 2.52, p = 0.03 < 0.05) and the results of publication bias detection within subgroups, it can still be argued that a high CAR is significantly associated with a poorer prognosis in lung cancer, at least based on both the Chinese and non-Chinese studies. For a more definitive conclusion, further research may be required to provide additional support.

Figure 4. Sensitivity analyses performed for the heterogeneity observed in the country subgroup (A) and in PFS (B). PFS, progression-free survival.

DISCUSSION

This study conducted a meta-analysis of 16 articles to investigate the prognostic value of high levels of CAR in lung cancer patients in terms of OS, PFS, and RFS. Heterogeneity among the studies was examined and a fixed-effects model was used for the analysis, which revealed a significant association between high CAR levels and poor prognosis for all outcome measures, with HRs of 1.78 (95% CI = 1.60-1.99, p < 0.001) for OS, 1.57 (95% CI = 1.36-1.80, p < 0.001) for PFS, and 1.97 (95% CI = 1.40-2.77, p < 0.001) for RFS. This result is consistent with previous meta-analyses of CAR in patients with NSCLC [41]. Subgroup analyses were performed for OS by country, pathology type, and treatment method, all of which confirmed the correlation between high CAR levels and poor prognosis in lung cancer patients. The study also investigated the prognostic value of high CAR levels in SCLC patients, finding that it may predict poor prognosis, similar to NSCLC. Sensitivity analyses were performed for the Chinese studies due to heterogeneity, and further investigation was conducted to identify its possible causes.

Chronic inflammation has emerged as a significant field of interest in cancer research due to its potential carcinogenic effects and its association with tumor progression [42,43].
The chronic inflammatory state promotes tumor development by producing inflammatory cytokines that affect the extracellular matrix and contribute significantly to cancer progression [44,45]. CRP, a significant inflammatory factor, can serve as a prognostic marker for various malignant tumors, including lung cancer, with elevated levels associated with decreased patient survival rates. Additionally, the association between low albumin levels and poor prognosis has also been established [7,15]. However, limitations currently exist when using albumin and CRP as individual prognostic factors. For instance, heightened levels of these biomarkers may not be solely attributable to tumors, but rather to other diseases or conditions such as inflammation, infection, liver cirrhosis, myocardial infarction, surgery, trauma, and physiological stress. Additionally, there is inter-individual variability in baseline CRP levels, making it challenging to determine significant increases; to continuously monitor CRP levels during treatment, the measurement times and intervals must be accurately determined when interpreting the results. Albumin's longer half-life of 2-3 weeks limits its ability to reflect short-term disease progression or treatment effects, and when used as a single prognostic marker, its predictive power is weak. The use of CAR as a combined prognostic marker offers several advantages over the individual biomarkers. CRP and albumin, both synthesized in the liver, together reflect the body's inflammatory and nutritional-metabolic status. Advantages of CAR include: (1) greater specificity in reflecting abnormal inflammatory and nutritional-metabolic conditions, avoiding the misdiagnosis and missed-diagnosis issues associated with individual biomarkers; (2) improved performance in predicting and assessing disease progression and treatment efficacy; and (3) an improved prognostic effect, as it more comprehensively reflects the body's overall condition and function. Therefore, CAR as a combined prognostic marker provides an improved evaluation of the body's nutritional-metabolic status and inflammatory response, enhancing the accuracy of prediction, facilitating effective clinical treatment, and showing promising prospects for clinical application.

This study had certain limitations: (1) all the included studies were retrospective, increasing the likelihood of bias; (2) there are relatively few studies on the prognostic value of CAR in SCLC, and only two of the included articles relate to SCLC, so further validation of the prognostic role of CAR in SCLC through meta-analysis of larger research datasets is required; (3) most of the included articles are from East Asia (China, South Korea, Japan), with only two from Germany, necessitating further evidence to establish the value of CAR for the prognosis of lung cancer patients in countries and regions outside these areas.

CONCLUSION

This meta-analysis offers evidence for the promising use of CAR as a prognostic tool in lung cancer, indicating that it could be a valuable and noninvasive biomarker for identifying patients at high risk and guiding treatment in a cost-effective manner. Nevertheless, more extensive studies will be required to determine the best threshold value for CAR in lung cancer and to validate these findings.
Inclusion criteria: (1) Research type: observational studies on the relationship between CAR and lung cancer prognosis, published domestically and internationally. (2) Study population: patients with NSCLC confirmed by pathology. (3) Exposure: patients classified into a high-CAR or low-CAR group based on the CAR values reported in the literature. (4) Outcome measures: the main research indicator was OS, and the secondary indicators were RFS and PFS.

Figure 1. Document screening process and results.
Figure 2. Forest plot of the OS comparison in lung cancer patients with higher versus lower CAR. CAR, C-reactive protein to albumin ratio; OS, overall survival.
Figure 3. Forest plot of the PFS and RFS comparisons in lung cancer patients with higher versus lower CAR. CAR, C-reactive protein to albumin ratio; CI, confidence interval; HR, hazard ratio; PFS, progression-free survival; RFS, recurrence-free survival.
Figure 5. Using the Trim and Fill method to assess the stability of the conclusion.
Table 1. Results of the OS subgroup analysis of the primary outcome.
Understanding issues associated with attending a young adult diabetes clinic: a case study

Diabet. Med. 29, 257–259 (2012)

Aims: To study the reasons for attendance behaviour from the patient viewpoint at a young adult diabetes outpatient clinic.

Methods: Attendance rates for 231 clinic appointments over 19 months for 102 patients were calculated. Semi-structured interviews were conducted with a purposive sample of 17 of the 102. The interviews encouraged participants to describe routines, thoughts and feelings around clinic appointments. Observations were made of the clinic system. Themes arising from patients' emotional and practical issues around attendance were generated from the data.

Results: 'Did not attend' rates for the clinic over the study period were 15.7%. However, bureaucratic problems created many 'missed' appointments; most instances of 'did not attend' investigated were attributable to communication failures. Participants did not divide neatly into 'attenders'/'non-attenders'; many had complex mixed attendance records. Most weighed the value of attendance against immediate obstacles such as incompatible work/clinic hours. Reminders were seen as important, particularly for this age group. Respondents identified fear of being judged for 'poor control' as a major factor in attendance decisions, suggesting that having a high HbA1c level may lead to non-attendance, rather than vice versa.

Conclusions: Health professionals' supportive, non-judgemental attitude is important to patients considering clinic attendance. In this study, improved communication, reminders and flexible hours might reduce 'did not attend' rates.

Introduction

Improving attendance rates at outpatient clinics is often seen as important both in terms of avoiding the waste of medical resources and in terms of better overall health outcomes [1]. Much of the medical literature on non-attendance in diabetes points to significantly higher HbA1c results amongst 'defaulters' as an example of the benefits of clinic attendance [2]. In UK diabetes care, outpatient clinic attendance rates vary widely, from 75% non-attendance [3] to 1.4% [4]. There is evidence that young people miss more scheduled medical appointments of all kinds than other age groups [3,5]. Indeed, for younger patients with diabetes, the transition from the paediatric to the adult clinic can be crucial, with many people dropping out of the system altogether [6]. Within diabetes outpatient care, socio-demographic factors, such as gender and class, do not seem to be associated with missed appointments, although some have found single parents and smokers to be more likely not to attend [7]; patients who feel that their recommended treatment is not effective are also less likely to seek specialist care at clinic [8]. Overall, however, reviews of the existing literature do not offer conclusive reasons for non-attendance and show that clinic-related factors behind non-attendance are rarely assessed, with the patient voice largely absent from the debate [9,10]. This study aimed to help redress that balance by exploring issues around attendance for this vulnerable age group, from the patient point of view. A specific young adult diabetes clinic was taken as an 'exemplifying' case study [11], to assess in depth what attendance means for those registered there. The study was led by a researcher with Type 1 diabetes.
Questions centred on the value of clinic to this group of patients, the physical, emotional and practical barriers to attendance, and the processes involved in the decision to go, or not to go, to clinic.

Patients and methods

The case study young adult clinic accepts all 18- to 25-year-olds with Type 1 diabetes within a single county in south-east England. Three types of data were collected: (1) attendance records were analysed for 231 appointments for 102 individuals from November 2008 to May 2010; (2) semi-structured interviews were carried out with 17 patients registered at the young adult clinic; (3) the appointments and cancellation telephone line was monitored over a 3-week period. Using the data collected as described above, a purposive approach to sampling for the interview study was employed [12], with 17 participants (nine men and eight women) selected on grounds of relevance to the questions driving the research, in this case attendance behaviour. The interviewees included seven who were recorded as regularly attending clinic appointments, five with a record of intermittent attendance and three who had never attended within the survey period. A further two participants were chosen because they were new to the young adult clinic following extended periods of non-attendance. The decision-making process relating to clinic attendance was used as a framework to allow participants to identify the areas they considered important. The interviews were conducted as semi-structured one-to-one discussions of 20-30 min each. Themes arising from patients' emotional and practical issues around clinic attendance were derived from the data. The study gained National Health Service (NHS) ethics approval under REC reference 10/H0718/1.

Results

Patients could not be divided into 'attenders' and 'non-attenders'; many showed a complex record of attendance, non-attendance and cancellations. Overall DNA ('did not attend') rates across the study period were calculated using NHS guidelines [13] at 15.7% (36 recorded DNAs/231 scheduled appointments). However, this figure should be treated with caution. Most patients had more than one scheduled appointment during the survey period, so it was possible to gather further data on 18 appointments from the patients' perspective during the 17 interviews described above. Eleven instances recorded as 'did not attend' were attributable to problems with administration, communication and bureaucracy, combining to create false 'missed' appointments. Patients faced great difficulty accessing the central booking line, and internal hospital communication problems meant that cancellations and changes of address were not always passed on to the clinic. The audit of the cancellation service showed that there could be as many as 17 people waiting in the telephone queueing system at peak times and a wait of over 20 min to speak to an operator; on some occasions, the call simply disconnected with no option to wait or leave a message. In interviews, some patients mentioned that they had been warned by staff or friends not to bother with the central number, as they would not get through. Within the study sample, participants could be grouped into those who made a cost-benefit analysis of the obstacles and benefits of going to clinic, and those who did not think about it at all; some moved from one group to another over time.
In the 'cost-benefit analysis' group, valued benefits included practical information (in an ideal world, delivered by others with diabetes), timely test results, emotional support and reassurance. 'You know, it's all very well saying, oh, "get better control" but it's not always that easy… it would be helpful if there was someone who actually had diabetes that you could talk to and say oh I'm having trouble with this, what can I do with that… you could maybe fit it into the real world, you know, how it would work and not just in theory.' Woman, age 24, diagnosed in childhood.

The value of friendly, positive reception and clinical staff was appreciated by all, and a reliable system of reminders by text or email was seen by this age group as very useful for ensuring appointments were not missed. 'I think everyone's on mobile and email these days aren't they, so I think that would be better than […]'. Many respondents identified that being 'told off for poor control' by health professionals of all kinds could be a major obstruction to future attendance at clinic. 'They look at you really disapprovingly, and it's like, please don't because there is, you know, I'm not just doing it because I can't be arsed… there's obviously a reason for it so just sort of, I don't know, not analyse it but just look to see why and don't judge.' Woman, age 21, diagnosed in adolescence.

Amongst those patients who did not think about whether or not to go to clinic, some always attended out of routine. Parents often played an important role in supporting this routine. Others went through a period of non-attendance, often referring to this afterwards as 'denial'. This concept of a phase where the condition feels unmanageable was a common theme and may be seen as part of the normal process of chronic disease [14]. 'It's a very emotional, I mean when you are diagnosed with something new, you know, your mind, I mean I was really, really depressed. I mean come on, who wouldn't be, you know, it's such a thing, and at that stage I can't even handle most [doctors].' Woman, age 25, diagnosed in adulthood.

Discussion

In this study, patients' attendance behaviour was complex, with many respondents reporting a change in attitude over time. For the majority of those interviewed, their attendance record depended on the value offered at clinic versus the obstacles put up by inflexible hours, bureaucratic procedures and health professionals' attitudes to diabetes. In addition, information-sharing problems inflated the number of appointments recorded as 'did not attend'; the clinic's true non-attendance rate is likely to be considerably lower than the 15.7% initially documented. As interviewees were not selected at random, but deliberately chosen to give a range of attendance behaviours, it is not possible to give an accurate estimate of the real 'did not attend' rate during the survey period. However, the study found that at least 31% (11/36) of all unattended appointments could have been avoided by improving communication between the clinic and the hospital trust. Even assuming the remaining uninvestigated instances of 'did not attend' were accurately recorded, this may bring the clinic's true overall 'did not attend' rate closer to 10 or 11%.
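The arithmetic behind these figures is short, as the sketch below shows; note that the raw ratio is shown here, whereas the reported 15.7% follows the NHS guideline calculation [13], which may round or count appointments slightly differently.

```python
scheduled = 231      # appointments over the 19-month study period
recorded_dna = 36    # appointments recorded as 'did not attend' (DNA)
avoidable = 11       # DNAs traced to administration/communication failures

recorded_rate = 100 * recorded_dna / scheduled
avoidable_share = 100 * avoidable / recorded_dna
adjusted_rate = 100 * (recorded_dna - avoidable) / scheduled

print(f"Recorded DNA rate: {recorded_rate:.1f}%")    # raw ratio (reported: 15.7%)
print(f"Avoidable share:   {avoidable_share:.0f}%")  # 'at least 31%' of DNAs
print(f"Adjusted DNA rate: {adjusted_rate:.1f}%")    # ~10.8%, i.e., '10 or 11%'
```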
Previous studies of non-attendance assume a causal connection between missed appointments and the associated higher HbA1c [2,3]. Results from this study, however, indicate that fear of being 'told off' for failing to reach biomedical targets was an important factor in the decision not to attend. In other words, rather than non-attendance causing high blood glucose readings, perhaps high blood glucose readings, or health professionals' reactions to them, cause non-attendance. Any benefits clinic may offer in terms of screening, particularly valuable to those struggling to control their diabetes, will then of course also be missed. This study suggests two main implications for service delivery. Firstly, it may be worthwhile for clinics with apparently high 'did not attend' rates to conduct audits of their own booking procedures to identify where messages are going astray or where cancellation and rebooking may be particularly difficult. Secondly, the research highlights the importance of diabetes professionals' reactions to young people's HbA1c results. Censorious responses to 'poor' control may in fact be contributing to patients' decisions to stop attending clinic. In this study, an understanding of the difficulties in managing diabetes, plus timely and practical information, were among the most highly valued things health professionals could offer participants. The research is limited in a number of ways. As with all case studies, findings cannot be reliably generalized to other clinics. In particular, the region studied is above average in terms of employment and income, with limited ethnic diversity, and the catchment area includes a highly educated university population. Regions with fewer resources, a more heterogeneous and complex pool of patients and, of course, a different age group might yield very different themes. However, although in-depth research into attendance from the patient viewpoint is rare, comparable studies of people with Type 1 diabetes have flagged up identical issues, particularly the need for flexible hours, positive emotional support and understanding from others with diabetes, and non-judgemental advice from health professionals [15,16].
Challenges and Opportunities of Blockchain for Cyber Threat Intelligence Sharing

The emergence of the Internet of Things (IoT) technology has caused a powerful transition in the cyber threat landscape. As a result, organisations have had to find new ways to better manage the risks associated with their infrastructure. In response, a significant amount of research has focused on developing efficient Cyber Threat Intelligence (CTI) sharing platforms. However, most existing solutions are highly centralised and do not provide a way to exchange information in a distributed way. In this chapter, we subsequently seek to evaluate how blockchain technology can be used to address a number of limitations present in existing CTI sharing platforms. To determine the role of blockchain-based sharing moving forward, we present a number of general CTI sharing challenges, and discuss how blockchain can bring opportunities to address these challenges in a secure and efficient manner. Finally, we discuss a list of relevant works and note some unique future research questions.

Introduction

Each year the threat landscape continues to evolve in both the types of cyberattacks and the methods used to commit them [1]. Organisations have subsequently had to find ways to manage the increased risk associated with the infrastructure they depend on to operate. As a result, several Cyber Security Risk Management (CSRM) frameworks have been developed to define a more concrete framework to better manage this risk [2]. However, with the emergence of the Internet of Things (IoT) technology [3], smart portable sensors and their resource-constrained nature, the threat landscape has recently grown at a rate that makes the traditional CSRM task challenging [4] [5]. To minimise cyber threats, organisations continue to develop methods focused on gathering threat-based information specific to them. Towards this, Cyber Threat Intelligence (CTI) is a concept that describes the collection and analysis of threat information by an organisation. The emergence of CTI in recent years has seen its integration into traditional CSRM frameworks become an effective threat mitigation strategy [6]. The SANS Institute is a US-based organisation that conducts a yearly CTI survey across industry. The primary aim of this survey is to understand the current state of CTI use within industry. In their 2021 survey, a significant milestone was reported: 100% of surveyed organisations indicated that they either currently do or have plans to use CTI in some way [7]. When this figure is contrasted with the 75% reported only four years earlier in 2017, it is clear that CTI will continue to play a critical role in threat mitigation within industry moving forward. Sharing CTI cooperatively between organisations can be highlighted as a mutually beneficial process for all participating organisations [8]. However, in practice CTI sharing is challenging due to the variety of ways threats can affect the components that make up an organisation's infrastructure (e.g., storage, networks and communication), for example the man-in-the-middle attack, eavesdropping attack, phishing and spear-phishing attacks, etc. [9]. In recent years, vendor-created and open-source threat intelligence sharing platforms have become a popular choice within industry. These platforms provide organisations with an environment where they can share and consume CTI in either a fully or semi-automated way.
During their 2021 survey, the SANS Institute noted that these types of sharing platforms saw a 3% increase in use compared to 2020 [7]. Moreover, it was also reported that more traditional sharing mechanisms (e.g., emails and briefs) saw a 7.8% decrease in use compared to 2020. We argue that while this trend towards either fully or semi-automated threat intelligence sharing is positive, a number of key challenges (e.g., the producer consumer imbalance and data validity) are currently prevalent in this space, as highlighted by existing literature [10]. Furthermore, we also seek to provide a unique insight into how privacy, trust and accountability define a seemingly paradoxical relationship. As well as discussing several general CTI sharing challenges, we also seek to demonstrate that using a decentralised platform for CTI sharing between organisations in a trustless manner has tremendous promise. Towards this, blockchain is a promising technology. Blockchain is a tamper-proof, decentralised, and immutable store of digital information [11]. Therefore, blockchain can provide a strong and effective solution for securing CTI in a networked ledger, a series of blocks that are cryptographically linked, and can facilitate secure dissemination between organisations. However, blockchain-based CTI sharing solutions are lacking in the present literature. A few proposals, e.g., [12] [13] [14], integrate blockchain for CTI sharing, but a comprehensive solution which addresses all of the discussed challenges is currently lacking. In this chapter, we evaluate a number of present CTI sharing challenges and discuss how blockchain can bring opportunities to address these challenges. Thus, the major contribution of this chapter is to provide a list of challenges associated with CTI sharing and deliver a list of opportunities present within the blockchain space for future research. The remainder of the chapter is organised as follows. In Section 2, we present a brief overview of blockchain and CTI. In Section 3, we present a simplified blockchain-based CTI sharing model from the current literature to demonstrate how blockchain can facilitate sharing. In Section 4, we discuss a number of challenges associated with CTI sharing. In Section 5, we present a number of opportunities that highlight the applicability of blockchain-based models in the CTI sharing space based on current ideas presented in the literature. In Section 6, we present a brief discussion of the related work within the literature. To conclude, in Section 7 we summarise the work presented in this chapter and discuss future research directions. Overview of Blockchain and CTI Before discussing blockchain-based CTI sharing in detail, we present a brief overview of blockchain and CTI in this section. Blockchain Blockchain is a distributed digital ledger for storing electronic records [11]. In other words, blockchain can be seen as a network of computers that store transactions (and therefore the data) across multiple machines. Each of these computers is considered a node in the blockchain. The data entered in a particular interval in the chain is known as a block. Each block is identified using a unique identifier called a hash, and each block contains the hash of the previous block. A hash is the output of a cryptographic function that takes as input an arbitrary amount of data and generates a fixed-size output. Significantly, this is a one-way function and it is computationally infeasible to reverse the computation [15].
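The hash-linking just described can be illustrated with a short, self-contained sketch. This is a toy model for intuition only (no consensus, mining or networking), not the implementation of any particular blockchain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents (including the previous block's hash)
    # with SHA-256; any change to the data changes this value.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    # Recompute each predecessor's hash and compare it with the value
    # stored in the block that follows it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, ["genesis"])
append_block(chain, ["org-A shares CTI report #1"])
append_block(chain, ["org-B shares CTI report #2"])
assert verify_chain(chain)

chain[1]["transactions"] = ["tampered data"]  # a retroactive edit...
assert not verify_chain(chain)                # ...breaks the next block's link
```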
In blockchain, when a transaction is first requested, it is authenticated using cryptographic keys (public and private keys). Then a block containing that transaction is created and sent to the entire network. Once the transaction is agreed between the nodes in the network, it is approved (i.e., authorised) before the block is added to the chain. This is done by a mechanism called consensus, where the majority of nodes agree with the transaction. Note that nodes must solve a complex mathematical problem to validate a transaction. This is known as mining, and the participating nodes are referred to as miners. Commonly, the mining task is called Proof of Work (PoW). A cryptocurrency reward is given to the miner who first solves the mathematical problem (i.e., the PoW) and validates a block. After this, the block is added to the existing chain, and all the nodes in the network are updated with this information [16]. Therefore, blockchain provides a framework in which nodes can maintain an immutable ledger of data. In Fig. 1, we illustrate how immutability is created in blockchain by linking successive blocks together using cryptographic hashing functions. Currently, PoW is the most widely used mechanism for mining. However, PoW requires substantial computing power and therefore uses considerable amounts of energy, a notable drawback. To address this issue, another mining mechanism, Proof-of-Stake (PoS), is becoming popular. PoS provides faster transactions and uses less energy during mining [17]. Some significant properties of blockchain are outlined as follows [18]: -Immutability: To ensure a transaction has not been changed or altered, the hashes stored in the succeeding blocks can be checked. -Transparency: Every transaction that takes place is stored on the blockchain and is therefore visible to every node in the network. -Use of Smart Contracts: Transactions in the blockchain can be automated with smart contracts. A smart contract is computer code that facilitates and verifies the nodes' agreements and therefore increases computational efficiency. CTI Cyber Threat Intelligence refers to a collection of evidence-based knowledge about cyber threats. This knowledge can comprise a variety of information including Indicators of Compromise (IoCs), attackers' motivations, intentions, characteristics, attack vectors, as well as their Techniques, Tactics, and Procedures (TTPs) [19]. CTI can also consist of actionable advice to detect, prevent, and mitigate the impact of attacks. It can be obtained from a variety of sources, including anti-virus programs, open-source intelligence (OSINT), Intrusion Detection Systems (IDSs), human intelligence, malware analysis, code repositories, and CTI sharing platforms. CTI can be categorised into the following four types: (i) strategic, (ii) operational, (iii) tactical, and (iv) technical. A brief description of each type follows: -Strategic CTI provides a high-level overview of the threat landscape in terms of past, current, and future trends. This type of CTI is often presented in plain language and is focused on improving situational awareness and presenting business risks. The intended audience is senior, lay-person decision makers in an organisation. -Operational CTI refers to information about the nature and motivations of potential upcoming attacks against an organisation, which can be used to formulate targeted prevention strategies and prevent future incidents.
-Tactical CTI relates to TTPs and IoCs that are useful for identifying specific attack vectors and vulnerabilities for the purpose of proactively updating signature-based defences against known threats. -Technical CTI consists of technical information often found on threat intelligence feeds about malware and adversarial campaigns, including information about an attacker's assets, attack vectors employed, Command and Control domains used, and types of vulnerabilities exploited. CTI deals with the collection and analysis of evidence-based knowledge about existing or potential threats that can be used to inform decision making. The aim of CTI is to aggregate a number of unstructured data sources (e.g., network logs and software signatures) and create structured intelligence that details a threat [6]. As noted in Section 1, traditional CTI sharing systems lack the ability to share this intelligence effectively. Several of the major challenges that these systems have yet to overcome are the producer consumer imbalance, data validity, legal and regulatory factors, and sharing intelligent intelligence. Consequently, a number of recently proposed CTI sharing platforms have integrated blockchain into their design to try and provide novel solutions to these challenges. Blockchain-Based CTI Sharing Significant diversity exists in the blockchain-based CTI sharing space. These models utilise specific blockchain characteristics and cryptographic constructs in a variety of ways to facilitate sharing. In Fig. 2, we illustrate a simple sharing framework which exemplifies how blockchain can be applied to CTI sharing [20]. This model is composed of the following components. -Consumers: Users who consume shared CTI. They make decisions about which intelligence to consume based on its relevance to their physical infrastructure or business case. -Producers: Users who produce CTI based on internal information that can be linked to an existing or new threat. This CTI is then shared with an individual, a group, or publicly via the blockchain, based on the sensitivity of the intelligence. -Verifiers: Users who validate shared CTI to ensure it meets sharing standards (e.g., complies with the STIX format), is not a duplicate of intelligence that has already been shared, and does not maliciously contain fake information. The results of this user's analysis either directly impact the addition of CTI to the blockchain or are added alongside the given CTI as a report to inform consumer decisions. -Authority: Users who verify the identity of other users before they participate in sharing. This authentication creates trust between users who produce and consume intelligence, as they can be sure that only authenticated users are able to do so. -Blockchain: Used to provide a distributed ledger of CTI information (e.g., Hyperledger, Ethereum, EOS). -CTI Smart Contract: Self-managed code that is executed by the blockchain to manage the verification of shared CTI. This contract is made up of an InterPlanetary File System (IPFS) reference to the shared CTI and a verification status. Note: users can take on any combination of the above roles and consequently are not restricted to one role. As shown in Fig. 2, the process of communication among the various components of the framework follows these steps. -Step 1: All stakeholders prove their identity to a trusted Authority.
Proof-of-identity can consist of the exchange of information like government credentials (e.g., a driver's licence or passport), ownership of third-party certificates or industry accreditation. -Step 2: The Producer generates CTI and adds it to the blockchain for verification. -Step 3: The Verifier determines the credibility of the CTI based on a set of standards agreed upon by the network. -Step 4: CTI that is determined to be valid in Step 3 is added to the blockchain. -Step 5: Consumers access CTI that has been added to the blockchain. The simplified sharing model presented in Fig. 2 demonstrates how blockchain can be used to facilitate CTI sharing at a basic level. Moreover, when the properties of blockchain discussed in Section 2.1 are considered in the context of CTI sharing, the advantages that blockchain-based sharing models have over traditional centralised approaches can be highlighted. Challenges Traditional CTI sharing frameworks (e.g., MISP, OpenCTI and ISACs) have a wide range of challenges that are documented in the literature [10] [21]. In this section, we focus on a subset of these general CTI sharing challenges (cf. Fig. 3). Producer Consumer Imbalance Stakeholders who participate in CTI sharing as either a producer or consumer (cf. Section 3) must consider the risks and benefits associated with doing so. In the case where a producer shares intelligence, a number of reputational and/or monetary risks are prevalent. For example, sharing intelligence that indicates an organisation has been the victim of a ransomware attack could cause stock prices to fall or new customers to choose a competitor. Some of the potential risks are listed below [10] [21]: -Consumer Distrust: Potential consumers might feel that a reported cyber incident means that the organisation is vulnerable. As a result, existing customers may decide to use the services of a competitor that has not reported an incident. -Competitor Advantage: Competitors become aware of potential vulnerabilities that might affect them without being directly affected themselves. This allows them to implement mitigation strategies for the same vulnerability at a reduced resource cost. -Revealing Trade Secrets: Information about the hardware, software or services an organisation uses might be revealed. Apart from being able to consume CTI themselves, producers do not gain any direct benefits from sharing. As a consequence, without the implementation of a reward-based system as part of a sharing platform, the process of sharing CTI can be considered a common-good service. On the other hand, consumers assume almost no risk when consuming CTI. Even in the case where the consumption of specific CTI is attributable to an organisation, this action alone is not likely to result in the same reputational or monetary consequences associated with sharing. Given that organisations that consume CTI can implement mitigation strategies against vulnerabilities before they can be acted on, we propose that the following benefits could be gained: -Increased Service Quality: Increased service uptime provides existing customers with better service quality. This could result in a higher customer retention rate. As a result of providing existing customers with a more consistent service, an organisation might gain a reputation for providing services with low downtime.
-Reduced Negative Publicity: In the case where an organisation successfully implements a mitigation strategy to fix a shared vulnerability, the potential for negative publicity due to a successful attack is removed. -New Customers: In the case where an organisation has suffered from a number of cyber incidents (e.g., DDoS attacks, privacy leaks), it could be predicted that dissatisfied customers would seek an alternative service. Moreover, if a competing organisation provides an analogous service and has not suffered from these same incidents due to the consumption of CTI, it could be predicted that this organisation would gain additional customers. From the above discussion it is clear that the risks and benefits associated with the producer and consumer roles are not equal. This inequity consequently creates an imbalance. If this imbalance is not addressed as part of a sharing platform's design, organisations can be observed to exhibit free-riding behaviour [10]. In this case, free-riding behaviour can be defined as a deliberate lack of participation by organisations who could share valuable CTI but choose not to. If a large portion of organisations deliberately behave in this way, the productivity of a sharing platform is affected in two major ways [22]: 1. Not sharing removes the ability of other organisations to mitigate against the same incident. When CTI is shared, it is possible for other organisations to put in place mitigation strategies (e.g., firewall rules) to ensure they are not susceptible to attacks which have a similar profile or share common characteristics. In the case of free-riding, this is not possible. 2. Non-free-riding organisations might stop or reduce the amount of intelligence they share due to a lack of consumable CTI from others. As noted above, producers assume a number of risks when they participate in sharing. However, when producers are part of a productive platform where a large volume of valuable intelligence is shared, the benefit gained by consuming others' intelligence outweighs this risk and makes sharing more attractive. Consequently, a large portion of free-riding organisations has the potential to impact the sharing behaviours of others. Legal and Regulatory Obligations Organisations that participate in sharing have to follow the legal and regulatory obligations associated with the jurisdiction they are from. Survey [10] highlights a number of legal and regulatory obligations that organisations in certain countries must meet. For example, in Germany Internet Protocol (IP) addresses are considered personal information and therefore any disclosure of CTI containing them must comply with German privacy laws [23]. However, in the UK IP addresses are not considered personal information and can therefore be freely shared. In terms of CTI sharing, IP addresses are likely to be shared as an IoC, and therefore organisations based in these different jurisdictions have to ensure they comply with the applicable laws. Moreover, countries like Belgium and Slovenia have mandatory sharing legislation [10]. This legislation requires organisations from these two countries to report any cyber incidents to a specific authority when they occur. If these organisations were also to participate in CTI sharing on top of this, in some cases the resources consumed to facilitate both of these independent sharing requirements could exceed those available. These examples highlight that while CTI sharing is theoretically ubiquitous across the world, legal and regulatory obligations can pose a significant barrier. Given that legal and regulatory obligations are significantly diverse across the world, sharing platforms must ensure CTI can be shared in a flexible manner.
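To make the flexibility requirement concrete, the sketch below shows one way a platform could apply jurisdiction-dependent handling before CTI leaves the producer. The per-country rules are hypothetical stand-ins loosely modelled on the German and UK examples above, not statements of law:

```python
import re

# Hypothetical per-jurisdiction rules: whether raw IP addresses may be
# shared. Illustrative only; real compliance logic would be far richer.
JURISDICTION_ALLOWS_RAW_IPS = {"UK": True, "DE": False}

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def prepare_for_sharing(report: str, jurisdiction: str) -> str:
    """Redact IPv4 addresses when the producer's jurisdiction treats
    them as personal information."""
    if JURISDICTION_ALLOWS_RAW_IPS.get(jurisdiction, False):
        return report
    return IPV4.sub("[REDACTED-IP]", report)

report = "Beaconing observed from 203.0.113.42 to known C2 infrastructure."
print(prepare_for_sharing(report, "DE"))
# Beaconing observed from [REDACTED-IP] to known C2 infrastructure.
```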
Data Validity Threat hunting is defined by [24] and [25] as the proactive approach of seeking anomalous or malicious activity within an organisation's cyber terrain. The process of performing this task, which if successful can result in the production of CTI, can be highly variable in nature. At the most basic level, threat hunting can simply consist of manual analysis of network or Windows logs. In contrast, [26] proposes a sophisticated threat hunting model which utilises machine learning to automatically generate threat intelligence based on data from a variety of sources. While these examples vary in their sophistication, they both share the common feature that the process of generating CTI is solely completed by the sharing organisation. As a result, it is possible for malicious organisations or individuals to intentionally generate and share false intelligence. We note that sharing false CTI could be exploited in several ways, either to gain additional attack surfaces or to bury real CTI amongst fake intelligence. Two examples are discussed below. Automatic Attack Feed Exploitation: Recent trends in CTI sharing have seen many notable developments towards automation, both in its generation (as discussed above) as well as in its consumption. For example, technologies such as Structured Threat Information Expression (STIX) and Trusted Automated eXchange of Intelligence Information (TAXII) have allowed many organisations to easily share and consume CTI in an automated way [10]. As CTI consumption becomes more automated, it could be feasible for threat actors to utilise this to create new attack surfaces. For example, intelligence structured using STIX can contain SNORT rules that consumers can automatically feed into their intrusion detection systems (IDSs) [21]. Given certain conditions, we theorise that it could be plausible for an attacker to construct seemingly legitimate intelligence that causes a consuming organisation's IDS to flag legitimate activity as malicious. This technique could be used in conjunction with an actual attack, to disguise malicious activity amongst legitimate traffic that is falsely flagged as suspicious. Denial-of-Intelligence: As sharing platforms become more and more effective at allowing organisations to mitigate against threats, they themselves could become targets. Denial-of-Service (DoS) attacks have been around since the origin of the internet yet still remain highly effective in the present day. The main goal of a DoS attack is to simply make a particular computing resource unavailable [27]. The most common way that these types of attacks are committed is by overwhelming a service with a large volume of bogus requests. We observe that a 'Denial-of-X' style attack could be constructed to target CTI sharing platforms specifically. In this case, threat actors could develop Denial-of-Intelligence (DoI) attacks. This type of attack would seek to overwhelm a platform with a large amount of bogus intelligence. By flooding a sharing platform with a large volume of fake intelligence, threat actors could exploit a common vulnerability across multiple targets. The result of this would mean that while valid intelligence detailing the attack could be shared by the initial victim, it would be buried amongst an overwhelmingly large volume of false information. Both of the above examples highlight that the ability to accurately determine the validity of shared CTI is a critical challenge that platforms must find novel ways to overcome. Moreover, these examples also indicate that as the process of sharing becomes more automated and widely used, data validity becomes more critical.
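As a rough illustration of how a consumer might defend against the automatic attack feed exploitation described above, the sketch below applies a simple sanity check before ingesting externally supplied detection rules. The rule format (bare IP targets) and the protected ranges are hypothetical simplifications; real SNORT rules are far richer:

```python
import ipaddress

# CIDR ranges for the consumer's own services; externally supplied rules
# should never blanket-flag traffic involving these (hypothetical policy).
PROTECTED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8"),
                      ipaddress.ip_network("192.0.2.0/24")]

def safe_to_ingest(rule_target_ip: str) -> bool:
    """Reject shared detection rules that would flag traffic involving
    our own protected infrastructure as malicious."""
    addr = ipaddress.ip_address(rule_target_ip)
    return not any(addr in net for net in PROTECTED_NETWORKS)

shared_rules = ["203.0.113.7", "192.0.2.15"]  # IPs flagged by shared CTI
accepted = [ip for ip in shared_rules if safe_to_ingest(ip)]
print(accepted)  # ['203.0.113.7']; the rule for 192.0.2.15 is held for review
```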
Intelligent Intelligence In Section 2.2, we discussed what CTI is and highlighted that it can be categorised into four main types: (i) strategic, (ii) operational, (iii) tactical, and (iv) technical. Each of these types of intelligence conveys a narrative about a threat, but does so in a different way, specific to its intended recipients. For example, Technical CTI is made up of data that describes the physical attributes of an observed attack (e.g., IP addresses, MAC addresses, malware hashes) and is intended to be consumed by technical staff [21]. It is important to understand that these types of intelligence are highly variable in their sophistication. In this case, sophistication refers not just to the quality of the CTI itself, but to how consumers are able to use it. Proposal [19] makes an important distinction between data, information, and intelligence that highlights this variability. They are as follows: -Data: Simple facts that can be made available in large volumes, such as IP addresses, logs and hashes. -Information: A collection of raw data that together shows suspicious activity. -Intelligence: The product of analysing data and drawing meaningful conclusions that can be used by security professionals to define an intelligence-led approach to decision making. If the above criteria are applied to the categories of CTI discussed in Section 2.2, tactical, operational and strategic CTI could be classed as intelligent intelligence. On the other hand, technical intelligence (e.g., IoCs) can only be classified as data/information, and consequently cannot directly inform decision making. As a result, intelligence types can be broadly divided into high-level intelligence (e.g., TTPs) and low-level intelligence (e.g., IoCs). Currently, the majority of exchanged CTI can be classified as low-level intelligence [28] [6] [29]. Survey [21] notes that over 250 million IoCs are shared cumulatively across CTI sharing platforms every day, and this figure has likely increased in recent years. From the outset, this trend of sharing large volumes of technical intelligence may appear positive. However, when framed from the perspective of a consuming organisation, the quantity of available intelligence becomes an interpretability challenge analogous to the needle-in-a-haystack problem.
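One pragmatic response to this haystack, sketched below, is consumer-side triage that ranks incoming low-level IoCs by overlap with the technologies an organisation actually runs. The feed schema and inventory are hypothetical, not a real platform's format:

```python
# Score incoming IoCs against a (hypothetical) inventory of technologies
# the organisation runs, so analysts see the most relevant items first.
OUR_STACK = {"nginx", "postgresql", "openssh"}

incoming_iocs = [
    {"id": "ioc-1", "hash": "ab12...", "affected": {"nginx"}},
    {"id": "ioc-2", "hash": "cd34...", "affected": {"iis", "exchange"}},
    {"id": "ioc-3", "hash": "ef56...", "affected": {"openssh", "nginx"}},
]

def relevance(ioc: dict) -> int:
    # Count how many of the IoC's affected technologies we actually run.
    return len(ioc["affected"] & OUR_STACK)

triaged = sorted(incoming_iocs, key=relevance, reverse=True)
for ioc in triaged:
    print(ioc["id"], relevance(ioc))
# ioc-3 2 / ioc-1 1 / ioc-2 0 -> ioc-2 can be deprioritised
```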
Privacy, Trust, and Accountability Privacy, trust and accountability are three factors that any CTI sharing platform must balance to facilitate an environment conducive to sharing and consuming CTI [10] [30] [31]. The relationship between each of these factors and CTI sharing is discussed below. Privacy can be defined as the ability or inability of a consuming organisation to associate some shared intelligence with the sharing organisation's real identity. The literature consistently highlights reputational damage as a significant barrier that stops organisations from participating in CTI sharing [21] [10] [28]. Given that reputational damage can result from sharing intelligence in an identifiable way, a degree of anonymity is required when sharing. Trust can be defined as a consumer's ability to trust the intelligence which they receive [29]. Consequently, a trust relationship between CTI producers and consumers is present in any sharing platform. In contrast to privacy, the parameters used to define the trust relationship between producers and consumers often require some link to the producer's real identity. By linking a producer's real identity, at some level, to the CTI they share, consumers have greater assurance that shared intelligence comes from an authoritative source [32]. Accountability can be defined as the ability of a sharing platform to provide governance over shared CTI. In this case, governance refers to a sharing platform's ability to hold users who participate in false sharing responsible. The ability to hold users accountable for their actions ensures that the integrity of shared intelligence can be maintained. Like trust, accountability is also dependent on being able to reveal a producer's real identity given that they have made a malicious contribution [33] [34]. From the above discussion, it can be hypothesised that privacy, trust, and accountability form a paradoxical relationship. Producers of intelligence want to be completely anonymous when sharing. However, it is the preference of CTI consumers to have proof that the intelligence they consume originates from a reputable source [35]. Moreover, the group of users who make up a sharing platform should have governance over the information shared, and consequently be able to hold users who share false information accountable. As a result, the way in which CTI sharing platforms manage privacy, trust, and accountability is an important challenge. Opportunities In this section, we discuss a list of opportunities (cf. Fig. 4) for blockchain-based CTI sharing. These opportunities aim to highlight how the characteristics of blockchain can be leveraged to provide novel solutions to the challenges discussed in Section 4. Incentivised Sharing To help alleviate the producer consumer imbalance discussed in Section 4.1, several incentive schemes can be implemented. In this section we will discuss two examples that illustrate how incentivised sharing can be achieved using blockchain. Concessions: Some blockchain-based sharing platforms, such as [20], use subscription fees to create permissioned sharing groups. Consequently, users are required to pay an authority a subscription fee to participate in, consume and/or share CTI for a given time period. To incentivise users to share CTI and not just consume it, concessions can be given to users who contribute intelligence. As a user's contributions are stored using blockchain (e.g., in a Smart Contract or directly on-chain), an auditable and immutable record of these transactions is maintained. This record can thereafter be used by an authority to determine a user's subscription fee once their previous subscription has expired. In the case where a user's record shows that they have shared valuable intelligence, the price of their next subscription can be lowered to incentivise them to continue making valuable contributions going forward. An example of a sharing model that implements concession-based incentives is [20]. In this model, the authors use subscription discounts to reward CTI producers for their contributions. As part of their implementation, proposal [20] provides a CTI producer with a discount each time they share intelligence that is considered high-quality by a set of verifiers. This design therefore allows users who continually share high-quality CTI to significantly reduce their subscription fees. To achieve this, CTI sharing is completed using the following steps: 1. The CTI producer adds CTI to the blockchain. 2. Three random verifiers are selected from a trusted group. 3. The verifiers rate the CTI's quality using pre-determined metrics. 4. If the majority of verifiers rate the given CTI as high-quality, then both the producer and the verifiers are given a discount on their next subscription. 5. If the majority of verifiers rate the given CTI as low-quality, then only the verifiers are given a discount on their next subscription.
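The bookkeeping behind these steps can be sketched in a few lines. The discount amounts below are hypothetical, and a real deployment would run this logic on-chain rather than in ordinary application code:

```python
from collections import defaultdict

DISCOUNT_PER_ACCEPTED_SHARE = 0.05   # hypothetical 5% producer concession
VERIFIER_REWARD = 0.02               # hypothetical verifier concession

discounts: dict = defaultdict(float)  # accumulated subscription discounts

def record_share(producer: str, verifiers: list, votes: list) -> bool:
    """Apply the concession rules sketched above: verifiers are always
    rewarded; the producer only earns a discount if a majority of the
    three verifiers rate the CTI as high-quality."""
    assert len(verifiers) == len(votes) == 3
    for v in verifiers:
        discounts[v] += VERIFIER_REWARD
    accepted = sum(votes) >= 2          # votes are True for "high-quality"
    if accepted:
        discounts[producer] += DISCOUNT_PER_ACCEPTED_SHARE
    return accepted

record_share("org-A", ["v1", "v2", "v3"], [True, True, False])
record_share("org-A", ["v1", "v2", "v3"], [False, False, True])
print(dict(discounts))
# {'v1': 0.04, 'v2': 0.04, 'v3': 0.04, 'org-A': 0.05}
```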
Fees: Another example of how blockchain can be used to combat free-riding behaviour within CTI sharing is consumption fees. Unlike subscription concessions, consumption fees require consumers to pay producers to access CTI that they have shared [36] [37]. In essence, consumption fees aim to create a marketplace where CTI can be exchanged between organisations for currency. Due to the trustless properties of blockchain, CTI can be exchanged between two organisations without the need to pre-establish trust or use a third party. Instead, self-managed Smart Contracts can be used to facilitate the exchange of CTI and cryptocurrency between two organisations. By creating a blockchain-based CTI marketplace, producers who actively share valuable CTI have the ability to profit significantly from doing so. Two main approaches can be used to implement consumption fees within blockchain-based architectures. 1. Standard fee: A predefined fee is paid to producers when other organisations consume their CTI [36]. 2. User-defined fee: Producers specify a consumption fee which is paid each time a user accesses their CTI [37]. This can be implemented as a producer-defined parameter in a Smart Contract. Decentralised incEntives for threAt inteLligEnce Reporting and exchange (DEALER) is an example of a blockchain-based CTI sharing platform that implements a user-defined consumption fee to incentivise CTI sharing [37]. The below steps summarise how DEALER implements user-defined consumption fees. 1. The CTI producer adds CTI to the blockchain. During this process, the producer can define a sale price. If a producer does specify a sale price, they also have to pay a verification fee. 2. In the case where a sale price is specified, three trusted verifiers review the associated intelligence using pre-determined metrics. 3. The results of each verifier's analysis are added to the blockchain to indicate to buyers the quality of the given intelligence. Moreover, each verifier is given a proportion of the verification fee. 4. When a buyer purchases some intelligence, they are required to pay the associated consumption fee, if specified by the producer. As discussed in Section 4.1, an imbalance between the producer and consumer roles exists within CTI sharing. Consequently, it is critical that sharing platforms seek to address this imbalance by providing producers with more direct benefits. In this section we highlighted a number of ways in which blockchain-based sharing platforms can implement different incentive schemes to combat the effects of the producer consumer imbalance. Deposits In Section 4.3, the issue of false sharing was discussed.
To disincentivise CTI producers from participating in this behaviour, negative financial punishments can be used. In the case of blockchain-based platforms, existing technologies that support the exchange of cryptocurrency can be utilised for this purpose (e.g., Ethereum). Moreover, many of these platforms also allow self-managed Smart Contracts to exchange cryptocurrency autonomously, thus removing the need for a trusted third party [38]. As a result, Smart Contracts can be utilised to implement conditionally refundable deposits in a trustless, auditable and verifiable manner. Conditionally refundable deposits can be utilised by blockchain-based CTI sharing platforms to introduce negative financial punishments for CTI producers that participate in false sharing. In this case, when a producer shares some intelligence they could be required to pay a deposit, some amount of cryptocurrency, to a Smart Contract. Once paid, a consensus algorithm defined within the Smart Contract can be used to verify the integrity of the shared intelligence [14]. Given that this verification process occurs on-chain, the results are immutable and transparent to both the original producer as well as future consumers. Furthermore, the autonomous and deterministic nature of Smart Contracts allows them to hold cryptocurrency in escrow without the need for pre-established trust. In the case where shared intelligence is found to be credible, the initial deposit can be paid back either in full or in part to the original producer automatically by the Smart Contract. On the other hand, when false sharing is found to have occurred, this deposit can either be held by the Smart Contract, burned, or distributed to users involved in the verification process [38]. By punishing users who participate in false sharing, persistent efforts to do so on a large scale are deterred due to the associated financial cost. BLOCIS: In [14], the authors use conditionally refundable deposits to disincentivise stakeholders from deliberately sharing false/incorrect CTI. When a registered BLOCIS user shares CTI, they use a pre-defined Data Report Contract (DRC). This contract takes as input the given CTI as well as a deposit. Once added to a specific feed, an evaluation function (π) is used to assess the validity of the reported intelligence. The novelty that BLOCIS proposes is that π takes as input both the reputational score of the producer as well as their deposit. If the output of π indicates the given CTI is false, the deposit is not refunded to the producer. When simulated in a test environment, the BLOCIS model was found to successfully disincentivise users who made malicious contributions. Fig. 5 in [14] demonstrates both the financial and reputational damage that users who participated in false sharing suffered over an extended period of time. Considerations: While deposit-based disincentive schemes are focused on punishing malicious producers, considerations must also be made to ensure honest producers are not deterred from sharing. Although the self-managed nature of Smart Contracts can provide producers with a trustless way to exchange cryptocurrency, factors such as the amount of currency and the consensus used to determine if a contribution is false must be considered. For example, if producers are required to pay a constant amount of cryptocurrency, the extremely volatile nature of currency markets could cause producers not to share at particular times [39]. Moreover, if consensus methods are dependent on validation of intelligence by a set of validators, then the validators themselves could become subject to malicious attacks. Given that cryptocurrency is at stake, we argue that attackers could seek to compromise a subset of validators to deny the authentication of any intelligence. Lastly, if validators are directly incentivised through partial payment of deposits from intelligence deemed malicious, then validators might be more likely to classify honest contributions as malicious. All of these factors need to be considered carefully when designing a deposit-based disincentive scheme, as they have the potential to affect honest producers as well. Aside from considerations related to how deposit-based Smart Contracts are designed, the method used to validate intelligence is another important factor. Fundamental to the success of conditionally refundable deposits is the ability of a verifier or group of verifiers to determine the credibility of CTI. However, a method that deterministically classifies CTI as false is currently considered an open challenge [14]. Consequently, platforms that implement disincentive schemes are likely to encounter cases where CTI is wrongly considered malicious and an honest producer is punished.
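The escrow logic at the heart of a conditionally refundable deposit can be sketched as a small state machine. In practice this would live in a Smart Contract (e.g., Solidity on Ethereum); the Python below is a toy stand-in with a simple majority vote in place of a real consensus algorithm:

```python
from enum import Enum

class Status(Enum):
    PENDING = 1
    REFUNDED = 2
    FORFEITED = 3

class DepositEscrow:
    """Toy stand-in for a conditionally refundable deposit contract:
    the deposit is held until verifiers vote on the CTI's credibility."""

    def __init__(self, producer: str, deposit: float):
        self.producer = producer
        self.deposit = deposit
        self.status = Status.PENDING

    def settle(self, verifier_votes: list) -> Status:
        if self.status is not Status.PENDING:
            raise RuntimeError("already settled")
        credible = sum(verifier_votes) > len(verifier_votes) / 2
        if credible:
            self.status = Status.REFUNDED    # deposit returned to producer
        else:
            self.status = Status.FORFEITED   # deposit held/burned/distributed
        return self.status

escrow = DepositEscrow("org-A", deposit=1.0)
print(escrow.settle([True, False, True]))    # Status.REFUNDED
```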
Reputational System Another way blockchain-based solutions can mitigate against false sharing is the use of reputational systems. Unlike deposits, reputational systems do not punish malicious users monetarily. Instead, they associate each user with a reputation score (e.g., 1-100) that represents their perceived trustworthiness. In the context of CTI sharing, a user's reputational score can be used to directly affect their ability to consume and/or share intelligence within a group [40] [14]. For example, if a CTI producer shares some CTI, their associated reputational score could be used to indicate to validators and/or consumers the level to which they should trust it [14]. As a result, intelligence shared by a user with a relatively low reputational score might be subject to more thorough inspection by validators. In the opposite case, users who have a relatively high reputational score may be subject to less thorough inspection by validators. Furthermore, these highly trusted users might be able to consume more sensitive intelligence that might otherwise have been unavailable to them. A successful reputational system has the potential to stop a user or group of users from continually false sharing [14]. Given that a user's reputational score is tightly coupled with their perceived trustworthiness, efforts to continually false share can be predicted to become harder over time. Proof-of-Reputation (PoR) is a blockchain-based consensus algorithm that was proposed by [40] specifically for CTI sharing. In their model, each node in the network has an associated reputational score between 1 and 100. Fundamentally, this score seeks to capture how trustworthy a user is based on the credibility of their previous contributions. Importantly, all of the actions taken by a node (e.g., voting, sharing CTI) influence its reputational score over time. When an organisation shares CTI, other nodes on the network calculate a reputation value which is used to judge if it should be added to the blockchain. The results of this reputation-based consensus are used alongside more traditional validation methods to try and mitigate against false sharing. Moreover, a contributing node's reputational score is adjusted over time based on the results of this process. Critical to the integrity of this process is a predefined trust threshold. This trust threshold defines the point at which a node is considered trustworthy. As a result, if a node's score drops below this threshold, it is considered untrustworthy and cannot participate further. The above PoR consensus algorithm exemplifies how the inherent properties of blockchain discussed in Section 2.1 can be utilised to facilitate reputational systems without the need for a trusted third party. In particular, the immutable, transparent and auditable properties of blockchain allow each node to calculate the reputational scores of others, thus removing the need for a centralised authority. Similar to [40], the BLOCIS architecture proposed by [14] manages reputational scores with self-managed Smart Contracts. Like deposits, these Smart Contracts contain a predefined consensus algorithm that can be used to manage the reputational scores of each user over time in a trustless way. As mentioned in Section 5.2, the ability to deterministically validate CTI is still an open challenge. Given that reputational systems require a verifier or group of verifiers to determine the credibility of CTI, their success is dependent on the accuracy of the validation method used.
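A toy version of such a score-and-threshold scheme is sketched below. The update constants and threshold are hypothetical, chosen only to show how repeated false sharing drives a node out of the network faster than honest contributions can repair its score:

```python
TRUST_THRESHOLD = 40          # hypothetical cut-off (scores run 1-100)
REWARD, PENALTY = 5, 15       # hypothetical adjustments per contribution

def update_reputation(score: int, contribution_valid: bool) -> int:
    """Toy reputation update: valid contributions slowly raise a node's
    score; contributions judged false lower it more sharply."""
    score = score + REWARD if contribution_valid else score - PENALTY
    return max(1, min(100, score))

def can_participate(score: int) -> bool:
    return score >= TRUST_THRESHOLD

score = 50
for outcome in [False, False, True]:       # two false shares, one valid
    score = update_reputation(score, outcome)
print(score, can_participate(score))       # 25 False -> node is excluded
```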
Access Control Blockchain-based sharing platforms can use several methods to provide producers with control over who has access to the intelligence they share. Access control, in this case, refers not just to the ability of CTI producers to control who has access to their intelligence, but also in what way [41] [42] [43]. For example, a particular producer might want to share CTI with a small trusted group; however, they may only want to disclose the specific attribute values (e.g., IP addresses) associated with it to one of the organisations. While access control can be implemented by a centralised architecture, blockchain is able to facilitate the fine-grained access control required for CTI sharing in a trustless way. The following list outlines how a number of the key properties of blockchain can be leveraged to provide access control in a trustless way [44]. -Decentralised: As a single authority does not control access based on a producer's request, greater integrity is achieved. This means that producers have greater confidence that the control policy they define will be followed, given that its execution is not dependent on a centralised system. -Immutability: CTI producers can be confident that the access control parameters they define cannot be altered by another user for their benefit. -Smart Contracts: Provide a framework that allows stakeholders to define the access control for the intelligence they share. Moreover, the self-managed nature of Smart Contracts ensures that these access control policies are executed autonomously. Traffic Light Protocol (TLP) is an example of an access control method that can be implemented as part of a blockchain-based sharing platform [20]. TLP defines a robust access control structure that gives producers the ability to specify who CTI is shared with. This is achieved by allowing producers to specify a sharing level from a predefined list. Each of these predefined sharing levels is simply a control policy that specifies which users can access the CTI. Table 1 is an example of how a TLP policy could be structured. Table 1. Example TLP policy structure: -Green: Disclosure to an entire group of stakeholders. In the case of a private blockchain, this is restricted to anyone who has access to it. -White: Public disclosure, which is accessible to anyone. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is another method that can be used to provide producers with fine-grained access control [45].
In the case of CP-ABE, when a producer shares CTI, they encrypt it using attribute-based encryption methods [46]. The ciphertext that results from this process is then added to the blockchain. When a user accesses this ciphertext, they are able to decrypt parts of it based on their own attributes. As a result, users can define highly specific fine-grained access control policies using CP-ABE. For example, a CP-ABE policy might require that an organisation is an ICS-ISAC member to view a subset of the CTI. Furthermore, it might also specify that only a specific subset of these organisations can access the specific details of the hardware affected by a ransomware attack. This example demonstrates how CP-ABE can be used to construct fine-grained access control policies specific to a producer's needs. Both TLP and CP-ABE are examples of access control methods that can be implemented using blockchain. Importantly, these methods provide CTI producers with better control over who consumes the intelligence they share in a trustless way. In Section 4.5, the issue of privacy was discussed. During this discussion, it was highlighted that fear of reputational damage was a significant barrier that stopped some organisations from sharing. While greater access control does not provide a complete solution to this problem, we argue it has the potential to encourage more organisations to share within closed groups, given their privacy-preserving nature. Moreover, if key regulatory bodies are incorporated into sharing platforms, these frameworks can further help organisations meet their legal and regulatory obligations without having to use secondary sharing mechanisms [20].
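A minimal sketch of a TLP-style check, mirroring the two levels recovered in Table 1, is shown below. The group name and membership data are hypothetical, and a deployed platform would evaluate such policies inside a Smart Contract rather than off-chain:

```python
# Toy access check mirroring the TLP structure in Table 1: each sharing
# level maps to a predicate over the requesting user. Membership data
# is hypothetical.
GROUP_MEMBERS = {"ics-isac": {"org-A", "org-B"}}

TLP_POLICIES = {
    "green": lambda user, group: user in GROUP_MEMBERS.get(group, set()),
    "white": lambda user, group: True,   # public disclosure
}

def can_access(user: str, tlp_level: str, group: str = "ics-isac") -> bool:
    policy = TLP_POLICIES.get(tlp_level)
    return policy(user, group) if policy else False

print(can_access("org-A", "green"))  # True  (group member)
print(can_access("org-Z", "green"))  # False (not in the group)
print(can_access("org-Z", "white"))  # True  (public)
```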
Intelligence Mining In Section 4.4, it is noted that not all types of CTI are equivalent in their ability to describe threats and consequently to be used to implement mitigation strategies against them. Given that the process of generating CTI is dependent on the capabilities of the sharing organisation, it cannot be expected that all organisations are capable of generating high-level intelligence. As a result, strategies to create high-level intelligence from aggregated sources of low-level intelligence have the potential to shift sharing towards more intelligent intelligence. Furthermore, this process also allows organisations which do not have the resources to generate high-level intelligence themselves to still contribute. Intelligence mining can be defined as the process of deriving high-level intelligence from low-level intelligence already stored on the blockchain [21]. The immutable and auditable properties of blockchain are able to facilitate mining in a trustless way. Given that the low-level intelligence used as part of the mining process is immutable and accessible by each organisation on the network, high-level intelligence that is derived from it can be validated by other organisations. As a result, the ability to mine high-level intelligence in a trustless way has the potential to allow blockchain-based CTI sharing platforms to provide participating organisations with more advanced threat mitigation. Proposal [36] provides an example of how STIX, the Semantic Web Rule Language (SWRL) and the Web Ontology Language (OWL) can be combined to create more meaningful and interpretable representations of CTI. The use of these tools together has great potential in the area of intelligence mining, as CTI represented in this way allows semantic reasoners to infer new knowledge [36]. Furthermore, extending traditional representations of CTI could also pave the way for Machine Learning (ML)/Artificial Intelligence (AI) approaches to intelligence mining. In [47], it was demonstrated that ML algorithms were able to generate CTI from a single organisation's network logs stored using blockchain. Therefore, it could be possible to extend this approach further to generate more intelligent intelligence from large amounts of aggregated CTI expressed using STIX, SWRL and OWL.
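A very naive form of intelligence mining can be sketched without any semantic reasoning at all: grouping low-level IoCs that share command-and-control infrastructure into campaign-level summaries. The feed schema below is hypothetical, and real mining would operate over CTI expressed in STIX/SWRL/OWL as discussed above:

```python
from collections import defaultdict

# Group low-level IoCs that share C2 infrastructure into a single
# campaign-level summary. The feed schema is hypothetical.
iocs = [
    {"hash": "aa11", "c2": "198.51.100.5", "family": "LockerBot"},
    {"hash": "bb22", "c2": "198.51.100.5", "family": "LockerBot"},
    {"hash": "cc33", "c2": "203.0.113.9",  "family": "StealerX"},
]

campaigns = defaultdict(list)
for ioc in iocs:
    campaigns[ioc["c2"]].append(ioc)

for c2, members in campaigns.items():
    families = {m["family"] for m in members}
    print(f"campaign via {c2}: {len(members)} samples, families={families}")
# campaign via 198.51.100.5: 2 samples, families={'LockerBot'}
# campaign via 203.0.113.9: 1 samples, families={'StealerX'}
```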
Related Work and Discussion In this section, we present some related works on CTI sharing and the integration of blockchain platforms for CTI sharing, and provide a discussion of the findings of this chapter. Several studies discuss the importance of CTI sharing in information security and general computing systems [10] [21] [6] [48]. However, most of them discuss CTI sharing through the lens of traditional centralised computing approaches. Consequently, few publications considering how blockchain-based approaches can overcome existing challenges are present in the literature. In this section, we aim to discuss the contributions of several publications that outline challenges associated with CTI sharing. Proposal [10] provides a comprehensive insight into what CTI sharing is and how it is commonly performed. Furthermore, it also discusses a number of important CTI sharing concepts, including what CTI is, how it can be shared, and most notably what benefits and risks are associated with sharing. Of particular note, the authors highlight the importance of privacy and anonymity in CTI sharing. However, unlike [10], in this chapter we extended these ideas to consider the relationship between privacy, trust, and accountability. In [21], a survey on technical threat intelligence was conducted. Like [10], [21] provides a good insight into the key concepts which define CTI sharing. This paper specifically seeks to provide a clear definition of what threat intelligence is and what some of the associated challenges in this space are. An important challenge highlighted by [21] is intelligent intelligence. Moreover, their suggestion that big data analysis could be applied to threat intelligence was extended by our work to focus on how these concepts can be applied to blockchain specifically. Proposal [29] provides a comprehensive study of the current challenges associated with CTI sharing platforms (e.g., MISP). As part of their research, the authors investigated twenty-two sharing platforms and derived a list of eight key findings, a number of which are discussed in Section 4. While their research was mostly focused on centralised architectures, their insights into existing challenges allowed us to highlight how blockchain-based architectures can provide novel solutions to them. In [28], the authors perform a comprehensive literature review of the current use of CTI. As part of their findings, they outline four main challenges, of which three were discussed in this chapter. However, unlike our approach, this research does not explore how blockchain-based solutions can address these challenges. Recently, a diverse range of blockchain-based CTI sharing models have been published. In this chapter, we discussed a number of novel features present within a subset of these models which we feel represent the current state-of-the-art. We argue that [36] currently presents the most comprehensive blockchain-based CTI sharing platform, as it addresses a number of the challenges presented in this chapter. As part of their model, the authors integrate a number of features which address the producer consumer imbalance, intelligent intelligence, and legal and regulatory factors. However, it must be noted that while this model does provide trust and accountability, this is achieved at the cost of privacy-preserving anonymity. DEALER is a blockchain-based CTI sharing platform presented by [37] which, like [36], presents novel solutions to a number of the challenges discussed in this chapter. The DEALER proposal provides solutions to the producer consumer imbalance and legal and regulatory factors. Moreover, this proposal also integrates a quality assurance method which provides a heuristic approach to solving the challenge of data validity. It must be noted, however, that while a heuristic approach to the issue of data validity has the potential to be effective, it does not completely mitigate against false sharing. Few models present in the current literature provide a robust framework that balances privacy, trust, and accountability, as defined in Section 4.5. We argue that [30] presents the most comprehensive approach to balancing these factors. The authors of this platform propose a framework which allows CTI producers to share intelligence semi-anonymously while still facilitating trust and accountability. However, the major limitation of this framework is that a single trusted authority has the ability to reveal the identity of any CTI producer, consequently creating a single point of failure. We find there are various challenges in CTI sharing, and blockchain is a promising solution in most cases. However, there is still a list of open research questions that need to be resolved. We list a few of them as follows: -How can the properties of blockchain and other cryptographic constructs be used to create a blockchain-based CTI sharing model that provides a balance between privacy, trust, and accountability? -How can shared CTI be deterministically validated to ensure false sharing is not possible? -How can ML/AI be utilised alongside current approaches (e.g., STIX, SWRL, OWL) to facilitate the sharing of more intelligent intelligence? Conclusion The drastic evolution of the threat landscape, brought about by the emergence of Internet of Things (IoT) technology, has caused organisations to find new ways to better manage their cyber risks. This appetite for tools that better mitigate against potential threats has driven the development of a number of Cyber Threat Intelligence (CTI) sharing platforms (e.g., MISP). In this chapter, we defined a number of general CTI sharing challenges, including the producer consumer imbalance, legal and regulatory factors, intelligent intelligence, data validity, and privacy, trust and accountability. These general CTI sharing challenges were then used to deliver a list of opportunities present within the blockchain-based space. These opportunities included deposits, access control, reputational systems, intelligence mining and incentivised sharing. Finally, we explored several existing proposals and determined a list of unique future research questions for efficient and secure CTI sharing using blockchain.
Determination of overnutrition using mid-upper arm circumference in comparison with bioelectrical impedance analysis in children and adolescents in Benin, Nigeria Purpose – The prevalence of overweight and obesity in children and adolescents is on the increase in developing countries. Therefore, a cheap, accessible and simple screening tool such as the mid-upper arm circumference (MUAC) is required for prompt assessment. The purpose of this paper is to determine the usefulness of MUAC in assessing overnutrition in comparison with bioelectrical impedance analysis (BIA). Design/methodology/approach – Participants included 1,067 children aged 6–18 years recruited from private and public schools in Egor Local Government Area in Benin City, Nigeria. Body fat was estimated by BIA using a Tanita scale, whereas the MUAC was measured with a non-elastic tape. Receiver operating characteristic analysis was used to test the ability of MUAC to identify children and adolescents classified as overweight and obese by BIA. Findings – The prevalence of overnutrition by MUAC (12.4 percent – overweight 6.0 percent and obesity 6.4 percent) was comparable to that by BIA (12.3 percent – overweight 5.4 percent, obesity 6.9 percent). There was a significant correlation between MUAC and body fat percentage, fat mass, fat mass index and fat-free mass index in both males and females (p = 0.000). Research limitations/implications – This study, in contrast to most other studies on the use of MUAC in the assessment of overnutrition, has the advantage of using BIA cut-off values rather than body mass index, which does not assess body fat composition. BIA is, however, not the gold standard for the measurement of body fat composition. The optimal MUAC cut-off values of this study may not be representative of the entire country because of its restriction to Benin. Similar studies from different parts of Nigeria will be required to validate these smoothed MUAC percentiles for use in the screening of children and adolescents for overnutrition. Originality/value – MUAC compares well with BIA in this study and can be a useful, alternative and practical screening tool for assessing obesity in resource-poor settings. Introduction Overnutrition, which consists of overweight and obesity, constitutes one of the major contributors to the global burden of non-communicable diseases. The World Health Organization has estimated that non-communicable diseases will become the principal cause of morbidity and mortality within the next few years [1]. The burden of overweight and obesity is increasing globally, due largely to changes in dietary patterns and lifestyle. More than 1.9bn adults, aged 18 years and older, were overweight, whereas 600mn were obese globally in 2014. About 42mn children less than five years of age were either overweight or obese in the same year [2]. The high burden of undernutrition in developing countries coexists with an incidence of overnutrition that is estimated to be rising about 30 percent faster than in richer countries. The prevalence of overweight and obesity in preschool-aged children was 13.7 and 5.25 percent, respectively [3]; in school-aged children it was 7.7 and 3.1 percent, respectively [4]; and in adolescents it was 1.98 and 0.84 percent, respectively [5]. A 30-year systematic review of studies on obesity and overweight in children and adolescents in Nigeria showed a prevalence of 1.0-8.6 percent for overweight and 0.0-2.8 percent for obesity.
Studies of children and adolescents combined showed a prevalence of 5.0-12.0 percent for overweight and 0.0-5.8 percent for obesity [6]. The rising trend in obesity in developing countries has been attributed to reduced physical activity and a nutritional transition from traditional fiber-rich diets to calorie-dense, nutrient-poor foods [7]. Greater affluence, with more technology such as televisions in homes and the ability to purchase fast foods, has been linked to overweight [7]. Overweight and obesity in school-aged children in Nigeria were significantly associated with high socioeconomic status (SES), attendance at private schools (mainly affordable to those of high SES), female gender and the presence of a television in the children's room [8]. Obesity is associated with complications that can occur during childhood and adolescence and persist into adulthood. These complications include hypertension, with its attendant risk of long-term cardiovascular disease and early death. Studies among school-aged children in Nigeria showed a significantly higher proportion of blood pressure readings in the pre-hypertension and hypertension ranges among obese children [4,9]. Early detection of overnutrition and the institution of measures to prevent complications are of utmost importance. Body mass index (BMI), an indirect method of assessing body fat, is an acceptable and widely used technique globally. Other indirect methods include mid-upper arm circumference (MUAC), skinfold thickness and waist circumference. The direct methods include bioelectrical impedance analysis (BIA), isotope dilution (hydrometry), dual-energy X-ray absorptiometry and magnetic resonance imaging. BIA, in contrast to BMI and other indirect methods, can measure body fat percentage, from which fat mass and fat-free mass can be derived, and has been validated for use in Nigerian children and adolescents [10]. The MUAC, a simpler indirect method than BMI, is a well-known tool for the assessment of undernutrition in children under the age of 5 but has also been reported to be a useful, alternative and practical screening tool for obesity [11]. Jaiswal et al. in India reported MUAC to be highly accurate in identifying obesity in children aged 5-14 years [11]. Chomtho et al. [12] reported that MUAC correlated more strongly with fat mass than with fat-free mass in children aged 4.4-13.9 years. There is a paucity of data in Nigeria on the use of MUAC in the assessment of overnutrition. This study aimed to determine the usefulness of MUAC in the assessment of overnutrition in comparison with BIA among children aged 6-18 years in Egor Local Government Area (LGA) of Edo State in Nigeria. Methods This is a cross-sectional study carried out among apparently healthy children aged 6-18 years attending primary and secondary schools in Egor LGA, Benin City, Edo State, Nigeria. There are 37 public primary schools, 13 public secondary schools and 143 approved private nursery, primary and secondary schools within the LGA. The study was conducted between October and December 2017. The sample size was calculated using the single-proportion formula n = (Z1−α/2)² p(1−p)/d² [13], where n is the minimum sample size, Z1−α/2 is the confidence interval constant at the 95 percent confidence level, p is the expected prevalence, and d is the margin of error.
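A quick sketch of this calculation is shown below. Since the prevalence and precision inputs are not reproduced in this excerpt, the values used are illustrative only:

```python
import math

def min_sample_size(z: float, p: float, d: float) -> int:
    """Standard single-proportion sample size: n = z^2 * p(1-p) / d^2."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# z for a 95 percent confidence level; p and d are illustrative values,
# not the study's actual inputs.
print(min_sample_size(z=1.96, p=0.5, d=0.03))  # 1068 with these inputs
```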
There are 27 private schools and 10 public schools in the three selected wards (Evbotubu, Use and Uselu). Three private schools and one public school, in keeping with the private-to-public school ratio of approximately 3:1 (143:50) in the LGA, were selected from each ward using the simple random sampling technique, giving a total of nine private and three public schools. The number of pupils to be sampled from each school was determined using the formula n = (a × b)/c, where a is the number of children aged 6-18 years in each school; b is the sample size of the study (1,067); and c is the total number of children aged 6-18 years in all the selected schools. The number of children to be sampled from each age group was calculated after obtaining their ages from the class register provided by the school head, using the analogous formula n1 = (a × b)/c, where n1 is the age sample size; a is the number in each age cohort; b is the sample size obtained for each selected school; and c is the population of children aged 6-18 years in that school.

Selection of subjects

After calculating the age sample size, an arm (class stream) was picked from the class containing the required age; one arm each was picked from Primary 1 and Primary 2, where the six-year-olds were found. The register for the selected arm was obtained and separate lists of male and female pupils were generated, following which a number was assigned to each pupil, written on a piece of paper and put in a bag. The required number of pupils was then randomly picked from the bags containing males and females until the sample size was obtained. The subjects were classified into four age groups to reflect pre-adolescence (6-9 years), early adolescence (10-12 years), mid-adolescence (13-15 years) and late adolescence (16-18 years) [14].

Data collection

A questionnaire, pretested in another school not selected for the study, was used to collect information on the sociodemographic characteristics of the subjects and their families and on the presence of any chronic disease. A general examination was performed on the subjects while the anthropometry was measured. A Seca stadiometer (model 214; Seca Corp, Hanover, MD, USA) was used to measure height to the nearest millimeter with the subjects standing erect and barefoot with both feet together; the heels, buttocks and upper back touched the scale. Weight was measured to the nearest 100 g with the Tanita body fat monitor/scale model SC-240, which displays body weight and body fat percentage. The pupils were weighed in their school uniforms, without cardigan or sweater and with all pockets emptied. The equipment self-calibrates after each measurement. The BIA was measured with the Tanita scale. Fat mass was calculated from the body fat percentage and body weight, and fat-free mass as body weight minus fat mass. The fat mass index and fat-free mass index were derived from the fat mass and fat-free mass, respectively, divided by the square of the height.
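These body-composition derivations reduce to simple arithmetic. The sketch below illustrates them with a hypothetical subject; the function name and input values are illustrative, not from the study data.

```python
# A minimal sketch of the body-composition derivations described above.
# The input values are hypothetical and do not come from the study data.
def body_composition(weight_kg, height_cm, body_fat_pct):
    height_m = height_cm / 100.0
    fat_mass = weight_kg * body_fat_pct / 100.0           # kg, from BIA body fat %
    fat_free_mass = weight_kg - fat_mass                  # kg
    fat_mass_index = fat_mass / height_m ** 2             # kg/m^2
    fat_free_mass_index = fat_free_mass / height_m ** 2   # kg/m^2
    return fat_mass, fat_free_mass, fat_mass_index, fat_free_mass_index

# Hypothetical subject: 40.5 kg, 148 cm tall, 20 percent body fat
fm, ffm, fmi, ffmi = body_composition(40.5, 148.0, 20.0)
print(f"FM={fm:.1f} kg, FFM={ffm:.1f} kg, FMI={fmi:.2f}, FFMI={ffmi:.2f}")
```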
The MUAC was measured midway between the olecranon and the acromion process using a non-elastic measuring tape (Chasmors, London) to the nearest millimeter. The socioeconomic class was determined from the educational status and occupation of the parents as described by Oyedeji [15].

Ethics

Ethics approval was obtained from the Ethics and Research Committee of the University of Benin Teaching Hospital (ADM/E22/A/VOL.VII/1348). Written permission was obtained from the Education Authority of Egor LGA, written informed consent was given by the parents/guardians, and verbal permission was given by the school heads.

Data analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 21.0 (SPSS for Windows Inc., Chicago, IL, USA). Means, standard deviations and standard errors of the mean were calculated for quantitative variables such as the BIA and MUAC measurements, and the independent-samples t-test was used for the comparison of means. Receiver operating characteristic (ROC) analysis was used to test the ability of MUAC to identify children and adolescents classified as overweight or obese by BIA. Area under the curve (AUC) values were categorized as follows: 0.9-1 (excellent), 0.8-0.9 (good), 0.7-0.8 (fair), 0.6-0.7 (poor) and 0.5-0.6 (fail); a test with an AUC ⩾ 0.85 is considered an accurate test [16]. Sensitivity and specificity of MUAC were calculated at all possible cut-off points to find the optimal cut-off values, and predictive values of MUAC were obtained using BIA as the standard. The McCarthy reference, which defines overfat/overweight as greater than or equal to the 85th percentile of body fat and obesity as greater than or equal to the 95th percentile of body fat, was used [17]. The χ² test was used to examine the association between overnutrition and gender, with significance set at p < 0.05 and a confidence level of 95.0 percent.

Results

A total of 1,067 subjects, consisting of 538 (50.4 percent) males and 529 (49.6 percent) females (M:F ratio 1:1), were recruited for the study. In all, 335 (31.4 percent) of the subjects were recruited from public schools, whereas 732 (68.6 percent) were from private schools. The pre-adolescent age group (6-9 years) had the highest representation (30.8 percent), whereas the early adolescent (10-12 years) and late adolescent (16-18 years) age groups had the lowest, and equal, proportions. Most (42.2 percent) of the subjects belonged to the middle socioeconomic class (SEC), whereas the lower SEC had the smallest proportion (24.2 percent). Of the 358 subjects in the upper SEC, 83.8 percent were in private schools and 16.2 percent attended public schools. There was no significant difference, as depicted in Table I, between the mean age of the male subjects (12.00 ± 3.77 years) and the female subjects (11.99 ± 3.72 years, p = 0.941).

The mean MUAC, percent body fat, fat mass and fat mass index were higher in females than in males, and these differences were statistically significant (p < 0.0001). There was no significant difference between the mean weight of the male subjects (40.46 ± 15.61 kg) and the female subjects (41.95 ± 15.82 kg, p = 0.122). The mean height was higher in males (148.06 ± 19.25 cm) than in females (146.79 ± 16.63 cm), but the difference was not statistically significant (p = 0.248), whereas the mean fat-free mass and fat-free mass index were significantly higher in males than in females (p < 0.0001). The mean MUAC, as shown in Table II, was significantly higher in females than in males (23.31 ± 4.74 cm vs 22.57 ± 4.67 cm, p = 0.01). The MUAC-smoothed centile chart is shown in Table III. The mean MUAC increased with age in both males and females, except in the nine-year-old males. At most ages, the mean MUAC of the female subjects was higher than that of the male subjects, except at 15-17 years, and the difference was statistically significant at ages 7, 9, 12 and 13 years.
The 50th centile value of the MUAC-smoothed centile chart ranged from 17.5 to 28.0 cm for males and from 17.9 to 28.1 cm for females, peaking at age 18 years in both sexes.

The overall prevalence of overweight according to body fat percentage, using the McCarthy reference, was 5.4 percent (6.0 percent in females and 4.8 percent in males), whereas that of obesity was 6.9 percent (7.2 percent in females and 6.7 percent in males). The prevalence of overweight and obesity was higher, although not significantly, in females than in males (overweight, χ² = 0.768, p = 0.381; obesity, χ² = 0.100, p = 0.752). The prevalence of overnutrition was 12.3 percent overall, 13.2 percent in females (70 subjects) and 11.5 percent in males (62 subjects), and this difference was not statistically significant (χ² = 0.72, p = 0.39). The results are shown in Table IV.

The overall prevalence of overweight according to the MUAC cut-offs generated from the smoothed centile chart was 6.0 percent, with a statistically significant difference between females and males (7.8 percent vs 4.3 percent, χ² = 5.714, p = 0.017). The overall prevalence of obesity according to MUAC was 6.4 percent and was statistically higher in females (8.5 percent vs 4.3 percent, χ² = 8.004, p = 0.005). The prevalence of overnutrition by MUAC was 12.4 percent, with females having a statistically higher prevalence than males (16.3 percent vs 8.6 percent, χ² = 14.62, p = 0.00). The prevalence of overnutrition using MUAC (12.4 percent) was thus comparable with that of BIA (12.3 percent).

The AUC, cut-off value, sensitivity and specificity for each age and gender are shown in Table V (area under the curves, optimal cut-off values, sensitivities and specificities for mid-upper arm circumference associated with overweight/obesity in boys and girls). The AUC for MUAC was statistically significant in both genders at most ages. The AUC was good to excellent in most subjects, except in males aged 15-17 years and females aged 16-18 years. Sensitivity was relatively high at all ages except in males aged 7 and 12 years and females aged 7 and 17 years; specificity was relatively high at all ages except in males aged 16 and 17 years. No nine-year-old male subject was overweight or obese. The MUAC cut-off values for elevated body fat percentage were calculated to be approximately 18.75-31.5 cm across males and females. There was a significant correlation between MUAC and body fat percentage (r = 0.183 in males; r = 0.780 in females), fat mass, fat mass index (r = 0.437, p = 0.000 in males; r = 0.791, p = 0.000 in females) and fat-free mass index (r = 0.838, p = 0.000 in males; r = 0.735, p = 0.000 in females). Body fat percentage, fat mass and fat mass index correlated better with MUAC in females, whereas fat-free mass index correlated better with MUAC in males.
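The optimal cut-offs in Table V come from scanning all possible MUAC thresholds against the BIA classification. Below is a minimal sketch of one common way to do this; the data arrays are hypothetical placeholders, and the use of Youden's J as the optimality criterion is an assumption, since the paper states only that sensitivity and specificity were calculated at all possible cut-off points.

```python
# Minimal sketch of an ROC-based cut-off search on hypothetical data.
# "overfat" is 1 if the child is overweight/obese by the BIA (McCarthy)
# reference; Youden's J is one common criterion for the optimal cut-off.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

muac_cm = np.array([16.2, 18.0, 19.5, 21.0, 24.5, 26.0, 27.5, 30.0])
overfat = np.array([0, 0, 0, 0, 1, 0, 1, 1])

fpr, tpr, thresholds = roc_curve(overfat, muac_cm)
auc = roc_auc_score(overfat, muac_cm)

j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
k = int(np.argmax(j))
print(f"AUC={auc:.2f}, cut-off={thresholds[k]:.1f} cm, "
      f"sensitivity={tpr[k]:.2f}, specificity={1 - fpr[k]:.2f}")
```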
Discussion

This study set out to assess the usefulness of MUAC in comparison with BIA in the assessment of overnutrition, which comprises overweight and obesity. The prevalence of overnutrition as estimated by MUAC (12.4 percent) was comparable to the 12.3 percent estimated by BIA. Despite this near-equal overall prevalence, MUAC tended to slightly underestimate obesity (6.4 percent) in comparison with BIA (6.9 percent). This finding suggests that MUAC can be a cheap, simple, readily available and effective indirect method for the assessment of overnutrition in the community. No other study, to the best of the authors' knowledge, has reported the prevalence of overnutrition using MUAC with which our findings could be compared.

The prevalence of obesity obtained by MUAC (6.4 percent) and BIA (6.9 percent) is similar to the findings of El-Hazmi et al. in Saudi Arabia [18] and Sadoh et al. in Benin City, Nigeria [9]. A higher prevalence of 17.2 percent was, however, reported by Cheryl et al. in the USA [19], whereas a lower prevalence of 3.3 percent was reported in Pakistan [20]. The variation in the prevalence of obesity across these studies can be attributed to dietary patterns and levels of physical activity, which differ within and between countries and regions; the method used to assess obesity may also contribute to the differences in prevalence rates.

The mean MUAC of 23.31 ± 4.74 cm in females was statistically higher than the 22.57 ± 4.67 cm found in males, which is comparable to the 23.5 ± 3.7 cm and 20.6 ± 3.1 cm reported by Chomtho et al. in females and males, respectively [12]. Jaiswal et al. [11] similarly reported a higher mean MUAC in females, but their values were lower than those reported in this study. This difference might be due to the age group of 5-14 years studied by Jaiswal et al., in contrast to the age group of 6-18 years in this study; since MUAC increases with age, the mean value should be lower in the younger age group. The mean MUAC in this study increased with increasing age in both males and females except in nine-year-old males, where a decrease was observed; the reason for this finding is not apparent. In contrast to this study, Lu et al. [21] reported a higher mean MUAC among males in a Han Chinese population (20.9 ± 3.7 cm vs 20.2 ± 3.2 cm, p < 0.001). The reason for this was not given by the authors, but Zhai et al. [22], with similar findings in a study in China, noted a traditional societal preference for male children, who are favored to enjoy more of the family resources.

The AUC values for the optimal MUAC cut-offs in the assessment of overnutrition showed high sensitivity and specificity for males and females at most ages. The AUC values ranged between 0.62 and 1, with most subjects in the range of 0.8-1; this means that 62-100 percent of the time, a randomly selected child classified as overweight/obese by BIA will have a larger MUAC than a randomly selected child of normal adiposity. This finding corroborates previous studies stating that MUAC can be an alternative and reliable index in the assessment of overnutrition, especially in resource-poor settings [23-26]. Similar findings were reported by Craig et al. [23] in Black South African children and Mazicioglu et al. [24] in Turkish children. It was, however, observed that the accuracy was higher in females (AUC ⩾ 0.85) than in males. A similar observation was reported by Craig et al., especially in males aged 5-9 years; however, much higher accuracy with no gender difference was observed when MUAC was correlated with BMI in the same subjects [23]. No explanation was proposed for this finding.
The positive likelihood ratios for females, which ranged from 18 to 38 except in middle and late adolescence (where they ranged from 2 to 4), further support the better performance of MUAC in females than in males, whose positive likelihood ratios ranged from 2 to 6 except in the 6-, 7- and 18-year-olds. The AUC values were less accurate in late adolescents. In contrast to the higher accuracy of MUAC, with AUC values between 0.8 and 1, in the middle and late adolescent age groups in the study by Mazicioglu et al. in Turkey [24], this study observed a poorer performance in subjects aged 16-17 years. No plausible reason could be ascribed to this finding, but of note is the fact that MUAC cut-offs were obtained using BIA cut-off values in this study, whereas BMI was used by Mazicioglu et al.

There was a strong positive correlation between MUAC and fat mass and fat mass index as measured by BIA, especially in females (r = 0.87 and 0.79, respectively). The correlation between MUAC and fat mass and fat mass index was weaker in males (r = 0.67 and 0.44, respectively). MUAC, on the other hand, showed a higher correlation with fat-free mass index in males than in females; a similar finding was reported by Chomtho et al. [12] in the UK. The correlation between MUAC and body fat percentage was also weaker in males (r = 0.183) than in females (r = 0.780). These findings may be attributed to the significant sex difference in muscle and fat distribution patterns between males and females due to hormonal influence [25]: fat-free mass may account more for an increase in MUAC in males, whereas fat mass may be more implicated in females.

The smoothed MUAC percentile chart for children aged 6-18 years in this study is, to the best of the authors' knowledge, the first in Nigeria for the assessment of overnutrition in children and adolescents. The P50 for males ranged from 17.5 cm in the 6-year-olds to 28.0 cm in the 18-year-olds, whereas that of the females ranged from 17.9 cm to 28.1 cm. The values from the US MUAC reference chart [26] were comparable in six-year-olds to those obtained in this study in both sexes (17.5 vs 17.7 cm in males; 17.9 vs 17.8 cm in females). At age 18 years, males had a slightly higher value than those in our study (29.4 vs 28.0 cm), whereas females had a slightly lower value (26.3 vs 28.1 cm). This difference may imply that the increase in MUAC with age is more marked in males than in females in late adolescence; no plausible reason could be ascribed to this finding.

Although MUAC showed a significant gender difference in the prevalence of overweight and obesity in this study, body fat percentage did not. No previous study, to the best of the authors' knowledge, has reported this finding. It emphasizes the need for caution, and for inclusion of the assessment tool, when comparing the prevalence of overnutrition within and between different populations.

Limitations

This study, in contrast to most other studies on the use of MUAC in the assessment of overnutrition, has the advantage of using BIA cut-off values rather than BMI, which does not assess body fat composition. BIA is, however, not the gold standard in the measurement of body fat composition. The optimal MUAC cut-off values of this study may not be representative of the entire country because of its restriction to Benin.
Similar studies from different parts of Nigeria will be required to validate these smoothed MUAC percentiles for use in screening children and adolescents for overnutrition.

Conclusion

This study showed close agreement between MUAC and BIA in the assessment of overnutrition in children and adolescents. MUAC has the potential for use as a proxy for the assessment of overnutrition in resource-poor settings.
Ceramide-1-phosphate alleviates high-altitude pulmonary edema by stabilizing circadian ARNTL-mediated mitochondrial dynamics

Introduction

High-altitude pulmonary edema (HAPE) as a specific medical condition

High-altitude pulmonary edema (HAPE) is a serious and potentially fatal condition that can occur when a person ascends to an altitude exceeding 2,500 m [1]. It is a subtype of acute high altitude illness (AHAI) and is acute and noncardiogenic in nature [2]. HAPE is characterized by various clinical symptoms, including shortness of breath, bluish skin color, dry cough during physical exertion, pink foamy sputum, difficulty breathing while lying flat, and fever [3].

The alveolar epithelium comprises type I and type II cells, with type I alveolar epithelial (AT1) cells being the most abundant cell type, covering over 95% of the alveolar surface and forming the membrane barrier between the air and blood compartments of the lung [4]. Damage to the alveolar structure can injure AT1 cells, resulting in impaired respiratory function, a manifestation of lung injury [5]. The function of AT1 cells is to reabsorb excess lining fluid from the alveolar surface [6], a function associated with the occurrence and progression of pulmonary edema. Although the precise mechanism of HAPE is not fully understood, it is believed to be related to hypoxic vasoconstriction of pulmonary arterioles, which increases pulmonary capillary pressure and causes fluid leakage into the alveolar space [7,8]. We therefore considered AT1 cells an appropriate model for studying HAPE.

The mortality rate of untreated HAPE is approximately 50% [9]. Current treatment strategies for HAPE, such as acetazolamide, nifedipine, sildenafil, salmeterol, and dexamethasone, are effective but have side effects. Acetazolamide, a diuretic commonly used to treat HAPE, can cause hypotension, dyspnea, tremor, and tachycardia. Other medications, such as nifedipine and sildenafil, can be used to reduce pulmonary artery pressure and improve oxygenation, but they can also cause headache, dizziness, and flushing [10]. Despite extensive research conducted over several years, treatment options for HAPE still have several limitations and challenges [11]. Therefore, research on new, safe, and effective therapeutic targets for HAPE is necessary.

Ceramide-1-phosphate (C1P) and its role in pulmonary diseases

Ceramide-1-phosphate (C1P) is a vital signaling molecule that promotes cellular growth and proliferation [12] while regulating inflammation and cancer [13,14]. It plays a crucial role in maintaining vascular and epithelial integrity [15]. Various studies have demonstrated the significant role of C1P in the pathogenesis of pulmonary diseases such as asthma [15], chronic obstructive pulmonary disease (COPD) [16], and pulmonary fibrosis [17]. C1P is synthesized intracellularly through direct phosphorylation of ceramide by ceramide kinase (CERK) [18], the sole pathway for C1P biosynthesis in mammalian cells [18]. Several studies have suggested that CERK exerts a cytoprotective effect by producing C1P [19]. However, there is no published evidence to date on the potential involvement of CERK/C1P in mitigating the symptoms of HAPE.
Circadian clock and its impact on respiratory diseases

The circadian clock regulates physiological functions on a 24-hour rhythm, enabling organisms to synchronize internal processes with external changes [20]. Such clocks exist in most cells, including hepatocytes [21], fibroblasts [22], monocytes [23], and cardiac myocytes [24]. In mammals, the circadian clock modulates gene expression and cell function through transcriptional and translational feedback loops. The transcription factor aryl hydrocarbon receptor nuclear translocator-like/brain and muscle ARNT-like 1 (ARNTL/BMAL1) plays a central role in regulating the expression of other clock-controlled genes [25]. CLOCK:ARNTL heterodimers activate transcription of the Period (Per) and Cryptochrome (Cry) genes through E-box elements, and this transcriptional activation can in turn be inhibited by the Per and Cry proteins [26]. The lung displays significant circadian rhythms, and the severity of several respiratory diseases varies at different times of the day [27]. For instance, asthma and COPD symptoms worsen in the early morning in humans [28], and the survival of mice with influenza depends on the timing of infection [29].

CERK-derived C1P as a promising therapeutic target for alleviating HAPE

In this study, we discovered a novel molecular mechanism underlying the action of CERK-derived C1P in alleviating HAPE. We observed that C1P deficiency resulting from CERK inhibition exacerbated the severity of HAPE under hypobaric hypoxic conditions, whereas exogenous C1P supplementation alleviated the condition. Specifically, CERK inhibition induced circadian misalignment by increasing the degradation of ARNTL, leading to dysregulated expression of mitochondrial fission and fusion proteins, which resulted in mitochondrial fragmentation and oxidative stress damage. Treatment with exogenous C1P restored the abnormal circadian rhythms and repaired the mitochondrial damage caused by CERK inhibition, ultimately reducing HAPE under hypobaric hypoxia. Our findings provide the first mechanistic basis for CERK-derived C1P in alleviating HAPE and identify a promising target for the treatment of this condition.

Reagents

The reagents are described in Table S2.

Generation of CERK-/- C57BL/6 mice by CRISPR/Cas9 technology

CERK-knockout C57BL/6 mice were generated using CRISPR/Cas9 technology. Cas9 mRNA and gRNA were injected into fertilized eggs by microinjection, producing the F0 generation. The genetically modified F0 mice were identified by PCR amplification and sequencing before being bred with wild-type mice to produce the F1 generation carrying the desired genetic modifications. Heterozygous mice were then self-crossed to produce gene-knockout homozygous mice (CERK-/-). Transgenic mice were obtained from the Shanghai Model Organisms Center in Shanghai, China. The primers used for genotyping are provided in Table S3.

Animal care

The animals were maintained in a controlled environment with a constant humidity of 50 ± 5%, a temperature of 23 ± 2 °C, and 12-hour light/dark cycles, ensuring stable and standardized living conditions throughout the study. To establish the HAPE model mice, we followed the protocol laid out by Qian Ni et al.
[30] and our previous study [31] by exposing the mice to hypobaric hypoxic conditions for 3 days. To simulate the atmospheric environment at an altitude of 5,500 m, we used a FLYDWC50-1C hypobaric hypoxia cabin (Guizhou Fenglei Air Ordnance LTD, Guizhou, China).

Ethics statement

All experiments involving animals were conducted according to the ethical policies and procedures approved by the ethics committee of the Chinese PLA General Hospital (Approval no. SQ2021218).

Cell culture

Type I alveolar epithelial (AT1) cells were cultured in DMEM supplemented with 10% FBS and 1% antibiotics (penicillin and streptomycin). To mimic hypoxic conditions, the cells were exposed to an oxygen concentration of 1% for 24 h.

HE staining

Lung tissues were fixed in formalin overnight, embedded in paraffin, and sectioned into slices six micrometers thick. The sections were stained with hematoxylin and eosin (HE): they were first immersed in hematoxylin for 5 min and then in eosin for 3 min. Images were captured using an optical microscope (Nikon, Japan).

The ratio of lung dry and wet weight

To assess the extent of pulmonary edema, we calculated the ratio of dry lung weight to wet lung weight. Immediately after euthanizing the animals, we opened the chest cavity and removed the lung tissue. To ensure accuracy, we thoroughly cleaned the lung surface to remove any contaminants before recording the lung wet weight. Next, we placed the lung tissue in an oven and dried it for 72 h at 160 °C until it was completely dry, then recorded the lung dry weight. The dry-to-wet weight ratio was then calculated to determine the severity of pulmonary edema.

RNA-seq library preparation, sequencing, and processing

Lung tissues were dissected from the mice in each group and immediately frozen. Total RNA was extracted using the TRIzol method, following the manufacturer's instructions (Total RNA Extraction Reagent, Vazyme, Jiangsu, China). The quality of the extracted RNA was assessed using an RNA Nano 6000 Assay kit on a Bioanalyzer 2100 system, which provides accurate and quantitative measurements of RNA purity and integrity (Agilent Technologies, CA, USA).

After polyA enrichment, mRNA libraries were prepared with TruSeq Stranded mRNA (Illumina, CA, USA). High-quality libraries were sequenced on the HiSeq-2000 platform (Illumina, CA, USA). To ensure the accuracy and reliability of the data, low-quality reads and adaptor sequences were removed using SOAPnuke (https://github.com/BGI-flexlab/SOAPnuke). The remaining clean reads were aligned to the mouse reference genome (GRCm38) using BOWTIE2 (version 2.2.5). Gene expression levels were quantified using RSEM (version 1.2.12). Differential gene expression was evaluated by DESeq2 using the Wald test. Genes with a log fold change greater than 0.5 and an adjusted p-value < 0.05 were identified as differentially expressed genes (DEGs).
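The DEG criteria above reduce to a simple filter over a DESeq2-style results table. The sketch below is a minimal illustration: the column names ("log2FoldChange", "padj") follow DESeq2's conventional output, the values are hypothetical rather than from the study, and taking the absolute value of the fold change (to capture both up- and downregulated genes) is our assumption.

```python
# Minimal sketch of the DEG filter described above, on hypothetical data.
import pandas as pd

res = pd.DataFrame({
    "gene":           ["Arntl", "Per2", "Mfn2", "Gapdh"],
    "log2FoldChange": [-1.20,    0.80,  -0.65,   0.05],
    "padj":           [0.001,   0.010,  0.030,   0.900],
})

degs = res[(res["log2FoldChange"].abs() > 0.5) & (res["padj"] < 0.05)]
print(degs["gene"].tolist())  # ['Arntl', 'Per2', 'Mfn2']
```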
Gene enrichment testing

ClusterProfiler (v4.6.0) was used for functional enrichment of the differentially expressed genes in Gene Ontology (GO) categories and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. Enrichment results with a false discovery rate (FDR) below 0.05 were considered statistically significant. The DEGs were analyzed according to the biological process (BP), molecular function (MF), and cellular component (CC) GO categories. Additionally, gene set enrichment analysis of KEGG pathways was conducted using ClusterProfiler.

E-box motif sequence alignment of gene promoter regions

The [-1000 bp, +100 bp] interval around the transcription start site (TSS) was delineated as the gene promoter region, and the promoter coordinate intervals for the genes of interest were obtained from the GENCODE gene annotation (gencode.vM25.annotation.gtf). Bedtools (v2.26.0) was used to extract the FASTA sequence of each gene promoter region from the mouse reference genome. Finally, seqkit (v2.3.1) was employed to align the E-box motif (5′-CANNTG-3′) sequence with the promoter sequences (a minimal sketch of such a motif scan is given below, after the cell-activity subsection).

CO-immunoprecipitation analysis

Protein lysates were extracted from the samples using RIPA buffer supplemented with phenylmethylsulfonyl fluoride, and the protein concentration was determined using a BCA assay. Subsequently, 5 µg of antibody against the target protein was added to 500 µl of cell lysate containing 500 µg of total protein and gently mixed overnight at 4 °C. The following day, 5 µl of Protein A and 5 µl of Protein G Dynabeads were added to the mixture and rotated at room temperature for 1 h to allow the antibody to bind to the target protein. The bead-antibody-protein complex was separated on a magnetic rack and washed three times with lysis buffer to remove non-specifically bound proteins. The complex was then gently resuspended in 20-40 µl of 2× SDS loading buffer and heated at 95-100 °C for 10 min. Finally, the Dynabeads and supernatant were separated for Western blot analysis.

Cell activity measurements

To assess cell viability, AT1 cells were seeded in a 96-well plate at a density of 8,000 cells per 100 µl per well, and a Cell Counting Kit-8 (CCK-8) solution (Dojindo Laboratories, Japan) was used to measure viability. The cells were incubated at 37 °C for a specified period, and the absorbance was measured at 450 nm using a microplate reader (BioTek, USA). To determine the cytotoxicity of NVP231 (S6501, Selleck, USA), concentrations ranging from 0 to 1000 nM were coincubated with the cells, and the resultant cell viability was calculated.
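Returning to the E-box motif scan described above: because the motif 5′-CANNTG-3′ simply means "CA, any two bases, TG", the scan can be expressed as a regular-expression search. The sketch below stands in for the seqkit alignment step; the gene names and promoter sequences are hypothetical placeholders.

```python
# Minimal sketch of the E-box motif scan described above. The regex
# CA[ACGT]{2}TG encodes 5'-CANNTG-3' (N = any base). Sequences are
# hypothetical, not the study's actual promoter FASTA records.
import re

EBOX = re.compile(r"CA[ACGT]{2}TG")

promoters = {
    "Mfn2_promoter":  "GGCACGTGATTCCAATTGACGT",   # hypothetical sequence
    "Dnm1l_promoter": "TTGACATTGGCCCAGGTGAATT",   # hypothetical sequence
}

for name, seq in promoters.items():
    hits = [(m.start(), m.group()) for m in EBOX.finditer(seq.upper())]
    print(name, hits)   # e.g. canonical CACGTG, noncanonical CAGGTG
```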
siRNA and cDNA transfection

For siRNA or cDNA transfection, we used Lipofectamine 3000 (no. L3000-015, Thermo Fisher Scientific, USA) according to the following protocol. Cells were seeded so as to be 70%-90% confluent at the time of transfection. Then, 2.5 µg of cDNA or siRNA was mixed with 125 µl of Opti-MEM medium, with or without P3000 reagent (2 µl/µg DNA), and 3.75 µl of Lipofectamine 3000 reagent was mixed with 125 µl of Opti-MEM medium. The two mixtures were then combined in a 1:1 ratio to prepare the DNA-lipid complex, which was incubated at room temperature for 10 min. Next, 250 µl of the DNA-lipid complex was added to each well of a 6-well plate, and the plate was incubated at 37 °C for 4 h for transfection. The protocol followed the manufacturer's recommended guidelines with the modifications described above.

Immunofluorescence

To prepare the cells for confocal immunofluorescence analysis, we first fixed them in 4% formaldehyde for 10 min to preserve cellular structures and proteins. The cells were then permeabilized with 0.2% Triton X-100 for 5 min to allow antibody penetration and blocked in 3% BSA for 2 h to prevent non-specific antibody binding. We used a CoraLite 488-conjugated TOM20 monoclonal antibody (CL488-66777, Proteintech, USA) and a CoraLite 594-conjugated LAMP1 monoclonal antibody (CL594-67300, Proteintech, USA), both at a dilution of 1:200; incubation was carried out overnight at 4 °C to allow optimal antibody binding to the target proteins. To visualize the labeled cells, we added an antifade mounting medium with DAPI to enhance the fluorescence signal and prevent photobleaching. The cells were then imaged using an OLYMPUS FV1000 inverted confocal microscope (Japan).

Mitochondrial respiration

AT1 cells were seeded in a 24-well cell culture plate and incubated at 37 °C with 5% CO2 overnight to allow optimal cell attachment and growth. Oligomycin (1.0 µM), FCCP (1.0 µM), and a mix of rotenone and antimycin A (0.5 µM) were injected into the cell culture medium. The oxygen consumption rate (OCR) was calculated with Seahorse XFe/XF24 Analyzers (Agilent Technologies, USA) according to the manufacturer's instructions.

Intracellular reactive oxygen species synthesis assay

To measure ROS levels, we used the fluorescent probe DCFH-DA, diluted to a final concentration of 10 µM. Logarithmically growing cells were harvested, adjusted to a concentration of 10^6 cells/mL, and incubated with the probe in the dark for 30 min. After incubation, the cells were washed three times with serum-free medium to remove residual DCFH-DA. The cells were then harvested and subjected to flow cytometry on a Becton Dickinson instrument (USA) to detect ROS levels, with excitation set at 488 nm and emission recorded at 525 nm.
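Returning to the mitochondrial respiration measurements above: the basal and maximal respiration values reported in the Results are conventionally derived from the injected-compound OCR trace. The sketch below illustrates that arithmetic with hypothetical numbers; it follows the standard mito-stress-test definitions (non-mitochondrial OCR from the rotenone/antimycin A step) rather than anything stated explicitly in the paper.

```python
# Minimal sketch of deriving basal and maximal respiration from a
# Seahorse mito-stress-test OCR trace. Values are hypothetical.
baseline_ocr = 120.0   # pmol O2/min, last measurement before injections
fccp_ocr     = 210.0   # pmol O2/min, maximum after FCCP (uncoupler)
rot_aa_ocr   = 25.0    # pmol O2/min, after rotenone + antimycin A

non_mito = rot_aa_ocr                           # non-mitochondrial O2 use
basal_respiration   = baseline_ocr - non_mito   # mitochondria-linked baseline
maximal_respiration = fccp_ocr - non_mito       # uncoupler-stimulated capacity

print(f"Basal: {basal_respiration} pmol O2/min, "
      f"maximal: {maximal_respiration} pmol O2/min")
```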
JC-1 assay for mitochondrial membrane potential (Δψm)

To measure the Δψm, we used JC-1, a fluorescent dye that accumulates in mitochondria in a potential-dependent manner. Cells were loaded with JC-1 at a final concentration of 2 µM and incubated for 30 min, protected from light. After washing three times with phosphate-buffered saline, the cells were mounted with DAPI antifade mounting medium to enhance the fluorescence signal and prevent photobleaching, and then imaged using an OLYMPUS FV1000 inverted confocal microscope (Japan).

Transmission electron microscopy (TEM)

For ultrastructural analyses, freshly harvested AT1 cells were processed as described previously [35]. Briefly, cells were fixed in 1% glutaraldehyde in 0.1 M phosphate-buffered saline overnight. After fixation, the cells were washed with phosphate buffer and distilled water and then stained with 2% osmium tetroxide for 60 min. Next, the cells were dehydrated through a series of acetone concentrations (50%, 70%, 90%, and 100%; 2 × 10 min each) and embedded (45345, Sigma-Aldrich, USA). Sections 70 nm thick were examined with a transmission electron microscope (Hitachi HT7800) for ultrastructural analysis.

Statistical analysis

Data were analyzed and statistical tests performed using GraphPad Prism 8.0 (GraphPad Software Inc., USA). Measurement data are presented as the means ± standard deviation (SD) of at least three biological replicates. Before statistical analysis, the normality of the data distribution was assessed using the Shapiro-Wilk test. For normally distributed continuous variables, an independent-samples t-test was used to evaluate statistical significance between two experimental groups; for comparisons across multiple groups, one-way analysis of variance (ANOVA) followed by Dunnett's test was used to determine which groups differed significantly from each other. For non-normally distributed continuous variables, a Mann-Whitney U test was employed instead. A P value < 0.05 was considered to indicate statistical significance, and all statistical analyses were two-tailed.
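The two-group test-selection logic described above (Shapiro-Wilk, then t-test or Mann-Whitney U) can be sketched in a few lines; the data below are hypothetical placeholders, not study measurements.

```python
# Minimal sketch of the two-group test selection described above:
# check normality with Shapiro-Wilk, then use an independent-samples
# t-test for normal data or a Mann-Whitney U test otherwise.
from scipy import stats

group_a = [0.42, 0.45, 0.39, 0.47, 0.44]   # hypothetical replicate values
group_b = [0.35, 0.31, 0.38, 0.33, 0.36]

normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
if normal:
    result = stats.ttest_ind(group_a, group_b)      # independent-samples t-test
else:
    result = stats.mannwhitneyu(group_a, group_b)   # non-parametric alternative

print(f"{type(result).__name__}: p = {result.pvalue:.4f}")
```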
CERK-knockout impaired lung structure and aggravated HAPE in hypobaric hypoxia

As an initial step to investigate the potential role of CERK in HAPE, we examined the expression levels of CERK mRNA and protein in the lungs of HAPE model mice. mRNA and Western blot analyses revealed a significant reduction in both CERK mRNA and protein expression in HAPE models compared with control groups (Fig. 1A-B).

Based on this observation, we hypothesized that CERK deficiency might contribute to the pathogenesis of HAPE. To test this hypothesis, we used CRISPR/Cas9 technology to generate CERK gene knockout mice, as depicted in Fig. S1A. The genotype and protein expression of CERK-knockout mice were confirmed by PCR analysis (Fig. S1B) and Western blotting (Fig. 1C), respectively. To evaluate the impact of CERK knockout on ceramide and C1P production in vivo, we measured their concentrations in the lungs and serum. The levels of ceramide species such as C16:0 Cer (C16Cer), C24:0 Cer (C24Cer), and C24:1 Cer (C24:1Cer) increased approximately 1.5-3-fold in CERK-knockout mice (Fig. 1D). Furthermore, the levels of C16:0 C1P (C16C1P), C24:0 C1P (C24C1P), and C24:1 C1P (C24:1C1P) in the lungs and serum of CERK-knockout mice were reduced by approximately 70% compared with those in wild-type controls (Fig. 1E).

Histological analysis with hematoxylin-eosin (HE) staining revealed that the pulmonary tissue of CERK-knockout mice exhibited destroyed alveolar structure and thickened alveolar walls compared with wild-type lungs (Fig. 1F). Notably, hypobaric hypoxia intervention resulted in more pronounced pathological alterations in CERK-knockout lungs than in wild-type lungs, including pulmonary capillary congestion, massive inflammatory cytokine infiltration in the alveolar cavity, and interstitial edema. We further evaluated the severity of HAPE by measuring the dry-to-wet weight ratio (D/W ratio). In normoxia, there was no significant difference in the D/W ratio between wild-type and CERK-knockout mice. However, under hypobaric hypoxia intervention, HAPE was aggravated in CERK-knockout mice, as reflected by a more pronounced decrease in the D/W ratio compared with wild-type mice (Fig. 1G). These findings indicated that C1P deficiency caused by CERK knockout induced pulmonary injury and aggravated HAPE in hypobaric hypoxia.

CERK-knockout caused lung injury by disrupting the circadian rhythm

To investigate the mechanism underlying CERK-knockout-induced lung injury, we conducted high-throughput RNA sequencing on lung tissue from wild-type and CERK-knockout mice. We identified a total of 528 DEGs (Fig. 2A and Table S1). GO analysis of the DEGs revealed enrichment of terms related to muscle contraction, including muscle system process, myofibril, and actin binding (Fig. 2B). KEGG pathway analysis showed enrichment of pathways related to the circadian rhythm (Fig. 2C). Using gene set enrichment analysis (GSEA) with the KEGG database, we identified an enriched pathway related to the circadian rhythm (normalized enrichment score, NES = 1.602; p = 0.0274) (Fig. 2D). The heatmap of DEGs in the circadian rhythm pathway is shown in Fig. 2E.

ARNTL is a central component of the mammalian circadian clock [25]. Our study revealed that CERK knockout decreased the expression of ARNTL while upregulating its negative regulators, PER and DEC, which inhibit ARNTL expression in the circadian rhythm pathway (Fig. 2E). Disruptions in circadian rhythms have been linked to the pathogenesis of various diseases [23,36,37]. Additionally, GSEA showed enrichment of the HIF-1 signaling pathway and chemical carcinogenesis - reactive oxygen species (Fig. S1), suggesting that CERK knockout might affect mitochondrial activity. Our results also demonstrated that many of the DEGs involved in mitochondrial dynamics contained canonical or noncanonical E-box elements (CANNTG) for CLOCK:ARNTL (Fig. 2F) [24,38]. To investigate this further, we analyzed the abundance of ARNTL and mitochondrial dynamics proteins (MFN2, DRP1, and MFF) by Western blotting. As shown in Fig. 2G-H, the expression of ARNTL and MFN2 was decreased, while that of DRP1 and MFF was upregulated, in CERK-knockout lungs compared with wild-type controls. In conclusion, our findings suggested that CERK-knockout-induced lung injury was due to disrupted expression of core circadian rhythm proteins and impaired mitochondrial dynamics.
CERK inhibition induced mitochondrial oxidative stress and fragmentation in AT1 cells

Given the dysregulation of several genes essential for mitochondrial dynamics in CERK-knockout mice, we investigated the effect of CERK inhibition on mitochondrial health and function. We established two CERK inhibition models in vitro, using either treatment with the CERK inhibitor NVP-231 or CERK downregulation by siRNA. The selective CERK inhibitor NVP-231 significantly decreased cell viability in a dose-dependent manner (Fig. 3A), indicating that CERK-derived C1P was necessary for AT1 cells to maintain activity. We also examined mitochondrial morphology by labeling the Tom20 protein, which localizes specifically to the mitochondrial outer membrane. Both NVP-231 treatment and CERK siRNA impaired mitochondrial morphology: the mitochondrial network showed extensive fragmentation, and the average length of mitochondria was reduced (Fig. 3B-C). These abnormal structures might impair mitochondrial function. To measure mitochondrial respiratory capacity, we used the Seahorse Extracellular Flux Analyzer and observed impaired mitochondrial respiration, with the oxygen consumption rate (OCR) significantly decreased after CERK inhibition (Fig. 3D-F). Reactive oxygen species (ROS) are toxic byproducts of oxidative phosphorylation that accumulate in damaged mitochondria, and their accumulation is often accompanied by a loss of membrane potential (Δψm) [39]. We found that CERK inhibition induced Δψm loss (Fig. 3G-H) and increased ROS production, indicating increased oxidative stress (Fig. 3I-J). Taken together, these findings indicated that CERK inhibition induced mitochondrial damage by dysregulating mitochondrial dynamics, driving mitochondrial fragmentation, and increasing oxidative stress.

CERK inhibition aggravated the dysregulation of mitochondrial dynamics under hypoxic conditions

To evaluate the significance of CERK under pathological conditions, we investigated the impact of CERK inhibition under hypoxia. High-throughput RNA sequencing of lung tissues showed that the dysregulation of DEGs in the circadian pathway caused by CERK inhibition was worse under hypoxia than under normoxia (Fig. S2). Consistent with these results, CERK inhibition reduced ARNTL expression in both normoxia and hypoxia (Fig. 4A-B). Furthermore, CERK inhibition upregulated the mitochondrial fission marker proteins DRP1 and MFF and downregulated the fusion protein MFN2. The observed changes in mitochondrial morphology indicated increased fission and decreased fusion events, suggesting that the impairment of mitochondrial dynamics caused by CERK inhibition was exacerbated under hypoxic conditions (Fig. 4A, C-E). Additionally, transmission electron microscopy (TEM) revealed that CERK inhibition resulted in smaller and significantly more numerous mitochondria, with disintegrated mitochondrial membranes and loss of inner cristae, during hypoxia (Fig. 4F-H). These findings suggested that CERK plays a crucial role in regulating mitochondrial homeostasis in AT1 cells during hypoxia and that CERK inhibition exacerbates the dysregulation of mitochondrial dynamics under hypoxic conditions.
CERK inhibition aggravated mitophagy defects under hypoxic conditions

Disruption of the balance between mitochondrial fission and fusion can affect the function and health of mitochondria [40]. Our results showed that CERK inhibition induced Δψm depolarization and ROS accumulation, both of which were worsened under hypoxia (Fig. 5A-D), indicating severe mitochondrial damage. To maintain homeostasis, damaged mitochondria need to be cleared by a specific form of autophagy, mitophagy [41]. Co-immunofluorescence microscopy showed that in control cells, mitochondria were spindle-shaped, with punctate lysosomes (red) occasionally overlapping with mitochondria (green). In CERK-inhibited cells, however, the mitochondrial network structure was absent, and numerous mitochondrial fragments appeared that barely overlapped with lysosomes; this was worsened in hypoxia (Fig. 5E). LC3-II is a mammalian autophagosome protein that can be used to evaluate autophagosome formation [42]. The expression level of LC3-II was significantly increased in CERK-inhibited cells (Fig. 5F-G), indicating increased formation of autophagosomes or their defective fusion with lysosomes [43]. P62 is another common marker used to study the formation of autolysosomes [42]; its accumulation indicated defective mitophagy flux (Fig. 5F, 5H), consistent with the confocal microscopy results. These findings suggested that inhibition of CERK induced mitophagy defects and inhibited the clearance of damaged mitochondria.

Overexpression of ARNTL alleviated the mitochondrial damage caused by CERK inhibition

To investigate whether ARNTL downregulation was necessary for CERK inhibition to induce mitochondrial damage, we conducted a series of experiments. First, we measured the expression of mitochondrial dynamics proteins when ARNTL was overexpressed by transfecting specific plasmids. Western blot analyses confirmed increased ARNTL expression after plasmid transfection (Fig. 6A-B). ARNTL overexpression significantly reversed the abnormal expression of mitochondrial dynamics proteins caused by CERK inhibition under both normoxic and hypoxic conditions: the reduced expression of MFN2 and the upregulation of DRP1/MFF caused by CERK inhibition were reversed compared with the control group (Fig. 6A-E). Furthermore, ARNTL overexpression alleviated the mitophagy defects induced by CERK inhibition, as shown by the protein expression of LC3b-II (Fig. 6A, 6F). Additionally, ARNTL overexpression ameliorated the decreased basal and maximal OCR caused by CERK inhibition (Fig. 6G-L).
These results suggested that ARNTL overexpression restored the impaired mitochondrial respiratory capacity caused by CERK inhibition under normoxic and hypoxic conditions. In summary, our findings indicated that ARNTL downregulation was downstream of CERK inhibition and dysregulated mitochondrial dynamics, leading to impaired mitochondrial respiratory function.

Fig. 2 (legend). CERK knockout caused lung injury by disrupting the circadian rhythm. A: Volcano plots of DEGs between wild-type lungs and CERK-knockout lungs. B: Gene Ontology assignment of the top 10 biological processes (BPs), molecular functions (MFs), and cellular components (CCs) of DEGs between wild-type lungs and CERK-knockout lungs. C: Kyoto Encyclopedia of Genes and Genomes pathway enrichment of DEGs between wild-type lungs and CERK-knockout lungs. D: Enrichment gene set showing the circadian rhythm pathway. E: Heatmap showing the DEGs in circadian rhythm between wild-type lungs and CERK-knockout lungs. F: ARNTL target genes with E-box elements; both canonical and noncanonical E-box sequences were identified and highlighted in red. G: Western blot analysis of ARNTL, MFN2, DRP1, and MFF protein expression compared with the ACTB loading control in wild-type and CERK-knockout lungs. H: Quantification of the Western blot results; the relative expression of each protein was normalized to that of ACTB. The results were obtained from three independent experiments and are presented as the mean ± SD. n = 3/4. *P < 0.05, **P < 0.01.

Exogenous C1P blocked ARNTL downregulation and mitochondrial damage caused by CERK inhibition in vitro

In the experiments above, we demonstrated that CERK knockout led to C1P deficiency, while CERK inhibition exacerbated HAPE by downregulating ARNTL and impairing mitochondrial function. We hypothesized that C1P might play a role in preventing the ARNTL reduction and maintaining mitochondrial dynamics, and could therefore be a promising target for HAPE treatment. To test this hypothesis, we conducted in vitro experiments using AT1 cells preincubated with C16-C1P (10 µM), the major intracellular C1P species in mammalian cells [44], for 10 min [45]. After 24 h of normoxia/hypoxia treatment, we collected cells for protein extraction and found that exogenous C1P prevented the downregulation of ARNTL caused by CERK inhibition (Fig. 7A-B). Furthermore, C1P supplementation rectified the abnormal expression of proteins specific to mitochondrial dynamics and mitophagy, such as MFN2, DRP1, MFF, and LC3-II, which were disrupted by CERK inhibition (Fig. 7A, C-F). Fluorescence microscopy also showed that C1P alleviated the mitochondrial fragmentation and length reduction induced by CERK inhibition (Fig. 7G-H). These results indicated that exogenous C1P supplementation neutralized the ARNTL reduction and prevented mitochondrial dysfunction caused by CERK inhibition.

To determine whether ARNTL was necessary for C1P-mediated mitochondrial protection, we inhibited ARNTL expression by shRNA transfection. We found that shARNTL blocked all protein expression changes induced by C1P (Fig. 7), indicating that ARNTL was a necessary downstream protein for C1P to maintain mitochondrial dynamics and function.
Overall, our findings suggest that C1P may be a promising therapeutic target for HAPE treatment by preventing the ARNTL reduction and maintaining mitochondrial function.

Exogenous C1P attenuated mitochondrial damage and alleviated HAPE in hypobaric hypoxia in vivo

To further investigate the potential therapeutic effects of exogenous C1P on HAPE in vivo, we administered a subcutaneous injection of C16-C1P (200 µl, 1 µM [45,46]) and found that C1P blocked the downregulation of ARNTL caused by CERK knockout (Fig. 8A-B). Moreover, C1P reversed the MFN2 downregulation and the DRP1/MFF/LC3-II upregulation caused by CERK knockout, indicating that C1P supplementation neutralized the mitochondrial damage caused by CERK knockout in mice (Fig. 8A-F). Additionally, C16-C1P relieved the pulmonary capillary congestion, inflammatory cytokine infiltration, and interstitial edema caused by CERK knockout under hypobaric hypoxic conditions (Fig. 8G). The decrease in the D/W weight ratio in CERK-knockout lungs was reversed by C16-C1P injection under hypobaric hypoxia, indicating relief of pulmonary edema (Fig. 8H). Overall, our results suggested that exogenous C1P alleviated HAPE by stabilizing ARNTL expression and mitigating mitochondrial damage in vivo.

Finally, we investigated the mechanism by which CERK inhibition leads to ARNTL downregulation. The ubiquitin-proteasome system and autophagy are the two pathways that mediate protein degradation in mammals [47]. Our results demonstrated that the degradation of ARNTL caused by CERK inhibition was blocked by Spautin-1 (an autophagy inhibitor) but not by MG-132 (a proteasome inhibitor) (Fig. 8I-J), indicating that CERK inhibition induced ARNTL degradation via autophagy rather than the proteasome. P62 is a multifunctional cargo receptor implicated in the autophagic degradation of proteins and organelles [48]. Co-immunoprecipitation analysis revealed that CERK inhibition increased the protein-protein interaction between ARNTL and P62, leading to ARNTL autophagic degradation (Fig. 8K-L).

Discussion

CERK-derived C1P alleviated HAPE, and its deficiency aggravated the pathophysiological process

HAPE is a growing concern, and there is an urgent need to develop new safe, effective, and cost-effective drug candidates for its treatment or prevention [10]. Previous studies have demonstrated that CERK-derived C1P is involved in regulating several pulmonary diseases and has protective properties [15-17]. Our study revealed that CERK-derived C1P could alleviate HAPE and that its deficiency, caused by CERK inhibition, worsened disease severity under hypobaric hypoxic conditions. We discovered a novel molecular mechanism by which CERK and C1P alleviate HAPE through regulation of the circadian rhythm: CERK inhibition led to ARNTL degradation, which disrupted mitochondrial dynamics, blocked mitophagy flux, and triggered oxidative stress, while exogenous C1P injection neutralized the damage caused by CERK inhibition and alleviated HAPE. Therefore, targeting C1P may present a potential therapeutic avenue for the treatment of HAPE.
C1P has been shown to have both proinflammatory and anti-inflammatory properties, depending on the cell type [18,49]. Notably, C1P has a potent anti-inflammatory impact in animal models of emphysema and may be a potential treatment for COPD, asthma, or lung fibrosis [50-52]. In our study, we observed decreased expression of CERK mRNA and protein in HAPE model mice. CERK-knockout mice exhibited disrupted alveolar architecture and lower C1P concentrations in their lungs compared with wild-type mice, and they developed aggravated HAPE after hypobaric hypoxia intervention. Baudiß et al. [46] reported that C1P reduced cigarette smoke (CS)-induced acute and chronic lung inflammation and the development of emphysema in mice. Additionally, C1P protected mice from lung edema development and lethal Staphylococcus aureus sepsis by inhibiting A-SMase activity [53]. Overall, our results confirmed that C1P had a protective effect against HAPE and that its deficiency could exacerbate the severity of HAPE.

Fig. 3 (legend). CERK inhibition induced mitochondrial oxidative stress and fragmentation in AT1 cells. A: AT1 cells were treated with NVP231 (0-1000 nmol/L) for 24 h and then used in a Cell Counting Kit-8 assay to detect cell activity. B: The cells were coincubated with the Tom20 antibody followed by image acquisition on a confocal microscope; scale bar: 10 µm. C: The mitochondrial length was calculated by ImageJ software. D: Mitochondrial respiratory function; at the end of the experiment, OCR values were normalized to the number of cells cultured per well. E: Basal respiration-associated OCR. F: Maximal respiration-associated OCR. G: Δψm was assessed by incubation with JC-1 followed by image acquisition on a confocal microscope; scale bar: 10 µm. H: Δψm was calculated as the ratio of red/green fluorescence intensity. I: The intracellular ROS level was evaluated using a DCFH-DA probe by flow cytometry. J: ROS levels were expressed as the mean fluorescence intensity. The results were obtained from three independent experiments and are presented as the mean ± SD. *P < 0.05, **P < 0.01. (Fragment of a further figure legend: the number of mitochondria per cell from TEM images was calculated, and the relative length of mitochondria was normalized to Ctrl; results from three independent experiments, mean ± SD, *P < 0.05, **P < 0.01.)
Inhibition of CERK decreased the circadian protein ARNTL and induced mitochondrial damage

In this study, we discovered a novel mechanism by which CERK inhibition impairs the circadian rhythm by downregulating the expression of the circadian core component protein ARNTL. Circadian misalignment is present in many diseases, including cardiac injury [24], pancreatic fibrosis progression [54], and obesity [55]. Loss of ARNTL leads to abnormal physiological rhythms due to low-amplitude rhythms in the expression of multiple genes [37]. Our findings demonstrated that CERK inhibition increased ARNTL degradation through the autophagy pathway, which was specifically blocked by Spautin-1. ARNTL primarily regulates mitochondrial dynamics and oxidative metabolism [24,38]. Hepatic ARNTL gene deletion results in dysfunctional mitochondria and may cause fatty liver and insulin resistance owing to increased oxidative stress, decreased mitochondrial responsiveness to metabolic input, and increased ROS levels [21]. Our findings indicated that CERK inhibition dysregulated mitochondrial dynamics, upregulating the mitochondrial fission proteins DRP1 and MFF and downregulating the fusion protein MFN2. Mitochondrial dynamics involve a delicate balance between fission and fusion, and this balance maintains mitochondrial morphology and structure [56]. These disturbances contributed to extensive mitochondrial fragmentation, as identified by transmission electron microscopy and confocal microscopy. Furthermore, CERK inhibition promoted mitochondrial oxidative stress by decreasing the Δψm and increasing ROS production. These findings indicated that ARNTL downregulation, leading to impaired mitochondrial quality control, could be the mechanism by which CERK inhibition induces lung injury.
Inhibition of CERK exacerbated mitochondrial damage and impaired mitophagy, particularly under hypoxic conditions

Cells respond to hypoxic or ischemic stress by altering their metabolic demands [57]. Circadian rhythms maintain mitochondrial turnover and biogenesis to ensure healthy mitochondrial pools that meet the metabolic needs of cells. Disrupting circadian rhythms prevents the activation of adaptive mechanisms for maintaining mitochondrial health, leaving cells susceptible to ischemic/hypoxic stress damage [24]. Our research revealed that the reduction in ARNTL and the dysregulation of mitochondrial dynamics were responsible for the exacerbation of HAPE by CERK inhibition under hypobaric hypoxia. CERK inhibition induced severe mitochondrial damage, particularly under hypoxic conditions, leading to Δψm depolarization and ROS accumulation. The excessive generation of ROS and the resulting heightened oxidative stress can ultimately compromise the proper function of the mitochondrial quality control machinery [58]. Damaged mitochondria are cleared by mitophagy, which plays an important role in cell survival [41]. Free amino acids and fatty acids produced by autophagic degradation can be used for protein synthesis and energy production to adapt to environmental stress [59]. The process of mitophagy is initiated by the formation of autophagosomes, double-membrane structures that engulf and sequester damaged or dysfunctional mitochondria. These autophagosomes then fuse with lysosomes to form autolysosomes, where the contents are degraded and recycled; lysosomal hydrolases degrade the autophagosome-delivered contents and their inner membranes [60]. Our data showed increased LC3-II and P62 protein levels in CERK-inhibited cells, reflecting defective mitophagy flux. Impairment of mitophagy appears to be closely associated with the progressive accumulation of defective organelles and damaged mitochondria, which underlies the pathogeneses of many diseases [56]. CERK inhibition dramatically worsened the mitophagy deficiency in hypoxia. Consistent with Nosal et al. [61], who reported that deleting ARNTL aggravates COPD and excessive fibrosis, our results indicated that CERK inhibition disturbed circadian rhythms, impaired mitochondrial dynamics, and induced defective mitophagy as well as oxidative stress, which might be the main mechanism of HAPE aggravation under hypobaric hypoxic conditions.
Exogenous C1P injection stabilized circadian rhythms and alleviated HAPE

The current study demonstrates that exogenous C1P injection could recover circadian rhythms by blocking the ARNTL downregulation caused by CERK inhibition and alleviate pulmonary edema under hypobaric hypoxia. Disruption of circadian rhythms causes transient discomfort or exacerbates chronic diseases [62]. Maintaining healthy circadian rhythms can prevent or treat various diseases, including cancer, inflammatory diseases, and metabolic diseases [63-65]. Herein, we found that exogenous C1P injection stabilized ARNTL expression both in vivo and in vitro. The changes in mitochondrial dynamics, which were downstream of ARNTL, were also reversed by exogenous C1P injection. Additionally, exogenous C1P administration reversed pulmonary capillary congestion, interstitial edema, and inflammatory cytokine infiltration and increased the D/W weight ratio. These results suggested that exogenous C1P injection stabilizes circadian rhythms by maintaining ARNTL expression, alleviating mitochondrial damage, and relieving HAPE under hypobaric hypoxia. Pharmacological modulation to stabilize circadian rhythms has proven promising in improving therapeutic benefits in fibrosis, according to Jiang et al. [54]. Therefore, exogenous C1P could be a potential therapeutic strategy for preventing or treating HAPE by stabilizing circadian rhythms.

Despite our findings suggesting a protective role of CERK-derived C1P and exogenous C1P in alleviating HAPE under hypobaric hypoxic conditions, there are some limitations to our research. Firstly, multiple pathways for mitophagy have been identified, including PINK1-Parkin-mediated mitophagy, ubiquitin-mediated mitophagy, stress-induced mitophagy, and basal mitophagy; the specific pathway by which CERK inhibition impairs mitophagy therefore needs further clarification. Secondly, further study is necessary to understand how disrupted circadian rhythms influence mitochondrial homeostasis through ARNTL degradation-induced disorder of mitochondrial dynamics.
CERK-derived C1P maintained circadian rhythms and mitochondrial health as a promising therapeutic approach for HAPE

In conclusion, our results demonstrated that CERK-derived C1P played a protective role in HAPE under hypobaric hypoxic conditions. CERK inhibition increased the protein-protein interaction between the circadian protein ARNTL and P62, thereby leading to ARNTL autophagic degradation. Furthermore, loss of ARNTL dysregulated the transcription of several mitochondrial dynamics-regulating genes containing E-box elements, such as MFN2, DRP1, and MFF, resulting in disordered mitochondrial dynamics and defective mitophagy. Ultimately, this led to mitochondrial fragmentation and oxidative stress damage. However, exogenous C1P treatment stabilized circadian rhythms and reversed these effects. Therefore, exogenous C1P has the potential to be an effective therapeutic strategy for preventing or treating HAPE by stabilizing circadian rhythms and maintaining mitochondrial dynamics. Mitochondrial health is crucial for a myriad of physiological processes, including energy production, cellular signaling, and the prevention of various diseases [66]. By focusing on a predictive and personalized approach in the context of mitochondrial health, we could develop innovative strategies for disease prevention and treatment. Although there are limitations to our research, these findings shed light on the importance of CERK-C1P signaling and the circadian rhythm in mitigating the development and progression of HAPE.

Compliance with ethics requirements. All experiments involving animals were conducted according to the ethical policies and procedures approved by the ethics committee of the Chinese PLA General Hospital (Approval no. SQ2021218).

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. CERK knockout impaired lung structure and aggravated HAPE in hypobaric hypoxia. A: CERK mRNA expression in lung tissues of normoxic control and hypobaric hypoxia intervention mice. B: Upper panel: Western blot analysis of CERK protein expression in lung tissue of normoxic control and hypobaric hypoxia intervention mice. ACTB was used as a loading control. Lower panel: Quantification of the Western blot analysis from the upper panel. C: Upper panel: Western blot analysis of CERK protein expression in lung tissue. ACTB was used as a loading control. Lower panel: Quantification of the Western blot analysis from the upper panel. D: The ceramide contents of the lungs and serum were analyzed by LC-MS. E: The C1P contents of the lungs and serum were analyzed by LC-MS. F: Microscopic images of lung tissues stained with hematoxylin and eosin (H&E). Scale bar: 100 µm. G: Lung dry and wet weight ratio; values were normalized to those of the Ctrl group. The results were obtained from three independent experiments and are presented as the mean ± SD. n = 3-4. *P < 0.05, **P < 0.01.

Fig. 4. CERK inhibition aggravated mitochondrial fragmentation under hypoxic conditions. A: Western blot analysis of ARNTL, MFN2, DRP1, and MFF protein expression compared to that of the ACTB loading control in control and CERK-inhibited lungs or AT1 cells. B-E: Quantification analysis of the Western blots. The relative expression of each protein was normalized to that of ACTB. F: Transmission electron microscopy (TEM) images of mitochondria were obtained at a magnification of 12,000×. Scale bars: 5 µm. G: The number of mitochondria per cell was calculated from the TEM images. H: The relative length of mitochondria was normalized to Ctrl. The results were obtained from three independent experiments and are presented as the mean ± SD. *P < 0.05, **P < 0.01.

Fig. 5.
CERK inhibition disrupts mitophagy under hypoxic conditions. A: Δψm was determined by incubation with JC-1 followed by image acquisition on a confocal microscope; scale bar: 10 µm. B: Δψm was calculated as the ratio of the red/green fluorescence intensity. C: The intracellular ROS level was evaluated using a DCFH-DA probe by flow cytometry. D: ROS levels were quantified as the mean fluorescence intensity. E: The cells were coincubated with Tom20 and LAMP1 antibodies followed by image acquisition on a confocal microscope; scale bar: 10 µm. F: Western blot analysis of LC3-II and P62 protein expression in control and CERK-inhibited lungs or AT1 cells compared to the ACTB loading control. G-H: Quantification analysis of the Western blots. The relative expression of each protein was normalized to that of ACTB. The results were obtained from three independent experiments and are presented as the mean ± SD. *P < 0.05, **P < 0.01. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 7. Exogenous C1P blocked the ARNTL downregulation and mitochondrial damage caused by CERK inhibition. A: Western blot analysis of ARNTL, MFN2, DRP1, MFF, and LC3-II protein expression compared to that of the ACTB loading control in AT1 cells. B-F: Quantification analysis of the Western blots. The relative expression of each protein was normalized to that of ACTB. G: The cells were coincubated with the Tom20 antibody and then subjected to image acquisition on a confocal microscope; scale bar: 10 µm. H: The mitochondrial length was calculated with ImageJ software. The results were obtained from three independent experiments and are presented as the mean ± SD. *P < 0.05, **P < 0.01.
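Several quantification steps recur across these figure legends: JC-1 red/green ratios as a Δψm readout, densitometry normalized to the ACTB loading control, and results reported as mean ± SD over three independent experiments. The following minimal Python sketch illustrates those conventions only; all numeric values are hypothetical placeholders, not data from this study.

```python
import numpy as np

def jc1_ratio(red, green):
    """Delta-psi-m readout: ratio of JC-1 red (aggregate) to green (monomer) intensity."""
    return np.asarray(red, float) / np.asarray(green, float)

def normalize_to_loading(target, actb):
    """Densitometry: normalize a band intensity to the ACTB loading control."""
    return np.asarray(target, float) / np.asarray(actb, float)

# Three independent experiments (hypothetical values), reported as mean +/- SD.
ratios = jc1_ratio([1.8, 2.1, 1.9], [1.0, 1.1, 0.9])
bands = normalize_to_loading([220.0, 198.0, 240.0], [400.0, 410.0, 395.0])
print(f"JC-1 red/green: {ratios.mean():.2f} +/- {ratios.std(ddof=1):.2f}")
print(f"ARNTL/ACTB:     {bands.mean():.2f} +/- {bands.std(ddof=1):.2f}")
```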
Prior trauma, PTSD long-term trajectories, and risk for PTSD during the COVID-19 pandemic: A 29-year longitudinal study

This study assessed the contributions of prior war captivity trauma, the appraisal of the current COVID-19 danger and its resemblance to the prior trauma, and long-term trajectories of posttraumatic stress disorder (PTSD) to the risk for PTSD during the COVID-19 pandemic. Capitalizing on a 29-year longitudinal study with four previous assessments, two groups of Israeli veterans, ex-Prisoners-of-War (ex-POWs) of the 1973 Yom Kippur War and comparable combat veterans of the same war, were reassessed during the COVID-19 pandemic. Previous data were collected on their PTSD trajectory 18, 30, 35, and 42 years after the war and on their exposure to stressful life events after the war. Currently, we collected data on their PTSD during the COVID-19 pandemic and their appraisal of similarities between the past trauma and the current pandemic. Previously traumatized ex-POWs were found to be more vulnerable and had significantly higher rates of PTSD and more intense PTSD during the current pandemic than comparable combat veterans. Moreover, veterans in both groups who perceived the past adversity (captivity, combat) as hindering their current coping were more likely to suffer from PTSD than veterans who perceived it as a facilitating or irrelevant experience. In addition, chronic and delayed trajectories of PTSD among ex-POWs increased the risk for PTSD during the pandemic, and lifetime PTSD mediated the effects of war captivity on PTSD during the current pandemic. These findings support the stress resolution perspective, indicating that the response to previous trauma, PTSD and its trajectories, increased the risk of PTSD following subsequent exposure to stress.

Introduction
The COVID-19 pandemic is a public health emergency of international concern (World Health Organization, 2020). Given its highly infectious nature, in many countries COVID-19 has led to quarantine, which entails social distancing and restriction of movement. As a result, people have had to make significant changes in their daily routines, including the curtailment of daily activities and interaction with others. The COVID-19 pandemic and its restrictions are even more distressing for the elderly, who are seen as a high-risk group for both morbidity and mortality. While many young people continue to work, community support and senior citizen centers are closed. This is especially difficult for those who live alone and are unable to socialize. While the pandemic may be stressful for all elderly people, there is one group in particular that could be exceptionally affected: elderly individuals with a history of trauma. One such group is former prisoners-of-war (ex-POWs), who endured severe deprivations, torture, and mock executions repeatedly for extended periods of time. Ex-POWs' traumatic experiences often leave them with severe and debilitating psychological damage (Sutker et al., 1990), in particular posttraumatic stress disorder (PTSD; Solomon et al., 2012). Yet, what these trauma survivors lived through may not end with repatriation. Studies have shown that prior exposure to trauma increases the risk of future stressful life incidents and exposure to more traumatic events (Breslau et al., 2008).
According to the sensitivity perspective (Selye, 1976; Solomon, 1993), when traumatized individuals are faced with subsequent adversity they are more likely to suffer from heightened psychological vulnerability and distress than individuals without a history of trauma. Moreover, previously traumatized individuals are likely to endure reactivation following further exposure to stressors, particularly if there is a resemblance to the initial trauma (Christenson et al., 1981; Solomon, 1993). Reactivation is also known to be prevalent among the elderly, who tend to shift their attention from present and future activities to reviewing and reminiscing about their lives. In fact, studies conducted among previously traumatized older veterans (Christenson et al., 1981; Solomon, 1993) have shown that ensuing stressful experiences serve as triggers that unmask and accelerate latent PTSD. Although trauma history is a risk factor, some traumatized survivors may be more vulnerable than others. Those who perceive the current adversity as resembling their previous trauma are likely to assess the risk to be greater and, thus, are more likely to experience reactivation and exacerbation of their posttraumatic symptoms (Hantman et al., 1994). The stress resolution perspective contends that it is not merely previous exposure but rather the psychological impact of the previous trauma that affects the outcome of the subsequent adversity (Solomon, 1993). Namely, survivors who already suffered from PTSD are more vulnerable than those who did not and, therefore, are at the greatest risk for recurrent PTSD upon additional traumatic exposure (Breslau et al., 2008). When applied to the current COVID-19 pandemic, elderly veterans who previously suffered from PTSD are more likely to experience reactivation and exacerbation of PTSD than those who had similar trauma exposure but did not develop PTSD. PTSD is recognized as a labile disorder, with a heterogeneous and fluctuating course (Bonanno et al., 2012; Bryant, 2019), showing both increases and decreases over time. Indeed, several studies have identified distinct PTSD trajectories (American Psychiatric Association, 1994), with a predominant trajectory of resilience (Bonanno et al., 2012) alongside chronic, recovered, delayed, and reactivated trajectories (Magruder et al., 2016). In this study, we capitalized on data from a 29-year longitudinal study comprising Israeli ex-POWs of the 1973 Yom Kippur War and comparable combat veterans of the same war, who were evaluated at four previous time points (1991, T1; 2003, T2; 2008, T3; and 2015, T4) and then during the COVID-19 outbreak in April-May 2020 (T5), with the initial four waves identifying PTSD trajectories (Solomon et al., 2012). While both ex-POWs and controls had similar trajectories, the two groups differed in the proportions of these trajectories, with more ex-POWs exhibiting ongoing and severe clinical profiles (chronic and delayed) and fewer mild trajectories (resilient and recovered) than controls. Given that the various trajectories may represent different levels of vulnerability, we aimed to examine the role of PTSD trajectories in predicting war-related PTSD during the COVID-19 pandemic.
The study aims to (1) assess the implication of previous traumatic exposure (war captivity) for PTSD and PTSD clusters during the current COVID-19 pandemic, that is, whether ex-POWs were at increased risk for PTSD during the pandemic; (2) examine the extent to which veterans' appraisal of their wartime experiences increases the risk for PTSD during the pandemic; and (3) assess whether lifetime PTSD and PTSD trajectories were associated with PTSD during the pandemic.

Participants and procedure
240 Israeli ground forces soldiers were captured during the 1973 Yom Kippur War. Of these ex-POWs, 164 participated at T1; 103 participated at T2 (41 could not be located/refused, 4 had died, and 6 could not participate due to a deteriorated mental status); 183 at T3 (29 could not be located/refused, 20 had died, and 6 could not participate due to mental deterioration); and 158 at T4 (49 could not be located/refused, 30 had died, and 3 suffered from physical or mental problems). One hundred and twenty of these ex-POWs participated in the assessment conducted during the COVID-19 outbreak (T5; 66 could not be located/refused, 36 had died, and 18 could not participate due to mental deterioration). In addition, 280 veterans were sampled from the Israel Defense Forces (IDF) computerized database. These individuals also participated in the Yom Kippur War on the same fronts but were not taken captive, and they were matched to the ex-POWs on military background and socio-demographic variables. Among them, 185 participated at T1; 106 took part at T2 (78 could not be located/refused and 1 had died); 118 took part at T3 (20 could not be located/refused, and 5 had died); and 101 participated at T4 (60 could not be located/refused, 1 was abroad, and 18 had died). At T5, the target group included 136 controls; of those, 65 participated in the study (65 could not be located/refused, 3 had died, and 3 could not participate due to mental deterioration). Data on exposure to stressful life events were assessed at T1, T2, T3, and T4. The level of exposure to COVID-19 and participants' appraisals of the extent to which their war-related experiences affected their current adjustment were assessed at T5. PTSD was assessed at T1, T2, T3, T4, and T5. The study was approved by the institutional review board (IRB) and all participants signed a consent form.

2.2.1 Background variables
Participants were asked their age, education, occupational status, and with whom they lived. Additionally, at T5 participants were asked for their appraisals of whether their wartime experiences affected the way they adjusted to the current lockdown and social restrictions (1 = it facilitated their adjustment, 2 = it did not affect their adjustment, 3 = it hindered their adjustment).

2.2.2 Exposure to COVID-19
At T5, 10 questions were included to assess exposure to COVID-19 (Tsur and Abu-Raiya, 2020; Zhen and Zhou, 2020). Participants were asked whether they had been exposed to COVID-19-related incidents (e.g., getting infected, being quarantined, a family member getting infected or quarantined, knowing someone who died from COVID-19). Overall exposure was calculated by summing all positive answers, with higher scores indicating higher exposure to COVID-19.

2.2.3 Exposure to stressful life events
At T1, T2, and T3, participants completed a brief scale on exposure to stressful life events (Ginzburg, 2006) and reported whether they had experienced a targeted event (e.g., death of a significant other, motor vehicle accident).
At T4, participants were asked to list the events they had experienced since T3. For each participant we computed the overall number of reported stressful events.

2.2.4 PTSD
PTSD was measured at all assessments using the PTSD Inventory (Solomon, 1993; Solomon et al., 2012), a 17-item self-report scale corresponding to the DSM PTSD symptom criteria (American Psychiatric Association, 1994). Each of the PTSD symptoms was anchored to the participants' war experiences. Participants indicated whether they experienced the symptom in the past month on a four-point scale ranging from 1 (not at all) to 4 (I usually did). An answer of 3 or above was considered a positive endorsement. PTSD trajectories were derived from PTSD status (meeting/not meeting DSM criteria). In addition, the intensity of PTSD and of each of its symptom clusters (intrusion, avoidance, and arousal) was calculated as the mean of the relevant items. Lifetime PTSD was defined as meeting the clinical criteria in at least one wave of measurement. Previous studies have supported the validity and reliability of the PTSD Inventory. The rate of agreement between diagnoses made by the PTSD Inventory and by clinical interviews reached 85%. Reliability of the scale's score was high at all assessments (Cronbach's alpha ranging from 0.91 to 0.96).

Data analysis
A series of chi-square analyses examined group differences (ex-POWs vs. controls) in PTSD rates at T1-T5. To examine whether ex-POWs were at an increased risk for PTSD during the COVID-19 pandemic (T5), we conducted a binomial logistic regression examining the effect of group (ex-POWs vs. controls) on the prediction of PTSD at T5, controlling for stressful life events and exposure to COVID-19. This analysis was followed by a multivariate analysis of variance (MANOVA) examining the effect of group on PTSD symptom intensity (total, intrusion, avoidance, hyperarousal), while controlling for stressful life events. The effects of captivity and of the appraisal of the extent to which war-related experiences affected adjustment to COVID-19 on PTSD at T5 were examined by (a) a chi-square analysis assessing group differences (ex-POWs vs. controls) in appraisals, and (b) a two-way ANOVA assessing the effects of group, appraisal, and their interaction on the intensity of PTSD at T5. To examine the effect of PTSD trajectories (T1-T4) on PTSD at T5, participants were divided into five groups according to their PTSD classification at T1-T4: chronic PTSD (participants who met the criteria for PTSD in all four waves), delayed PTSD (participants who did not endorse the PTSD criteria in the first wave(s) but did in subsequent wave(s)), recovered PTSD (participants who endorsed the PTSD criteria in either of the first waves but not in the later/last waves), resilient (veterans who never endorsed the criteria for PTSD), and reactivation (participants who initially had PTSD, recovered, and then had a delayed reactivation of PTSD at a later measurement). A chi-square analysis examined differences in the trajectory rates between ex-POWs and controls. Another chi-square analysis, conducted among the ex-POWs, examined differences between PTSD trajectories in rates of PTSD at T5. A one-way ANOVA assessed differences between PTSD trajectory groups in the intensity of PTSD at T5.
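As a concrete illustration of this grouping, the sketch below classifies a four-wave PTSD status pattern into the five trajectories. The handling of edge cases (e.g., patterns that both begin late and relapse) is our reading of the definitions above, not the authors' published algorithm.

```python
def classify_trajectory(status):
    """status: four booleans (T1..T4), True = met DSM PTSD criteria that wave."""
    if all(status):
        return "chronic"
    if not any(status):
        return "resilient"
    first = status.index(True)                          # first wave with PTSD
    last = len(status) - 1 - status[::-1].index(True)   # last wave with PTSD
    if not all(status[first:last + 1]):                 # PTSD, remission, PTSD again
        return "reactivation"
    return "delayed" if first > 0 else "recovered"

# Lifetime PTSD, as defined above, is endorsement in at least one wave: any(status).
for s in ([True] * 4, [False, False, True, True], [True, True, False, False],
          [True, False, True, False], [False] * 4):
    print(s, "->", classify_trajectory(s), "| lifetime PTSD:", any(s))
```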
Finally, a logistic regression examined the contribution of different factors to rates of PTSD at T5. The predictors entered were group (step 1), life events since the war (step 2), lifetime PTSD (step 3), and appraisal of the effect of war-related experiences on current adjustment as a dummy variable (1 = hindering effect; 0 = appraisal as facilitating or irrelevant) (step 4).

Are ex-POWs at an increased risk for PTSD during the COVID-19 pandemic?
A series of chi-square (χ²) analyses indicated higher rates of PTSD among ex-POWs as compared to controls across all assessments (T1-T5; see Table 1). The comparison between ex-POWs and controls in the intensity of total PTSD and symptom clusters at T5, while controlling for life events since the war, resulted in a significant multivariate effect, F(4,160) = 12.59, p < 0.001. Further examination yielded significant group differences in each of the symptom clusters as well as in the total PTSD score (Table 1). Ex-POWs reported higher intensity of total PTSD, intrusion, avoidance, and hyperarousal than controls. The effects of life events since the war were not significant, F(4,160) = 0.33, p = 0.855.

The effect of captivity and appraisal of the extent to which war-related experiences affected adjustment to COVID-19 on PTSD at T5
Overall, ex-POWs and controls differed in their appraisal of the extent to which their war-related experiences affected their adjustment to COVID-19, χ²(2) = 23.02, p < 0.001. Forty-three (38.1%) of the ex-POWs perceived their war-related experiences as hindering their adjustment to COVID-19 compared to 11.3% (n = 7) of the controls; 20.4% (n = 23) of the ex-POWs appraised them as facilitating adjustment compared to 9.7% (n = 6) of the controls; and 41.6% (n = 47) of the ex-POWs perceived their war-related experiences as irrelevant to their current adjustment compared to 79% (n = 49) of the controls. A two-way ANOVA on the intensity of PTSD during COVID-19 yielded a significant main effect for captivity, F(1,169) = 22.19, p < 0.001, with higher PTSD intensity for ex-POWs (M = 2.37, SD = 0.73) than controls (M = 1.58, SD = 0.55). The analysis also revealed a significant main effect for the appraisal of the effect of war-related experiences on current adjustment, F(2,169) = 16.99, p < 0.001. That is, participants who appraised their experiences as hindering adjustment (M = 2.81, SD = 0.59) had a higher intensity of PTSD compared to veterans who perceived them as facilitating adjustment (M = 1.96, SD = 0.61) or irrelevant (M = 1.75, SD = 0.63). However, the interaction was not significant, F(2,169) = 0.55, p = 0.58, indicating that the effect of appraisal on PTSD intensity at T5 was similar among ex-POWs and controls.

The effect of PTSD trajectories on PTSD at T5
Specifically, of the resilient ex-POWs, only 11.6% developed PTSD during the COVID-19 pandemic, while of the recovery trajectory, 16.7% had PTSD during the pandemic. Of the ex-POWs in the chronic and delayed trajectory groups, 50% and 63.3%, respectively, had PTSD during the pandemic. Accordingly, of the ex-POWs in the reactivation trajectory group, 66.7% had PTSD during the pandemic. The ANOVA assessing differences between PTSD trajectory groups in PTSD intensity at T5 yielded a significant effect, F(4,139) = 16.19, p < 0.001. Bonferroni post hoc tests showed that resilient ex-POWs had the lowest levels of PTSD intensity during the pandemic (M = 1.73, SD = ...).

Table 1. Differences between ex-POWs and controls in rates of PTSD across all assessments, and intensity of PTSD symptoms at T5.
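The mediation finding reported in the next subsection rests on a first-order Sobel test of the indirect effect. A minimal sketch, with purely illustrative path coefficients (a: captivity to lifetime PTSD; b: lifetime PTSD to PTSD at T5) rather than the study's estimates:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """First-order Sobel statistic for the indirect (mediated) effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

z = sobel_z(a=0.9, se_a=0.3, b=1.2, se_b=0.4)      # hypothetical values
p_one_sided = 0.5 * math.erfc(abs(z) / math.sqrt(2))
print(f"Z = {z:.2f}, one-sided p = {p_one_sided:.3f}")
```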
Predicting risk for PTSD during the COVID-19 pandemic: a holistic model
The logistic regression examining the contributions of group, stressful life events, lifetime PTSD, and appraisals of wartime experiences to PTSD at T5 yielded a significant model, χ²(df = 4) = 79.04, p < 0.001, Cox & Snell R² = 36.2%. In the final step, lifetime PTSD was associated with an 8.15 times greater risk of developing PTSD at T5 (see Table 2). Appraisals of war-related experiences were not significant predictors of PTSD rates at T5. Finally, the significant association between captivity trauma and PTSD rates during the COVID-19 pandemic was reduced when lifetime PTSD was entered into the model (step 3). This reduction, depicted in Table 2, was significant according to the Sobel test, Z = 2.00, p = 0.02, indicating that the indirect effect was significant. In other words, the higher risk for ex-POWs to develop PTSD during the pandemic was mediated by lifetime PTSD.

Discussion
The findings of this study demonstrated that previously traumatized ex-POWs were more vulnerable during the current pandemic and had significantly higher rates and intensity of PTSD than comparable combat veterans. Moreover, veterans in both groups who perceived their war-related experiences as hindering their current coping with COVID-19-related stress were more likely to suffer from PTSD during the pandemic than veterans who perceived them as a facilitating or irrelevant experience. Most importantly, the results of this longitudinal study showed that chronic and delayed trajectories of PTSD among ex-POWs increased the risk for PTSD during the pandemic, and that lifetime PTSD mediated the effects of war captivity on PTSD during the current pandemic.

In keeping with the vulnerability perspective (Selye, 1976), and supported by numerous studies (Hantman et al., 1994; Kessler et al., 2018), the previously traumatized ex-POWs were more vulnerable to PTSD decades later when faced with the current stressors of COVID-19. Their prior trauma undermined their sense of safety (Janoff-Bulman, 2010) and taxed and eroded their self-efficacy and trust in their own abilities (Titchener and Ross, 1974), rendering them less prepared to cope with subsequent stressors. Moreover, our findings indicated that the current vulnerability of ex-POWs cannot be attributed to the life events that they experienced after their repatriation. It is plausible that the brutal and extreme experiences of war captivity, in which human lives are of no consequence, dwarf the meaning and effects of more common and mundane subsequent life events (Ruch et al., 1980).

Our findings revealed considerable variability in both the ex-POWs and the combatants regarding the effects of prior trauma on their current perceptions. In both groups, some survivors felt that their previous trauma (captivity, combat) negatively affected their adjustment to the current COVID-19 pandemic and thus made it more difficult to endure. Interestingly, more ex-POWs than control combatants felt that their previous trauma affected their perception of the current adversity. It transpired that the vast majority of the controls and most of the ex-POWs reported that their war experiences had no relevance in the context of COVID-19.
However, when we assessed the relationship between the participants' evaluation of the effects of previous trauma as enhancing or hindering coping with the current stress, we found that in both groups this attribution was significantly associated with their current PTSD. In both study groups, individuals who felt that their prior trauma made the current stress easier to endure had lower rates of PTSD than those who felt that their trauma history made COVID-19 more difficult to endure. This is consistent with an earlier Israeli study of Holocaust survivors who reportedly perceived the 1991 Gulf War as similar to their prior trauma and, as such, reacted with intense distress (Hantman et al., 1994). These findings clearly underscore the role of meaning-making of the prior trauma in the psychiatric response to current trauma.

Lifetime PTSD
While many of the participants met the PTSD symptom criteria, many others did not. One-third of the ex-POWs, and almost 90% of the controls, were not identified as having PTSD at any of the first four assessments. Consistent with the crisis resolution perspective (Solomon, 1993) and earlier studies (Solomon, 1993; Solomon et al., 1987), our findings indicated that it is not the history of prior exposure to trauma per se but rather its psychological outcome that affects reactions to subsequent trauma. We found that lifetime PTSD (that is, endorsing PTSD at least once in previous assessments) was associated with an increased risk for PTSD during the COVID-19 pandemic among ex-POWs but not in the control group. In other words, the elevated risk of current PTSD among ex-POWs was accounted for by their lifetime PTSD. Inspection of the regression analysis clearly demonstrated that lifetime PTSD, rather than trauma exposure, is implicated in the risk for PTSD upon subsequent stress. This finding is consistent with a prospective systematic study of young adults (Breslau et al., 2008), which found that the presence of PTSD as a result of subsequent trauma was limited to respondents with a history of PTSD. Why did prior PTSD increase the risk of PTSD in response to subsequent trauma? One cannot negate the possibility that a pre-existing vulnerability predated the first trauma (war captivity) and led to PTSD following other traumas. Yet, as Titchener and Ross (1974) argued, and as numerous studies of various populations have shown (Solomon and Mikulincer, 2006), the initial psychological rupture set in motion a process of posttraumatic decline with permanent effects, particularly a proneness to anxiety reactions entrenched in vulnerability. This vulnerability is likely to give rise to the reactivation and exacerbation of PTSD upon exposure to subsequent trauma (Christenson et al., 1981; Solomon, 1993; Solomon and Mikulincer, 2006).

Trajectories of PTSD
PTSD, like other anxiety disorders, is not a stable entity. To the best of our knowledge, this is the first study that examined not only PTSD following prior trauma but also the implication of PTSD trajectories measured prospectively at four time points over 29 years. As PTSD symptoms fluctuate over time, they are likely to tax and deplete trauma survivors' psychiatric resources differently and thus have a differential effect on their ability to cope with subsequent stress. Accordingly, our results reflect differential risk for PTSD during the COVID-19 pandemic as a function of PTSD trajectories over four waves of measurement (24 years).
At the greatest risk and most severely affected were those who had not recovered and had suffered for decades from chronic, debilitating PTSD. Second were those with delayed-onset PTSD, who lived for years after the war with residual subclinical symptoms; over time and with aging, their PTSD symptoms were reactivated and exacerbated, leaving them emotionally depleted and vulnerable. The most robust were those in the resilient trajectory, who were initially better emotionally equipped and for whom, therefore, subsequent stress had only limited pathogenic effects.

Limitations
The findings of the current study should be considered in light of its limitations. The sample size, especially that of the controls, is modest. Additionally, an initial assessment was not conducted within the first few years following the war, as the first assessment took place 18 years after the war. Although self-report symptom checklists based on the DSM criteria, such as the PTSD Inventory used in the current study, have been evaluated as valid, reliable, and effective for research purposes, especially when the questionnaire refers to specific traumatic events such as captivity or combat-related events (McDonald and Calhoun, 2010; Wilkins et al., 2011), the use of self-report questionnaires to identify PTSD might be considered another limitation. Finally, as the study was conducted among Israeli ex-POWs and combat veterans, generalizing from these results to other populations, in other times and cultures, should be undertaken cautiously.

Clinical implications
The results call for a need to monitor and provide support and clinical help to previously traumatized individuals during the current COVID-19 pandemic. This is particularly needed for elderly trauma survivors who suffer from PTSD and are currently in double jeopardy as an identified high-risk group for COVID-19. Given the lockdowns and social restrictions that compound the already restricted movement of the elderly, it is incumbent upon medical staff and other helping professionals, in care homes as well as in the community, to pay close attention to the psychological distress and needs of those with a history of PTSD following an event that may bear a resemblance to the current stressful experience, as they are at risk for reactivation and exacerbation. Furthermore, as previous studies have indicated that PTSD is often comorbid with other disorders (Ginzburg et al., 2010), other manifestations of distress should also be monitored and targeted. Support and evidence-based trauma treatments should be available and offered when relevant. Given that this pandemic is not over and another, potentially worse, wave may return, preparedness and precautions are called for.

Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors; therefore, no funding sources had any involvement in this study.

Declaration of interest
None of the authors have any conflicts of interest to report.
Optical Kerr spatiotemporal dark extreme waves

We study the existence and propagation of multidimensional dark non-diffractive and non-dispersive spatiotemporal optical wave-packets in nonlinear Kerr media. We report analytically and confirm numerically the properties of spatiotemporal dark lines, X solitary waves, and lump solutions of the (2+1)D nonlinear Schrödinger equation (NLSE). Dark lines, X waves, and lumps represent holes of light on a continuous wave background. These solitary waves are derived by exploiting the connection between the (2+1)D NLSE and a well-known equation of hydrodynamics, namely the (2+1)D Kadomtsev-Petviashvili (KP) equation. This finding opens a novel path for the excitation and control of spatiotemporal optical solitary and rogue waves, of hydrodynamic nature.

INTRODUCTION
Techniques to shape and control the propagation of electromagnetic radiation are of paramount importance in many fields of basic science and applied research, such as atomic physics, spectroscopy, communications, material processing, and medicine. 1,2 Among these, of particular interest are methods that produce localized and distortion-free wave-packets, i.e., free from spatial and temporal spreading due to diffraction or material group velocity dispersion (GVD), respectively. 3 There are two main strategies to achieve propagation-invariant electromagnetic wave packets. 4 The first methodology is based on the spatiotemporal synthesis of a special input wave, so that diffractive and dispersive effects compensate for each other upon linear propagation in the material. Building blocks of these linear light bullets are Bessel beams and their linear combinations, along with Airy pulses. 5 These waveforms enable the generation of spatiotemporally invariant packets such as the Airy-Bessel beams and the so-called X-waves, obtained by a linear superposition of Bessel beams with different temporal frequencies. The second approach involves the generation of solitary waves, which exploit the nonlinear (quadratic or cubic) response of the material to compensate for diffractive and dispersive wave spreading. 6,7 Although successfully exploited in (1+1)D propagation models, which describe, for example, temporal solitons in optical fibers and spatial solitons in slab waveguides, in more than one dimension spatiotemporal solitons have so far largely eluded experimental observation, owing to their lack of stability associated with the presence of modulation instability (MI), collapse, and filamentation. Here, we overview our recent contributions to the field of non-diffractive and non-dispersive wave-packets in Kerr media, 8-10 deriving analytically and confirming numerically the existence and propagation of novel multidimensional (2+1)D dark non-diffractive and non-dispersive spatiotemporal solitons propagating in (i) self-focusing and normal-dispersion Kerr media, and in (ii) self-defocusing and anomalous-dispersion Kerr media. The analytical dark solitary solutions are derived by exploiting the connection between the (2+1)D NLSE and the (2+1)D Kadomtsev-Petviashvili (KP) equation, 11 a well-known equation of hydrodynamics. Our results extend and confirm the connection between nonlinear wave propagation in optics and hydrodynamics, which was first established in the 1990s. 12
Namely, we consider the (2+1)D, or more precisely (1+1+1)D, NLSE

i u_z + α u_tt + β u_yy + γ |u|² u = 0,   (1)

where u(t, y, z) stands for the complex wave envelope; t and y represent the retarded time, in the frame traveling at the natural group velocity, and the spatial transverse coordinate, respectively; and z is the longitudinal propagation coordinate. Each subscripted variable in Eq. (1) stands for partial differentiation. α, β, γ are normalized real constants that describe the effects of dispersion, diffraction, and Kerr nonlinearity, respectively.

NORMAL DISPERSION AND SELF-FOCUSING REGIME: NONLINEAR LINES AND X-WAVES
At first, we consider the case of normal dispersion and self-focusing nonlinearity. 9,10 We proceed to consider the existence and propagation of (2+1)D NLSE dark line solitary waves, which are predicted by the existence of (2+1)D KP-II bright line solitons. 16,17 In the small-amplitude regime, an exact line bright soliton of Eq. (3) is available in closed form, 16,17 in which ε rules the amplitude and width of the soliton, ϕ is the angle measured from the υ axis in the counterclockwise direction, and c = 2ε + 3tan²ϕ is the velocity in the τ-direction; notice that c is of order ε. Moreover, we obtain φ(τ, υ, ς) = √ε tanh[√(ε/2)(τ + tanϕ υ + cς)]. The analytical spatiotemporal envelope intensity profile u(t, y, z) of a NLSE dark line solitary wave is given by the mapping (2), exploiting the KP bright soliton expression. The intensity dip of the dark line solitary wave is −ε, and the velocity in the z-direction is c₀ − c = 4 − 2ε − 3tan²ϕ.

We numerically verified the accuracy of the analytically predicted dark line solitary waves of the NLSE. To this end, we made use of a standard split-step Fourier technique, which is commonly adopted in the numerical solution of the NLSE (1). Figure 1 shows the numerical spatiotemporal envelope intensity profile |u(t, y, z)|² of a NLSE dark line solitary wave, which corresponds to the predicted analytical dynamics. As can be seen from the images, the numerical solutions of the NLSE show an excellent agreement with the analytical approximate NLSE solitary solutions.

In the long-wave context, the KP-II equation admits complex soliton solutions, mostly discovered and demonstrated in the last decade, which may describe non-trivial web patterns generated under resonances of line solitons. 16,17 Here, we consider the resonances of four line solitons, which give birth to the so-called O-type bright X-shaped two-soliton solution of the KP-II (the name O-type is due to the fact that this solution was originally found by using the Hirota bilinear method). In the small-amplitude regime, the O-type solution of Eq. (3) can be expressed as η(τ, υ, ς) = −2(ln F)_ττ, where F(τ, υ, ς) is the corresponding two-soliton tau function. 16,17

We numerically verified the accuracy of the analytically predicted O-type dark X solitary wave of the NLSE. Fig. 2 shows the (y, t) profile of the numerical solution of the hyperbolic NLSE at z = 0 and at z = 10. In this particular example we have chosen ε₁ = 0.2, ε₂ = 0.001. Specifically, Fig. 2 illustrates a solitary solution which describes the X-interaction of four dark line solitons. The maximum value of the dip in the interaction region is 2(ε₁ − ε₂)²(ε₁ + ε₂)/(ε₁ + ε₂ + 2√(ε₁ε₂)). Asymptotically, the solution reduces to two line dark waves for t ≪ 0 and two for t ≫ 0, with intensity dips ½(ε₁ − ε₂)² and characteristic angles ±tan⁻¹(ε₁ + ε₂), measured from the y axis.
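A minimal sketch of the split-step Fourier integrator used for these checks is given below, assuming the NLSE in the normalized form of Eq. (1) with sign choices corresponding to the self-focusing, normal-dispersion regime. The initial condition is only a schematic, y-independent amplitude dip; the full mapping (2), including the tanh phase profile, is not reproduced here, so the parameters and profile are illustrative rather than those of Figs. 1-2.

```python
import numpy as np

# First-order split-step Fourier scheme for i u_z + a u_tt + b u_yy + g |u|^2 u = 0.
a, b, g = -1.0, 1.0, 1.0                    # normal GVD, diffraction, self-focusing (assumed signs)
Nt = Ny = 256
t = np.linspace(-40, 40, Nt, endpoint=False)
y = np.linspace(-40, 40, Ny, endpoint=False)
T, Y = np.meshgrid(t, y, indexing="ij")
wt = 2 * np.pi * np.fft.fftfreq(Nt, d=t[1] - t[0])
wy = 2 * np.pi * np.fft.fftfreq(Ny, d=y[1] - y[0])
WT, WY = np.meshgrid(wt, wy, indexing="ij")

eps = 0.2                                   # small-amplitude parameter
u = np.sqrt(1.0 - eps / np.cosh(np.sqrt(eps / 2) * T) ** 2).astype(complex)

dz, nz = 0.01, 500
lin = np.exp(-1j * (a * WT ** 2 + b * WY ** 2) * dz)   # exact linear sub-step in Fourier space
for _ in range(nz):
    u = np.fft.ifft2(lin * np.fft.fft2(u))             # dispersion/diffraction
    u *= np.exp(1j * g * np.abs(u) ** 2 * dz)          # Kerr nonlinearity
print("dip depth at z =", nz * dz, ":", 1.0 - np.abs(u).min() ** 2)
```

A symmetrized (Strang) splitting, with half linear steps around the nonlinear step, improves the accuracy to second order in dz at negligible extra cost.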
Numerical simulations and analytical predictions are in excellent agreement. We estimate the error between the asymptotic formula and the X solitary wave in the numerics to be lower than 2%.

ANOMALOUS DISPERSION AND SELF-DEFOCUSING REGIME: DARK LUMPS
Next, we consider the case of anomalous dispersion and self-defocusing nonlinearity. 8 We proceed to verify the existence of (2+1)D NLSE dark-lump solitary waves, as predicted by the solutions of KP-I through Eq. (2) (see 18 for details). In the small-amplitude regime (ε ≪ 1), the KP lump-soliton solution of Eq. (3) can be written in closed form. The parameter ε rules the amplitude/width and velocity properties of the KP lump soliton. The lump peak amplitude in the (ς, υ) plane is −4ε; the velocity in the τ-direction is 3ε. The analytical spatiotemporal envelope intensity profile u(t, y, z) of a NLSE dark solitary wave is given by the mapping (2), which exploits the KP bright lump expression. We then numerically verified the accuracy of the analytically predicted dark lump solitary waves of the NLSE. Figure 3 shows the numerical spatiotemporal envelope intensity profile |u|² of a NLSE dark lump solitary wave in the y-t′ plane (t′ = t − c₀z), at the input z = 0 and after the propagation distance z = 100, for ε = 0.05. In the numerics, the initial dark NLSE profile, of KP-I lump origin, propagates stably in the z-direction, with virtually negligible emission of dispersive waves, with the predicted velocity c₀ + 3ε and an intensity dip of 4ε. Thus, the predicted theoretical dark lump solitary waves of Eq. (2) are well confirmed by numerical simulations.

We remark that the KP-I equation admits other types of lump solutions, which have several peaks of the same amplitude in the asymptotic stages |z| ≫ 0. We call such a lump solution a multi-pole lump. Here we show that the (2+1)D NLSE can also support such lump solutions. We consider a multi-pole lump solution with two peaks, which is expressed as η(τ, υ, ς) = −2∂²_τ log F, where F = |f₁² + if₂ + f₁/ε + 1/(2ε²)|² + |f₁ + 1/ε|²/(2ε²) + 1/(4ε⁴), and δ₁, δ₂ are arbitrary complex parameters. The analytical spatiotemporal envelope intensity profile u(t, y, z) of a NLSE dark solitary wave is again given by the mapping (2). Figure 4 shows the initial spatiotemporal envelope intensity profile |u|² of a two-peaked NLSE dark lump in the y-t′ plane, along with the numerically computed profiles after a propagation distance z = 150, for ε = 0.1 (τ₀ = 0, υ₀ = 0, ς₀ = −50, δ₁ = 0, δ₂ = 0). In particular, Fig. 4 depicts the scattering interaction of the two-peaked waves: two dark lumps approach each other along the t′-axis, interact, and subsequently recede along the y-axis. These solutions exhibit anomalous (nonzero deflection angle) scattering due to the multi-pole structure in the wave function of the inverse scattering problem. We remark that the numerical NLSE dynamics is in excellent agreement with the analytical dark solitary solution given by Eq. (2) with the KP-I multi-pole lump solution.

INSTABILITIES AND EXPERIMENTAL FEASIBILITY
Let us discuss the important issue of the stability of the predicted dark line, X solitary waves, and lumps. Two instability factors may affect the propagation of these waves. The first one is the modulation instability (MI) of the continuous wave background. In the case of normal dispersion and self-focusing, α < 0, β, γ > 0, MI is of the conical type. 19
Generally speaking, MI can be advantageous to form X waves from arbitrary initial conditions, both in the absence and in the presence of the background. However, for sufficiently long propagation distances, the MI of the CW background may compete with and ultimately destroy the propagation of dark solitary waves and their interactions. In the case of anomalous dispersion and in the self-defocusing regime, α > 0, β > 0, γ < 0, MI is absent; thus, lumps are not affected by MI. The second mechanism is related to the transverse instability of the line solitons that compose the asymptotic state of the X wave. We point out that such instability is known to occur for the NLSE, despite the fact that line solitons are transversally stable in the framework of the KP-II (unlike those of the KP-I). 11 However, in our simulations of the NLSE, these transverse instabilities never appear, since they are extremely long-range, especially for shallow solitons. In fact, we found that the primary mechanism that affects the stability of dark line and X solitary waves is the MI of the CW background.

Let us briefly discuss a possible experimental setting in nonlinear optics for the observation of cubic spatiotemporal solitary wave dynamics of hydrodynamic origin. As to the (2+1)D spatiotemporal dynamics, one may consider optical propagation in a planar glass waveguide (e.g., see the experimental set-up of Ref. 20), or a quadratic lithium niobate crystal in the regime of large phase-mismatch, which mimics an effective Kerr nonlinear regime (e.g., see the experimental set-up of Ref. 21). As far as the (2+1)D spatial dynamics is concerned, one may consider using a CW Ti:sapphire laser beam propagating in a nonlinear medium composed of atomic-rubidium vapor (e.g., see the experimental set-up of Ref. 14), or a bulk quadratic lithium niobate crystal, again in the regime of large phase-mismatch (e.g., see the experimental set-ups of Refs. 22, 23).

CONCLUSIONS
We have analytically predicted a new class of dark solitary wave solutions that describe non-diffractive and non-dispersive spatiotemporal localized wave packets propagating in optical Kerr media. We numerically confirmed the existence of nonlinear lines, X-waves, lumps, and peculiar scattering interactions of the solitary waves of the (2+1)D NLSE. The key novel property of these solutions is that their existence and interactions are inherited from the hydrodynamic soliton solutions of the well-known KP equation. Our findings open a new avenue for research in spatiotemporal extreme nonlinear optics. Given that deterministic rogue and shock wave solutions have, so far, been essentially restricted to (1+1)D models, future research on multidimensional spatiotemporal nonlinear waves will lead to a substantial qualitative enrichment of the landscape of extreme wave phenomena.
A family of mock theta functions of weights 1/2 and 3/2 and their congruence properties

In a private communication, K. Ono conjectured that any mock theta function of weight 1/2 or 3/2 can be congruent modulo a prime $p$ to a weakly holomorphic modular form for just a few values of $p$. In this paper we describe when such a congruence occurs. More precisely, we show that it depends on the $p$-adic valuation of the mock theta function itself. In order to do so, we construct a family of mock theta functions in terms of derivatives of the Appell sum, which have a special Fourier expansion at infinity.

In particular, the function $P$ is a weakly holomorphic modular form, i.e., a meromorphic modular form whose poles (if any) are supported at cusps. Using methods similar to those of S. Ahlgren and K. Ono, S. Treneer generalized their result to any weakly holomorphic modular form [21].

Coming back to (1.1), we consider now the left-hand side. In his last letter to Hardy, dated 1920, Ramanujan listed 17 hypergeometric series, which he called mock theta functions, describing some of their basic properties but without giving any precise definition. The function $f(q)$, i.e., the left-hand side of (1.1), is one of these mysterious functions. More than 80 years after this letter, a breakthrough was made by S. Zwegers in his 2002 Ph.D. thesis [24], where he characterized these special functions in three different ways: namely, in terms of Appell sums (see Subsection 2.2), as Fourier coefficients of meromorphic Jacobi forms, and as quotients of indefinite binary theta series by unary theta series. For a more extensive description of mock theta functions and a survey of their beautiful story we refer the reader to [17,23].

In order to define a mock theta function, for any $A/B \in \mathbb{Q}$ and $\varepsilon, \kappa \in \{0,1\}$, consider the unary theta function $\Theta_{A/B,\varepsilon,\kappa}$ of weight $1/2 + \kappa$, a $q$-series supported on an arithmetic progression determined by an explicit constant $B^*$ depending on $B$, where $\{x\}$ denotes the fractional part of $x \in \mathbb{Q}$. The preimage of $\Theta_{\alpha,\varepsilon,\kappa}$ under the operator $\xi_{3/2-\kappa} := 2iy^{3/2-\kappa}\,\partial/\partial\bar{\tau}$ yields the non-holomorphic theta function $R_{A/B,\varepsilon,\kappa}$, where $y = \operatorname{Im}(\tau)$ and
$$\beta_\kappa(t) := \int_t^{\infty} u^{\kappa - 3/2}\, e^{-\pi u}\, du. \qquad (1.2)$$
Following Zagier [23], we define a mock theta function of weight $3/2 - \kappa$ as a $q$-series $H(q) = \sum_{n \ge 0} a(n) q^n$ such that there exist a rational number $\lambda$ and a unary theta function $\Theta_{A/B,\varepsilon,\kappa}$ such that $q^{\lambda} H(q) + R_{A/B,\varepsilon,\kappa}(B\tau)$ is a non-holomorphic modular form of weight $3/2 - \kappa$ for a congruence subgroup of $\mathrm{SL}_2(\mathbb{Z})$. We will refer to the theta function $\Theta_{A/B,\varepsilon,\kappa}$ as the shadow of $H$. The function $f(q)$ defined above is a mock theta function of weight 1/2 with shadow $\Theta_{1/6,0,1}$.

As with classical modular forms, the Fourier coefficients of mock theta functions often have an interesting combinatorial interpretation. Dyson's rank generating function, characters associated to certain Lie superalgebras, and Hurwitz' class number generating function are examples of mock modular forms, to mention a few. During the last decade, results about linear congruences for mock theta functions have been obtained in certain special cases [4,11]. Among others, we point out the remarkable result of Bringmann and Ono [9] concerning the congruence properties of Dyson's rank generating function. Such identities rely on linear relations between the non-holomorphic parts. To be more precise, applying certain quadratic twists to mock theta functions, one obtains weakly holomorphic modular forms due to the cancellation of the non-holomorphic parts. In [8], N. Andersen proved that any linear congruence for the coefficients of $f(q)$ and $\omega(q)$ must come in this way.
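The special function $\beta_\kappa$ in (1.2) is an incomplete-gamma-type integral and is straightforward to evaluate numerically. For $\kappa = 1$ it reduces to a complementary error function, $\beta_1(t) = \operatorname{erfc}(\sqrt{\pi t})$; this closed form is our own elementary simplification (via the substitution $u = s^2/\pi$), not a formula quoted from the paper, and the sketch below verifies it by direct quadrature.

```python
import math
from scipy.integrate import quad

def beta_kappa(t, kappa):
    """beta_kappa(t) = integral from t to infinity of u^(kappa - 3/2) * exp(-pi*u) du, cf. (1.2)."""
    value, _ = quad(lambda u: u ** (kappa - 1.5) * math.exp(-math.pi * u), t, math.inf)
    return value

t = 0.7
print(beta_kappa(t, 1))                    # numerical quadrature
print(math.erfc(math.sqrt(math.pi * t)))   # closed form for kappa = 1; should agree
```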
A natural question arises: is Andersen's result true for any mock theta function? In other words, does the obstruction to modularity dictate an obstruction to congruence properties? In light of (1.1), the aim of this paper is to understand whether this is just a rare example or a more general phenomenon concerning congruences between mock theta functions and weakly holomorphic modular forms.

Remark 1.1. As in the case of holomorphic modular forms for congruence subgroups, we identify a mock theta function with its $q$-expansion at infinity.

In a private conversation, Ono conjectured that congruences such as (1.1) exist just in special cases.

Question (Ono). Suppose that $H$ is a mock modular form with (algebraic) integer coefficients. As a function of its weight and level, can one bound the largest integer $m$ for which there is a weakly holomorphic modular form $g$ with (algebraic) integer coefficients for which $H \equiv g \pmod{m}$?

Example. For Ramanujan's third order mock theta function $f$, is $m = 4$?

In order to answer this question, we construct a family of mock theta functions, one for each shadow $\Theta_{\alpha,\varepsilon,\kappa}$, whose Fourier coefficients at infinity have a particularly nice shape. We can therefore reduce to studying the congruence properties of these particular functions. To state the result, we refer to (2.1) for the definition of the weight 2 Eisenstein series $E_2$.

Theorem 1.2. Let $\kappa, \varepsilon \in \{0,1\}$ and let $A$ and $B$ be positive coprime integers. Then the function $f_{A/B,\varepsilon,\kappa}$, built from a derivative of the Appell sum and from $E_2$, is a mock theta function of weight $3/2 - \kappa$ and shadow $\Theta_{A/B,\varepsilon,\kappa}$.

Remarks. (i) The function $f_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa} - \tfrac{1}{12B}E_2(\tau)$ turns out to be the weight 2 holomorphic projection of $\Theta_{A/B,\varepsilon,\kappa}\,R_{A/B,\varepsilon,\kappa}$. The holomorphic projection operator sends real-analytic functions, with reasonable growth, that transform as modular forms to (almost) holomorphic modular forms. This operator was introduced by J. Sturm [20]. In [12], B. Gross and D. Zagier show that if the weight of the modular transformation property is greater than 2, then the image under the holomorphic projection is modular. If the weight is 2, then they show that, under certain assumptions on the Fourier expansion at the cusps, the image is modular up to the addition of a constant multiple of $E_2$. Recently, the holomorphic projection has been used by Imamoglu-Raum-Richter [13] in order to determine recursion formulas for the Fourier coefficients of Ramanujan's mock theta functions.

(ii) Unlike for weights larger than 2, the weight 2 holomorphic projection operator does not interchange with the slash operator. In particular, it is not trivial to understand the modularity property of the projection of $R_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa}$. Our method gives an alternative to this issue.

(iii) The approach of Imamoglu-Raum-Richter shows that the appearance of $E_2$ in the image of the weight 2 holomorphic projection depends on the representation associated to the transformation property of the original function. Theorem 1.2 implies that a trivial irreducible representation always appears in the decomposition into irreducibles of the representation associated to mock theta functions.

(iv) In Proposition 3.1 we will see another interesting shape for this object, which, among other applications, explains the well-known Hurwitz class number relations. Rhoades and S. Zwegers express the holomorphic projection of $\eta \cdot (\eta^3)^*$ as a derivative of the Appell sum.
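Since $E_2$ enters Theorem 1.2 and the remarks above, it may help to recall its $q$-expansion concretely. The sketch below assumes the standard normalization $E_2(\tau) = 1 - 24\sum_{n \ge 1} \sigma_1(n)q^n$; the paper's own normalization at (2.1) is not reproduced here, so this choice is an assumption.

```python
def sigma1(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def e2_coefficients(nmax):
    """Coefficients of E2 = 1 - 24 * sum_{n>=1} sigma_1(n) q^n (standard normalization)."""
    return [1] + [-24 * sigma1(n) for n in range(1, nmax + 1)]

print(e2_coefficients(6))   # [1, -24, -72, -96, -168, -144, -288]
```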
For any weakly holomorphic modular form $g$ of level $N$ and for any prime $p \nmid N$ (outside certain exceptional cases), $f_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa} \not\equiv g \pmod{p}$.

Two mock theta functions with the same shadow differ by a weakly holomorphic modular form; therefore we expect Theorem 1.3 to hold for any mock theta function. Although equation (1.1) appears to give a contradiction, its nature has nothing to do with the exceptional cases excluded in Theorem 1.3, but relies on the $p$-adic properties of $f(q)$. We describe this phenomenon more generally in the following result.

Corollary 1.4. Let $H$ be a mock theta function with associated non-holomorphic part $R_{A/B,\varepsilon,\kappa}$. Letting $p$ be a prime number and $j := \nu_p(H)$ be the $p$-adic valuation of $H$, the following is true. (i) If $j < 0$, then there exists a weakly holomorphic modular form $g$ of weight $\frac{1}{2}$ … (ii) If $p^{-j}H \equiv g \pmod{p^{\ell}}$ for a weakly holomorphic modular form $g$ and an integer $\ell > 0$, then $j \le -\ell$.

The remainder of the paper is organized as follows. In Section 2 we recall some basic arithmetic properties of weakly holomorphic modular forms and we describe the Appell sum. In Section 3 we construct a family of mock theta functions, proving Theorem 1.2. In Section 4 we prove the arithmetic properties of the mock theta functions described in Theorem 1.2, and in Section 5 we use them to prove Theorem 1.3 and Corollary 1.4.

Preliminaries

In this section we recall certain arithmetic results concerning weakly holomorphic modular forms and the Eisenstein series $E_2$. Finally, we recall the definition and describe the transformation properties of the Appell sum.

Arithmetic properties of weakly holomorphic modular forms and $E_2$

Arithmetic properties of weakly holomorphic modular forms have been described by Treneer [21]. Briefly speaking, for any weakly holomorphic modular form $g$ of level $N$ and for any prime $p$ coprime to $N$, Treneer constructs a cusp form which is congruent modulo $p$ to $g$, after sieving the coefficients. She then uses the following result of Serre [19, Exercise 6.4] in order to establish congruences for $g$. To be precise, Serre showed that any cusp form of integral weight $k > 1$ is annihilated modulo any prime $p$ by the $Q$th Hecke operator, for a positive proportion of the primes $Q$. This result immediately implies the following.

Proposition 2.1 (Serre). Suppose that $f = \sum_{n \ge 1} a(n)q^n$ is a cusp form of weight $k \ge 1$ and level $N$. Then for each prime $p$, a positive proportion of the primes $Q$ satisfy $a(Qn) \equiv 0 \pmod{p}$ for any integer $n$.

The previously mentioned result of Treneer for weakly holomorphic modular forms follows from Proposition 2.1.

Proposition 2.2 (Treneer). Suppose that $g$ is a weakly holomorphic modular form of level $N$ with coefficients $c(n)$, and let $p \nmid N$ be prime. Then there exist infinitely many primes $Q$ such that $c(Qp^mn) \equiv 0 \pmod{p}$ for any integer $n$ coprime to $Qp$.

Recall also that the completed Eisenstein series $\widehat{E}_2(\tau) := E_2(\tau) - \frac{3}{\pi y}$ is a weight 2 non-holomorphic modular form.

The Appell sum

For $\tau \in \mathbb{C}$, $u \in \mathbb{C} \setminus (\mathbb{Z}\tau + \mathbb{Z})$, and $v \in \mathbb{C}$, the Appell sum is defined by
$$A(u, v; \tau) := e^{\pi i u} \sum_{n \in \mathbb{Z}} \frac{(-1)^n q^{\frac{n(n+1)}{2}} e^{2\pi i n v}}{1 - e^{2\pi i u} q^n}, \qquad q := e^{2\pi i \tau}.$$
Zwegers constructed the completion $\widehat{A}$ of the Appell sum $A$ by adding a non-holomorphic correction term built from the function $\beta_1$ already introduced in (1.2). In the following proposition we recall some of the transformation properties satisfied by $A$.

Proposition 2.4 ([24]). Let $A(u, v; \tau)$ be as above; then the following hold.

A family of mock theta functions

In this section we prove Theorem 1.2. More precisely, we construct a family of mock theta functions which has a nice expression in terms of derivatives of the Appell sum and the Eisenstein series $E_2$.

Proof of Theorem 1.2. The function $f_{A/B,\varepsilon,\kappa}$ is clearly holomorphic. It remains to prove that it can be completed to be modular by adding the real analytic function $R_{A/B,\varepsilon,\kappa}$. Here we assume $\varepsilon = 0$; the computation for $\varepsilon = 1$ is exactly the same. In order to do so, we rewrite $f_{A/B,\varepsilon,\kappa}$ in terms of the Appell sum. We start by proving the modularity property. To simplify the notation, for $\gamma = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \mathrm{SL}_2(\mathbb{Z})$ we write $\gamma\tau := \frac{a\tau+b}{c\tau+d}$.
Also, we set $s := \frac{B}{2} - A$. By definition we have a decomposition into four terms, which we study separately using Proposition 2.4 and eq. (2.2). The first and second terms are computed using the elliptic transformation properties of $A$ (3.4), and a similar computation yields the third term. Finally, using (2.2) we rewrite the fourth term, which by definition of $f_{A/B,0,\kappa}$ equals $f_{A/B,0,\kappa}(\tau)\,\Theta_{A/B,0,\kappa}(B\tau)$. We conclude the proof of the theorem by computing the non-holomorphic part of $f_{A/B,0,\kappa}$. Writing $\vartheta'$ and $R'$ for the derivatives in the elliptic variable, the result follows using the parity properties of $\vartheta$ and $R$.

Note that the Fourier expansion of the mock modular form $f_{A/B,\varepsilon,\kappa}$ at any cusp has integral coefficients and its growth at the cusps is dictated by the decay of $\Theta_{A/B,\varepsilon,\kappa}$. In fact, the Fourier expansion at infinity of $f_{A/B,\varepsilon,\kappa}$ has a particularly nice shape, which we describe in the following proposition, in terms of the set $V_n := \{(a, b) \in \mathbb{Z}^2 : ab = n,\ b + Ba \equiv 2A \pmod{2B}\}$.

Proof. We refer to the statement of Theorem 1.2 for the definition of $f_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa}$. We consider each of the three summands separately. Expanding the denominator of the first summand, which we call $\Sigma_1$, in a geometric series and replacing $n$ by $s - m$, we rewrite it as (3.7). Similarly, the second summand, which we call $\Sigma_2$, can be written in an analogous form, and $\Sigma_1 + \Sigma_2$ can be combined into the single formula (3.8). We fix a $q$-exponent $n$. From (3.8) we know that $n = (s - m)(2A + B(s + m))$ for certain integers $s$ and $m$. Since the underlying map is a bijection, equation (3.8) can be rewritten as a sum over $V_n$. Adding the contribution of $E_2$ completes the proof.

Congruences for $f$

In this section we prove certain congruence properties satisfied by the mixed mock modular form $f_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa}$. We shall see in the next section that these properties are not satisfied by weakly holomorphic modular forms. In other words, we will see that the non-holomorphic function $R_{A/B,\varepsilon,\kappa}$ causes an obstruction to the congruence between a mock modular form and a weakly holomorphic modular form. In order to give the precise statement, we recall the definition of the $p$-adic valuation of a $q$-series. Letting $p$ be a prime and $g(\tau) = \sum_n a(n)q^n \in \mathbb{Q}((q))$, the $p$-adic valuation of $g$ is defined by $\nu_p(g) := \inf_n \nu_p^*(a(n))$, where $\nu_p^*$ is the standard $p$-adic valuation on $\mathbb{Q}_p$. Moreover, two $q$-series $g$ and $h$ are congruent modulo $p^m$ if $\nu_p(g - h) \ge m$.

2. There exist infinitely many integers $m \ge 2$ such that, for any sufficiently large prime $Q \equiv \pm 1 \pmod{B}$, there exists an integer $k$ coprime to $2Q$ such that $c_{Q2^mk} \equiv 2 \pmod{4}$.

We split the proof of Proposition 4.1 according to the parity of the prime $p$. Also, we give the proof for $\varepsilon = 0$; the case $\varepsilon = 1$ is analogous and is proven in the author's Ph.D. thesis.

Congruences modulo odd primes

Here $\ell \ne p$ denotes a prime sufficiently small with respect to $Q$ such that $\ell \equiv A \pmod{B}$. We split the proof of Proposition 4.1 into three cases, according to the residue class of $B$ modulo 4. The special case $B \equiv 0 \pmod{p}$ will be treated separately; from now on we can assume $(p, B) = 1$. The Chinese remainder theorem and Dirichlet's prime number theorem guarantee the existence of $\ell$ with the required properties. If $\Psi_{4Q}(2Q) = 1$ then $c_2 \not\equiv 0 \pmod{p}$. Otherwise, using the same argument as in the previous cases, it is possible to find a prime $\ell$ such that either $c_1 \not\equiv 0$ or $c_3 \not\equiv 0 \pmod{p}$.

Congruences modulo 2

The proof of Proposition 4.1 for $p = 2$ is analogous to the proof in the case of odd $p$; therefore we do not prove it here.
We only mention that in this case one can show that a linear combination of $c_1 := 2c_{Q2^m}$ and $c_2 := 2c_{Q2^m\ell}$ is congruent to 2 modulo 4. A detailed proof can be found in the author's Ph.D. thesis.

Proof of the main results

In this section we prove Theorem 1.3 and Corollary 1.4.

Proof of Theorem 1.3. Assume by contradiction that there exist a weakly holomorphic modular form $g$ and a prime $p$ such that $f_{A/B,\varepsilon,\kappa}\,\Theta_{A/B,\varepsilon,\kappa} \equiv g \pmod{p}$. Proposition 2.2 applied to $g$ implies that there exist infinitely many primes $Q$ such that $c(Qp^mn) \equiv 0 \pmod{p}$ for any integer $n$ coprime to $Qp$. This contradicts Proposition 4.1.

Proof of Corollary 1.4. Since $H$ has the same non-holomorphic part as $f_{A/B,\varepsilon,\kappa}$, the difference $m := H - f_{A/B,\varepsilon,\kappa}$ is a weakly holomorphic modular form of weight $\frac{1}{2}$ and level $N$. (ii) Conversely, assume that there exists a weakly holomorphic modular form $g$ such that $p^{-j}H \equiv g \pmod{p^{\ell}}$, (5.1) and assume by contradiction that $j > -\ell$. Note that equation (5.1) implies that $\nu_p(g) = 0$. In particular, $\nu_p(f_{A/B,\varepsilon,\kappa} + m - p^jg) = \nu_p(H - p^jg) \ge j + \ell$, i.e., $f_{A/B,\varepsilon,\kappa} \equiv p^jg - m \pmod{p^{j+\ell}}$. To conclude the proof it is enough to use Theorem 1.3.
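To make the valuation $\nu_p$ used in Sections 4 and 5 concrete, the following minimal Python sketch computes $\nu_p$ of a $q$-series with rational coefficients; the toy series $g$ at the end is invented for illustration and is not one of the forms studied above.

from fractions import Fraction

def nu_p_star(x, p):
    # Standard p-adic valuation on Q: the exponent of p in the rational x.
    # By convention the valuation of 0 is +infinity.
    if x == 0:
        return float("inf")
    x = Fraction(x)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def nu_p(coeffs, p):
    # nu_p of a q-series given as {exponent: coefficient}:
    # the infimum of nu_p_star over the coefficients.
    return min(nu_p_star(c, p) for c in coeffs.values())

# Toy example: g = (3/4)q^{-1} + 2q + 8q^2, so nu_2(g) = -2.
g = {-1: Fraction(3, 4), 1: Fraction(2), 2: Fraction(8)}
print(nu_p(g, 2))  # prints -2

In this language, two series are congruent modulo $p^m$ exactly when $\nu_p$ of their difference is at least $m$, matching the definition recalled above.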
EV PD-L1 Contributes to Immunosuppressive CD8+ T Cells in Peripheral Blood of Pediatric Wilms Tumor

Wilms tumor (WT) is the most common renal cancer and the most prevalent abdominal cancer in children. Children with recurrent or progressive forms of WT could benefit from novel immune-targeted approaches, yet the immune status of these patients, especially the immunosuppression of peripheral T cells, has rarely been reported. The present study enrolled a consecutive series of 14 Chinese WT children and 14 age- and gender-matched healthy controls. We demonstrated that plasma extracellular vesicular (EV) PD-L1 levels were significantly higher in WT patients than in healthy controls. EV PD-L1 significantly inhibited the activation of human CD8+ T cells by down-regulating cell surface CD69 expression and intracellular IFNγ and TNFα production in vitro. In peripheral CD8+ T cells of WT patients, intracellular IFNγ and TNFα production was significantly lower than in healthy controls. The level of plasma EV PD-L1 significantly correlated with intracellular TNFα production in peripheral CD8+ T cells of WT patients. In conclusion, the significantly increased plasma EV PD-L1 in WT patients contributed to the immunosuppression of peripheral CD8+ T cells. Monitoring the level of plasma EV PD-L1 will be helpful for the selection of immune-targeted therapies for WT patients.

Introduction

Wilms tumor (WT), also known as nephroblastoma, is the most common renal cancer and the most prevalent abdominal cancer in children.1,2 Although 90% of WT demonstrate "favorable" histology and respond well to effective treatments, recurrence occurs in about 15% of WT children with favorable pathological types, and the survival rate of unfavorable histology ranges from 84% to 38%, depending on the disease stage.[2][3][4] Children with recurrent or progressive forms of WT could benefit from novel immune-targeted approaches. In recent years, immune checkpoint protein inhibitors, especially antibodies against programmed death-ligand 1 (PD-L1) and programmed death-1 (PD-1), have proven to be a revolutionary therapy against many types of tumors.[5][6][7] As a membrane-bound ligand, PD-L1 has been reported to be upregulated in almost all types of tumors and to be associated with poor prognosis.8 Tumor PD-L1 can suppress the proliferation, cytokine secretion, and cytotoxicity of infiltrating CD8+ T cells through binding to the PD-1 receptor.9,10 Therapeutic antibodies, by blocking the interaction between PD-L1 and PD-1, can reactivate the anti-tumor T-cell response.11,12 However, recent studies demonstrated that PD-L1 can also be expressed on the surface of extracellular vesicles (EVs), and EV PD-L1 (ePD-L1) levels have been associated with tumor progression in some types of adult cancer.[13][14][15][16] Tumor-derived EVs have also been reported to regulate tumor-infiltrating lymphocytes,17 and exosomes isolated from the plasma of cancer patients have demonstrated immunosuppressive activity.18,19 The presence of PD-L1 on plasma EVs may be one important reason for the low response rate of anti-PD-L1/PD-1 therapy in cancer patients.[20][21][22] The association of EV PD-L1 with pediatric tumors is rarely reported, and there is no report on the association of EV PD-L1 with the function of peripheral T cells in pediatric tumors. The present study aims to determine whether the plasma concentration of ePD-L1 is increased in WT children and to assess the role of ePD-L1 in CD8+ T-cell activation.
The function of peripheral CD8+ T cells in WT children was also compared with that of healthy controls. These results are helpful to reveal the mechanisms by which tumor cells systemically suppress the immune system in pediatric tumors, especially in WT.

Patients and Sampling

A consecutive series of 14 Chinese WT children who were first treated at Beijing Children's Hospital between the years 2018 and 2019 was recruited to the study (Table 1). At the same time, 14 age- (U = 70, P = .2050) and gender-matched (U = 91, P > .9999) healthy Chinese children were also enrolled in this study as controls. Peripheral blood specimens (2-4 mL) were collected before treatment and centrifuged at 1000g for 10 min at room temperature. Plasma samples were collected and centrifuged again at 2500g for 15 min at room temperature to obtain platelet-free plasma, which was stored in aliquots at −70°C. This study was approved by the Beijing Children's Hospital Ethics Committee (2017-53). All human subjects or their parents provided written informed consent.

Isolation of EVs

EVs were isolated using the Total Exosome Isolation Kit (from plasma) (ThermoFisher Scientific) according to the manufacturer's instructions. Briefly, frozen platelet-free plasma was thawed immediately before EV isolation and diluted 1:1 in PBS. To the diluted plasma samples, 0.2 volumes of Exosome Precipitation Reagent (from plasma) was added. The resulting mixture was incubated at room temperature for 10 min and then centrifuged at 10 000g for 5 min at 4°C. The supernatant was carefully aspirated and the pellet was resuspended in 50 μL of PBS.

Characterization of EVs

Morphological examination of isolated EVs was done using a transmission electron microscope. Forty microlitres of isolated EVs were fixed with 4% paraformaldehyde and loaded on a 300-mesh copper grid. After staining with 2% phosphotungstic acid for 1 to 2 min, the EV samples were dried using an electric incandescent lamp for 10 min. Data were acquired using a transmission electron microscope (JEOL JEM-2100) at an accelerating voltage of 160 kV. The number and size of EVs were examined using a NanoSight NS300 instrument with a 405 nm laser (Malvern Instruments, United Kingdom), as previously described.23 The camera level was maintained at 10 for light scatter mode. Three videos of typically 60 s duration were taken, with a frame rate of 30 frames per second. For optimal results, microvesicle concentrations were adjusted to obtain ∼50 microvesicles per field of view. Data were analyzed with NTA 3.0 software (Malvern Instruments).

Immunostaining and Imaging of Plasma EVs

Platelet-free plasma samples were first centrifuged at 2500g for 10 min at room temperature; then FITC-anti-CD63 (10 μg/mL) and PerCP-anti-PD-L1 (10 μg/mL) were added to the plasma samples for 2 h at room temperature. The plasma EVs were then purified using the Total Exosome Isolation Kit (from plasma) and resuspended in 20 μL PBS. The fluorescent-stained EVs were smeared on a glass slide and visualized using a laser-scanning confocal microscope (TCS SP8 STED, Leica, magnification 63 × 10). The percentage of PD-L1-positive EVs from 5 randomly selected high power fields (magnification 4 × 63 × 10) was calculated.

ELISA Assay

For detection of PD-L1 on EVs, the Human B7H1/PD-L1 ELISA Kit was used according to the manufacturer's instructions (RayBiotech). The EVs derived from the plasma of patients and controls were prepared using the same volume of PBS as the plasma they were originally derived from.
For samples below the minimum detectable concentration of PD-L1, a re-examination was performed using four times the standard sample quantity; the minimum detectable PD-L1 concentration in this study was therefore 10 pg/mL.

T-Cell Suppression by EVs From WT Patients

Peripheral blood mononuclear cells (PBMCs) were isolated from fresh whole blood of healthy donors using a Ficoll gradient. T cells (4 × 10^5/well) were cultured in RPMI 1640 medium (Gibco) in 48-well plates and activated with anti-CD3/anti-CD28 antibody-coated beads (4%, StemCell Technologies) and IL2 (150 IU/mL) for 6 h at 37°C. The activated T cells were treated with EVs (with high PD-L1 content from WT patients and low PD-L1 content from controls) or PBS (as negative controls) for 24 h at 37°C. T cells were then harvested, and cell surface marker and intracellular cytokine staining for flow cytometry analysis were performed (see below).

Statistical Analysis

Data are presented as the median (Q1, Q3). The non-parametric Mann-Whitney U-test was used for comparisons between groups. Correlations between two continuous variables were determined by Pearson's coefficient. The correlation between ePD-L1 and patients' stage was determined by non-parametric Spearman correlation. A two-sided P-value of <.05 was regarded as significant. Statistical analyses and graphing were performed using GraphPad Prism 8.
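As an illustrative sketch of the group comparison and correlation analyses just described (here in Python with SciPy rather than GraphPad Prism; all numbers below are invented placeholders, not study data):

from scipy.stats import mannwhitneyu, pearsonr

# Plasma ePD-L1 (pg/mL); placeholder values for illustration only.
epdl1_wt = [35.1, 22.4, 48.9, 15.0, 60.2]
epdl1_ctl = [10.0, 12.3, 10.0, 18.7, 11.5]

# Non-parametric comparison between patients and controls.
u_stat, p_value = mannwhitneyu(epdl1_wt, epdl1_ctl, alternative="two-sided")

# Correlation of ePD-L1 with intracellular TNF-alpha MFI in CD8+ T cells.
tnfa_mfi = [410.0, 520.5, 300.2, 610.1, 250.4]
r, p_corr = pearsonr(epdl1_wt, tnfa_mfi)

print(u_stat, p_value)
print(r, p_corr)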
Significantly Increased ePD-L1 Levels in Plasma of WT Patients

Plasma EVs were isolated using the Total Exosome Isolation Kit and identified by transmission electron microscopy (Figure 1a) and nanoparticle tracking analysis (NTA) (Figure 1b). Exosomes (smaller than 200 nm in diameter) accounted for about 95.5% of plasma EVs according to the results of NTA. The WT children exhibited significantly higher plasma ePD-L1 concentrations compared with controls (U = 51, P = .0284), as shown in Figure 1c. The plasma ePD-L1 concentration of WT patients significantly correlated with WT stage (r = 0.6913, P = .0083), as shown in Figure 1d. The differential expression of PD-L1 in peripheral EVs of WT children was further confirmed by confocal microscopy imaging of 3 randomly selected WT patients with low, intermediate, and high plasma ePD-L1 concentrations. The percentage of PD-L1-positive EVs from five randomly selected high power fields was calculated, as shown in Figure 2.

Figure 2. Immunostaining and confocal microscopy images of plasma ePD-L1 of 3 WT patients. Platelet-free plasma of 3 WT patients (with low, intermediate, and high ePD-L1 expression, respectively, as shown in Table 1) was stained with FITC-anti-CD63 (10 μg/mL) and PerCP-anti-PD-L1 (10 μg/mL) for 2 h at room temperature; plasma EVs were then purified and resuspended in 20 μL PBS. The fluorescent-stained EVs were smeared on a glass slide and visualized using a laser-scanning confocal microscope (TCS SP8 STED, Leica). The percentage of PD-L1-positive EVs from five randomly selected high power fields (magnification 4 × 63 × 10) was calculated. The representative confocal microscope image (a) and quantification of PD-L1-positive EVs of these three WT patients (b) are shown.

ePD-L1 Contributes to Immunosuppression of CD8+ T Cells In Vitro

The role of ePD-L1 in CD8+ T-cell function was assessed. Isolated normal human T cells were activated and co-incubated with PD-L1-high (from WT children) or PD-L1-low (from healthy controls) EVs in vitro. The activated human CD8+ T cells demonstrated upregulated surface CD28, CD69, and PD-1 expression as well as increased intracellular IFNγ and TNFα production. Following co-culture of these T cells with PD-L1-high EVs, levels of CD69, IFNγ, and TNFα decreased significantly (Figure 3b). The expression levels of surface CD28 and PD-1 in CD8+ T cells did not decrease significantly under treatment with PD-L1-high EVs (Figure 3b). Contrary to the PD-L1-high EVs, co-culture of T cells with PD-L1-low EVs did not significantly decrease the levels of CD69, IFNγ, or TNFα (Figure 3b). These results demonstrated that the EVs (with high PD-L1 levels) of WT children are biologically active in interfering with the activation of effector CD8+ T cells.

Figure 3. ePD-L1 inhibits the activation of human CD8+ T cells in vitro. Human PBMCs from healthy donors were isolated and cultured in a 48-well plate. The PBMCs were activated with anti-CD3/anti-CD28 antibody and were treated with EVs with high (>20 pg/mL; from WT patients) or low (<10 pg/mL; from healthy donors) PD-L1 expression, or treated with PBS as control. Cell surface marker and intracellular cytokine staining for flow cytometry analysis were performed. Representative histograms of flow cytometry analysis are shown (a). (b) The percentage of CD28+ cells, and the MFI of CD69, PD-1, IFNγ, and TNFα in CD8+ T cells of the control, ePD-L1-high and ePD-L1-low groups are shown as scatter plots. Statistical analysis was performed using a non-parametric Mann-Whitney U-test. Abbreviations: ePD-L1, extracellular vesicle PD-L1; MFI, mean fluorescent intensity; PBMCs, peripheral blood mononuclear cells.

Plasma ePD-L1 Levels Correlated with Inhibited Peripheral CD8+ T-Cell Function of WT Patients

The peripheral CD8+ T-cell function of WT patients was investigated by flow cytometry. Compared with peripheral CD8+ T cells from healthy controls, peripheral CD8+ T cells from WT patients had a significantly increased proportion of CD28+ cells (U = 9, P = .0143, Figure 4a), and significantly decreased intracellular IFNγ (U = 12, P = .0339, Figure 4d) and TNFα (U = 8, P = .0103, Figure 4e) levels.

Figure 4. Inhibited peripheral CD8+ T cells of WT patients. PBMCs from WT patients or healthy controls (plasma ePD-L1 <10 pg/mL) were isolated and stimulated with phorbol-12-myristate-13-acetate and ionomycin for 4 h. Cell surface markers (CD28, CD69, and PD-1) and intracellular IFNγ and TNFα staining for flow cytometry analysis were performed. Representative histograms of flow cytometry analysis are shown. The percentage of CD28+ cells, and the MFI of CD69, PD-1, IFNγ, and TNFα in CD8+ T cells of WT patients and controls are shown as scatter plots (a-e). Statistical analysis was performed using a non-parametric Mann-Whitney U-test.

The correlations of the significantly changed markers (CD28, IFNγ and TNFα) with plasma ePD-L1 levels were then assessed, as shown in Figure 5. The plasma ePD-L1 levels significantly negatively correlated with intracellular TNFα production (r = −0.6001, P = .0233, Figure 3f) in peripheral CD8+ T cells, while the plasma ePD-L1 levels were not correlated with cell surface CD28 expression (r = 0.3927, P = .1648, Figure 5a) or intracellular IFNγ production (r = −0.4218, P = .1331, Figure 5b).

Discussion

Although improved therapies have greatly increased the survival rate of WT, recurrence occurs in about half of WT children.25 The survival rate of unfavorable histology ranges from 84% to 38%.[2][3][4]
On the other hand, immune checkpoint inhibitors, especially antibodies against PD-L1/PD-1, have made great progress in cancer treatment in recent years. It is feasible to treat recurrent or unfavorable-histology WT with immune checkpoint inhibitors. Recent studies reported the association of plasma ePD-L1 with the low response rate of anti-PD-L1/PD-1 therapy in cancer patients.[20][21][22] It is therefore interesting to study the expression level of plasma ePD-L1 in WT patients and to explore its role on T cells. The present study demonstrated that the plasma concentration of ePD-L1 was significantly increased in WT children compared with healthy controls. Through immunofluorescent staining and confocal microscopy imaging, we confirmed the expression of PD-L1 on EVs. The results of NTA indicated that more than 95% of plasma extracellular vesicles were exosomes (30-200 nm in diameter). This demonstrated that exosomal PD-L1 accounts for the vast majority of plasma extracellular membrane-bound PD-L1 in WT children. It has been reported that tumor cell-derived exosomes contribute to immunosuppression through membrane PD-L1.13,14 By investigating the effect of EVs from WT patients (with higher PD-L1 expression) and from healthy controls (with lower PD-L1 expression) on the activation of cultured human CD8+ T cells, we demonstrated that EVs from WT patients significantly decreased cell surface CD69 expression and intracellular IFNγ and TNFα production. These results indicated that the increased ePD-L1 concentrations in WT patients were involved in the inhibition of CD8+ T-cell activation. Our results are consistent with previous findings that EVs or exosomes from cancer patients mediate the immune suppression of activated T cells.[13][14][15][16]23 In order to explore the immunosuppression status of peripheral CD8+ T cells in WT patients, we compared the function of peripheral CD8+ T cells in WT children and healthy controls. The results show that the peripheral CD8+ T cells of WT patients had significantly decreased intracellular IFNγ and TNFα production, although the expression level of CD28 was increased in peripheral CD8+ T cells of WT patients. These results demonstrated the in situ immunosuppression of peripheral CD8+ T cells by plasma ePD-L1 in WT. Our results are consistent with Poggio's report that suppression of ePD-L1 could induce systemic anti-tumor immunity.26 In this context, the plasma level of ePD-L1 may be associated with tumor characteristics and may affect anti-PD-L1/PD-1 therapy. Nephrectomy plus systemic chemotherapy is the routine treatment for WT. For most patients, standard therapy can achieve satisfactory results, but for patients with recurrence or more aggressive disease, combination chemotherapy is usually administered.27,28 For WT patients with poor prognosis, considering the side effects of chemotherapy, immune checkpoint inhibitors may be a feasible option. The study of plasma ePD-L1 in WT patients is helpful for the choice of immunotherapy. In summary, our results demonstrated that the significantly increased plasma ePD-L1 levels in WT patients were biologically active in suppressing the activation of CD8+ T cells, and were significantly correlated with intracellular TNFα production in peripheral CD8+ T cells of WT patients. Further studies are needed to validate their potential role in WT patients.

Ethical Approval

This study was approved by the Beijing Children's Hospital Ethics Committee (2017-53).
All human subjects or their parents had provided written informed consent.
Anthropometric surrogates of Birth weight reproducible by Community Health Workers: A hospital-based cross sectional study at Mbarara Regional Referral Hospital, South Western Uganda

Background: In many resource constrained countries, Uganda inclusive, women continue to give birth at home/in the community, where there are no weighing scales to measure and record birth weight. There is also a lack of enough weighing scales and skilled health personnel at the health facility level to ensure that the birth weight of every child is determined in a timely manner. Birth weight is an indicator of the chances for survival, growth, long-term health and psychosocial development of neonates. Different anthropometric parameters are reliable surrogates of birth weight, although they are highly contextual. This study assessed the best anthropometric surrogate of birth weight usable at facility and community levels in western Uganda.

Methods: A cross sectional study was conducted between July and September 2017, whereby anthropometric values of 553 neonates born at Mbarara Regional Referral Hospital were measured by two midwives and later repeated by two community health workers to determine reproducibility. Data regarding birth weight, height, foot length and the circumferences of the head, mid upper arm, chest, thigh and calf were collected and recorded. Frequencies and percentages, and means and standard deviations, were used to describe the categorical and continuous demographics of the neonates, respectively. Pearson correlations, specificity, sensitivity, likelihood ratios, diagnostic odds ratios and the area under the curve (AUC) were determined and used to establish the most reliable anthropometric parameter that best estimates the birth weight of neonates.

Conclusions: Chest circumference taken within 24 hours after birth is the best anthropometric surrogate measure of birth weight. Community health workers can measure chest circumference with almost the same accuracy as midwives.

Background

Resource constrained countries lack enough weighing scales and trained personnel to determine birth weight for every child (Jitta and Kyaddondo, 2008). Birth weight is an indicator of a neonate's chances for survival, growth, long-term health and psychosocial development (McGuire, 2017).
Long-term health translates into increased gross domestic product, a strong driver of national development and improved livelihoods of the entire population (Cheruiyot, 2015). Many scholars have reported different anthropometric parameters that are reliable proxy measures for determining birth weight (Dhar et al., 2002; Gozal et al., 1991). However, the use of anthropometric parameters to determine birth weight is highly contextual (Begić et al., 2016). A study in eastern Uganda reported that foot length (when compared to chest, mid upper arm, head, thigh and calf circumferences) is the best surrogate measure of birth weight (Nabiwemba et al., 2013). In most of these studies, anthropometric data were collected only by midwives, casting doubt on whether community health workers can effectively use the designed anthropometric tools. Community health workers in Uganda frequently interface with newborn babies before trained health workers do, since a significant number of mothers still deliver at home/in the community (Uganda Bureau of Statistics & ICF, 2017). This study aimed at determining the best anthropometric parameter to use as a proxy measure of birth weight that is reproducible by community health workers in western Uganda.

Methods

This study employed a cross-sectional study design. The study was conducted at Mbarara Regional Referral Hospital (MRRH) in south western Uganda. The hospital is situated in Mbarara municipality along Kabale road, about 270 km from Kampala, the capital city of Uganda. It is a 400-bed capacity hospital but serves a far higher inpatient population, since about 4 million people reside within its wide catchment area. It is a University Teaching Hospital for Mbarara University of Science and Technology, and its Gynaecology and Obstetrics Department has a maternity wing that handles about 21 deliveries per day. We recruited neonates from the labour suite and maternity ward in the gynaecology section of the hospital in the months of July to September 2017.

Characteristics of study participants

In this study, we recruited neonates within their first 24 hours after birth using a consecutive sampling method. Our exclusion criteria were neonates not weighed by midwives within one hour after birth and before breast feeding. Also, sick and weak neonates, those under intensive care, those with congenital abnormalities or dysmorphic features, and/or those weighing <1000 g were to be excluded. However, no neonate met the exclusion criteria. Purposive sampling was used to select two midwives who had worked for at least 6 months at the maternity ward/labour suite of MRRH. Similarly, two community health workers (CHWs) with at least ordinary secondary education who had worked for more than 6 months as members of village health teams in Mbarara municipality were recruited. Midwives measured birth weight and anthropometric data from the neonates. CHWs repeated the measurements of anthropometric parameters on the neonates.

Study processes

Using the Kish and Leslie (1965) formula with a design effect of 2.0, as used in childhood anthropometrics (Hulland et al., 2016), 638 neonates were to be recruited. Of a total of 1,200 neonates born during the study period, 562 were not measured in the first hour after birth, and 85 neonates measured by midwives were not accessed for measurement by community health workers because they could not be traced on the maternity ward (Appendix 1). Thus, only data from 553 neonates were analysed.
Two midwives working in the maternity wing of the hospital and two community health workers from Mbarara municipality were recruited and trained for two days, under one roof, on anthropometric techniques. Our main outcome variable was the birth weight of neonates. Birth weight was determined by midwives using a weighing scale (Salter model 180). Weighing scale standardization was done on a daily basis throughout the process of data collection. The neonate would be laid down on the levelled pan of the weighing scale, and the midwife would then read and record the weight in grams. Anthropometric correlates of birth weight measured in this study included the circumferences (head, mid upper arm, chest, thigh, and calf), foot length, and height. Using non-extendable measuring tapes with a width of 1.0 cm and subdivisions of 0.1 cm, midwives measured the circumferences of the neonates' head, mid upper arm, chest, thigh, and calf. Head circumference was measured between the glabella anteriorly and along the occipital prominence at the back of the head. Mid upper arm circumference was measured from the midpoint between the tip of the shoulder and the elbow by moving the tape around the arm to the starting point, while chest circumference was measured by fixing the starting point of the tape measure at the tip of the xiphoid process and moving it around the back of the neonate to the starting point. While keeping the neonate sleeping on its back, the measurement would be read on full inspiration. Similarly, the thigh and calf circumferences were measured from respective fixed points. A hard transparent plastic ruler was used to measure foot length, and a calibrated height/length measuring board was used to measure the neonates' lengths. Foot length was measured from the heel to the tip of the big toe of the right foot using the transparent ruler. Length was measured using the calibrated measuring board: the neonate was made to lie supine on the board, the heel was fixed at the zero point, and the length from heel to crown was noted and recorded in centimeters. These measurements, except birth weight, were independently repeated on each neonate by the community health worker, who had previously been introduced to the hospital premises and departmental staff for familiarization with the maternity ward. Data were entered in Microsoft Excel version 2010, where they were edited and checked for completeness and consistency. Data were then exported into SPSS version 20 for analysis. Categorical characteristics of participants were analyzed and summarized using descriptive statistics: frequencies and percentages. Continuous data were summarized and recorded as means and standard deviations. Pearson correlation analysis was used to assess linearity between birth weight and all other anthropometric parameters under study, and correlation coefficients (r) and confidence intervals were reported. Non-parametric receiver operating characteristic (ROC) curve analysis was carried out to calculate 95% confidence intervals of the areas under the curve (AUC). Finally, the predictive performances of the cut-off points were calculated. We used a paired t-test to compare the accuracy of anthropometric techniques between midwives and community health workers. The mean, standard deviation, mean difference, chi square, and p values at the 95% confidence interval were then determined and reported.
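The cut-off evaluation described above can be sketched as follows in Python; the measurements are invented placeholders, and taking ≥2500 g as the reference for normal birth weight is our assumption for the illustration.

import numpy as np
from sklearn.metrics import roc_auc_score

chest_cm = np.array([31.2, 29.8, 32.0, 30.5, 28.9, 33.1, 30.9, 31.0])
birth_weight = np.array([3200, 2300, 3400, 2900, 2100, 3600, 3000, 2400])

normal = birth_weight >= 2500   # reference standard (assumed threshold)
test_pos = chest_cm >= 30.9     # candidate anthropometric cut-off

tp = np.sum(test_pos & normal)
fp = np.sum(test_pos & ~normal)
fn = np.sum(~test_pos & normal)
tn = np.sum(~test_pos & ~normal)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
dor = lr_pos / lr_neg                      # diagnostic odds ratio
auc = roc_auc_score(normal, chest_cm)      # threshold-free AUC

print(sensitivity, specificity, lr_pos, lr_neg, dor, auc)

Repeating this over a grid of cut-offs and keeping the value with the highest sum of sensitivity and specificity reproduces the optimum cut-off rule used in this study.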
Table 1 shows that of the 503 neonates, the majority, 388 (70.2%), were from parents residing in Mbarara district; 464 (9%) were Banyankole; 320 (57.9%) resided in a rural setting; and 294 (53.2%) were males, with a mean gestation age of 38.5 weeks (SD = 1.0) and a birth weight of 3.2 kg (SD = 0.5). Table 2 shows the zero-order correlations between birth weight and the other anthropometric parameters under study. Chest circumference and calf circumference showed the highest correlations with birth weight for midwives and CHWs, respectively. ROC analysis was conducted to find the AUC and DOR. This was done to determine the overall accuracy, sensitivity and specificity at selected cut-off points and to identify the best surrogate anthropometric measurement. Sensitivity and specificity for each anthropometric parameter were calculated for a range of measures, and the optimum cut-off was the parameter value with the highest sum of specificity and sensitivity. The positive likelihood ratio (+LR), negative likelihood ratio (−LR) and diagnostic odds ratio were determined at each cut-off point. This was done to test the effectiveness of each diagnostic parameter.

Results

The anthropometric parameter, at a selected cut-off point, with the highest AUC and diagnostic odds ratio was considered to be the most reliable anthropometric parameter for estimating birth weight. Table 2 shows the AUC and the diagnostic odds ratios of each anthropometric parameter included in the study. The diagnostic odds ratio measured the effectiveness of a diagnostic test, since it is the ratio of the odds of the test being accurate if the parameter estimates birth weight relative to the odds of the test being accurate if the parameter does not estimate birth weight. Chest circumference showed the highest diagnostic odds ratio value (33.57), while foot length showed the lowest value (6.65), as shown in Table 2 above. Among the measurements taken by the CHWs, MUAC (AUC = 0.734) and chest circumference (AUC = 0.713) showed the highest AUC and DOR, respectively. However, chest circumference showed the highest AUC (0.89) and foot length the lowest AUC (0.77) (Figure 1). Using a paired t-test, the accuracy of the anthropometric measurements taken by midwives and community health workers was compared. There was no statistically significant difference between midwives and CHWs in the mean differences for chest circumference (mean difference = 0.03 cm, 95% CI: −0.22-0.29, p = 0.7963) or foot length (mean difference = 0.03 cm, 95% CI: −0.22-0.29, p = 0.7963) measurements.
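The midwife-versus-CHW comparison reported above corresponds to a paired test on repeated measurements of the same neonates; a minimal Python sketch with invented numbers:

from scipy.stats import ttest_rel

chest_midwife = [31.0, 30.4, 32.1, 29.8, 31.5, 30.2]
chest_chw = [31.1, 30.2, 32.3, 29.9, 31.4, 30.1]

t_stat, p_value = ttest_rel(chest_midwife, chest_chw)
print(t_stat, p_value)

A non-significant p-value here, as in the results above, indicates no detectable systematic difference between the two cadres' measurements.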
In this very study, foot length had a close area under the curve (ROC = 0.9) and was recommended as the serrogate measure for birthweight of neonates since it can be measured with minimal disturbance to the neonate compared to chest circumference contrary to the recommendations of our study. In addition, our study established that at midwives measure chest circumference > = 30.9 cm in normal birth weight neonates at 98.8% accuracy compared to 96.6% accuracy when community health workers measured the chest circumference in the same neonates. Though the difference in measurement efficiency between midwives and community health workers is very small, this can be explained by the different training levels and experience in neonatal handling between the two cadres. Since the difference is negligible, with consistency in practice and hands on training, community health workers can reliably use chest circumference values to estimate birth weight, and identify low birth weight neonates for referral to the nearest health center (Waiswa et al., 2015b). Similarly, after receiving minimum training, community health workers in Ethiopia were able to measure weight, height and mid-upper arm circumference with almost same accuracy like that of anthropometrists (Ayele et al., 2012). Chest circumference can be used as surrogate measure for birth weight where there are no weighing scales, or complements the use of weighing scales. Community health workers can use chest circumference to identify low birth weight neonates for referral. The community health workers package does not include taking anthropometric measurements in the newborns to detect low birth weight. From this study incorporating anthropometric measurements in the community workers package will offer a valuable way of identifying low birth weight neonates at community level and consequently early and timely referral (Waiswa et al., 2015a). Limitations In this study, the Community Health Workers collected data in a hospital environment contrary to their usual community work environmentt that could have affected their work confidence and hence that slight non significant difference in accurancy compared to the midwives. Conclusions Chest circumference taken within 24 hours after birth is a surrogate measure of birthweight Community Health workers can measure chest circumference with almost the same accuracy like midwives. Although this was a hospital based study, we involved community health workers in data collection and thus the developed anthropometric tool can be used effectively in the community setting in south western Uganda. The tool however, can be validated for use in other community settings because anthropmetric values are population specific. ROC curve for anthropometric parameters of Neonates at Mbarara Regional Referral Hospital This graphical plot illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.
A new species of Myiocephalus Marshall (Hymenoptera, Braconidae, Euphorinae) from China

Abstract

A new species of the genus Myiocephalus Marshall, 1898, M. cracentis Li, sp. nov. from the Palaearctic (China, Ningxia, Hubei), is described and illustrated. A key to known species of Myiocephalus is provided. Myiocephalus boops (Wesmael, 1835) is a new record for Jilin province (NE China).

Introduction

Euphorinae (Hymenoptera, Braconidae) is a large subfamily of endoparasitoid wasps with more than 1,270 described species worldwide (Yu et al. 2016). Their morphology varies greatly, which is quite unusual for a single subfamily. This highly polymorphic subfamily is characterized by only one derived character state: the postero-distally open first subdiscal cell of the fore wing, because fore wing vein CU1b is absent (Shaw 1985; Tobias 1986). Other general morphological characters of Euphorinae are the often, but not always, bent fore wing vein SR1+3-SR, and a more or less specialized scape, eyes, clypeus, mandible, fore leg, first metasomal tergite, and ovipositor. Stigenberg et al. (2015) divided this subfamily into 14 tribes and 52 genera. Chen and van Achterberg (2019) recognized two additional tribes, Eadyini and Proclithrophorini, of which the latter had been included in the Townesilitini by Stigenberg et al. (2015) on the basis of their concatenated molecular data (18S, 28S, CAD, and COI). However, the morphology of Proclithrophorini conflicts with this synonymy, and its position remains unresolved. Myiocephalus is the only genus of the Myiocephalini Chen & van Achterberg, 1997; it is associated with ant nests of the genus Formica but has never been reared (Donisthorpe 1927). The genus Myiocephalus Marshall (recognized as Loxocephalus Forster) was first placed in its own tribe, as Loxocephalini, by Shaw (1985). Myiocephalus is the sister group of the lineage comprising Cosmophorus + Syntretini, supported by morphological characters: mesonotum shiny and notauli absent; scutellar furrow without cross-carinae; and dorsope absent (Shaw 1985). Stigenberg et al. (2015) supported the Syntretini as the sister tribe to the Myiocephalini on the basis of their concatenated molecular data (18S, 28S, CAD, and COI) and unique morphological characters (the bulging eyes and smooth mesosoma). Besides Myiocephalini, there are only two other known tribes of euphorine wasps (Syntretini and Neoneurini) that attack Hymenoptera. Myiocephalini is more closely related to Syntretini, based on both morphological and molecular evidence. The genus Myiocephalus Marshall, although rare in collections, is one of the most distinctive euphorine genera, with its strongly transverse head (in females anteriorly more or less concave), elongate first metasomal tergite with very large laterope, and compressed metasoma. Four species of the genus Myiocephalus are currently known: M. boops (Wesmael, 1835); M. niger Fischer, 1957; M. laticeps (Provancher, 1886); and M. zwakhalsi van Achterberg, 2019 (Tan et al. 2019). The first author examined the collections applying the key of Tan et al. (2019) and discovered a new species, which was then confirmed by the second author and is described below.

Materials and methods

Studied material was selected from the entomological collections of the Beneficial Insects Institute, China (BIIC). The specimens were collected using a sweep net. All specimens studied are deposited in BIIC. Specimens were examined using a Zeiss Stemi 2000 stereomicroscope.
The photographs were taken with a computer-connected Leica DFC450 digital camera mounted on a Leica M205C stereo microscope. All images were further processed with minor adjustments in Adobe Photoshop CC, such as image cropping and rotation, adjustment of contrast and brightness levels, color saturation, and background enhancement. The terminology used for measurements and descriptions of morphological characters follows van Achterberg (1988, 1993). The veins of the fore wing are illustrated in Figure 20.

Diagnosis. Laterope large, deep and submedially situated in slender first tergite; head in dorsal view strongly transverse and usually slightly concave anteriorly; eyes enlarged and protruding; clypeus rather narrow; scapus slightly or not enlarged and subequal to or shorter than third antennal segment; maxillary palpi with five segments, labial palpi with three segments; vein 1-SR+M of fore wing absent; vein 1-R1 longer than pterostigma; vein M+CU1 largely unsclerotized; middle and hind legs elongated; metasoma of ♀ strongly compressed, with fifth sternite of ♀ finger-like, protruding posteriorly; hypopygium of ♀ with long setae apically or hypopygium medially membranous (Tan et al. 2019).

Key to species:
– Scapus of ♂ 1.0 × as long as wide (Fig. 17); minimum width of face 2.0 × as long as height (Fig. 17); length of the malar space of ♂ 1.1 × basal width of the mandible (Fig. 17) …
– Scapus of ♂ 1.3 × as long as wide (Fig. 18); minimum width of face 1.6 × as long as height (Fig. 18); length of the malar space of ♂ 1.2 × basal width of the mandible (Fig. 18) …
– Area near occipital carina dark brown and occiput dorsally brown (Fig. 19); vein cu-a of fore wing of ♀ as long as 1-CU1 and oblique (Fig. 20); prepectal carina absent medio-ventrally (Fig. 21) …
– … (Fig. 3); vein cu-a of fore wing of ♀ distinctly longer than 1-CU1 and vertical (Fig. 7); prepectal carina present medio-ventrally (Fig. 8); antenna of ♀ with 32 segments (Fig. 6) …

Description. Holotype, ♀, length of fore wing 3.4 mm, and of body 3.7 mm. Head. Antenna with 32 segments and 1.2 × as long as fore wing; third segment 1.1 × as long as fourth segment; third, fourth and penultimate segments 4.6, 3.9 and 2.8 × as long as wide, respectively (Fig. 6); eye 3.4 × as long as temple in dorsal view; temples directly and linearly narrowed behind eyes (Fig. 3); OOL:OD:POL = 8:4:13; vertex and frons largely superficially coriaceous and shiny; in front of anterior ocellus a small convexity (Fig. 3); occipital carina complete and dorsally remaining shortly below upper level of eyes (Fig. 5); minimum width of face 1.9 × as long as height; face mainly very finely and densely punctulate, but latero-ventrally largely smooth, with whitish setae and satin sheen (Fig. 4); clypeus convex medially and with slightly concave and thin ventral lamella (Fig. 4), medially finely rugulose; anterior tentorial pits large (Fig. 4); malar suture deep, narrow and straight; length of malar space equal to basal width of mandible, and malar space in anterior view straight (Fig. 4); mandible slender, strongly twisted, outer side convex and with deep basal depression (Fig. 4), its second tooth similar to first tooth and acute. Legs. Middle and hind legs very slender; tibia and tarsus together ca. 2.4 × longer than femur, tibia ca. 3.7 × longer than coxa; fore leg normal, tibia nearly 3 × as long as coxa; length of femur, tibia and basitarsus of hind leg 7.6, 22.7 and 6.0 × as long as their maximum width; hind tibial spurs 0.2 × as long as basitarsus. Male.
Length of fore wing 3.0 mm, and of body 2.9 mm; antenna with 30 segments; length of malar space 1.8 × basal width of mandible; first tergite smooth and shiny; only sternites of basal half of metasoma folded medially, and tergites three to eight weakly concave posteriorly (Fig. 13). Biology. Unknown. Distribution. China (East Palaearctic). Etymology. Named after the slender pterostigma and marginal cell of the fore wing, long narrow legs, and antennae: "cracentis" is Latin for "slender, graceful".
Screening 2D Materials for Their Nanotoxicity toward Nucleic Acids and Proteins: An In Silico Outlook

Since the discovery of graphene, two-dimensional (2D) materials have been anticipated to demonstrate enormous potential in bionanomedicine. Unfortunately, the majority of 2D materials induce nanotoxicity via disruption of the structure of biomolecules. Consequently, there has been an urge to synthesize and identify biocompatible 2D materials. Before the cytotoxicity of 2D nanomaterials is experimentally tested, computational studies can rapidly screen them. Additionally, computational analyses can provide invaluable insights into molecular-level interactions. Recently, various "in silico" techniques have identified these interactions and helped to develop a comprehensive understanding of the nanotoxicity of 2D materials. In this article, we discuss the key recent advances in the application of computational methods for the screening of 2D materials for their nanotoxicity toward two important categories of abundant biomolecules, namely, nucleic acids and proteins. We believe the present article will help in the development of newer computational protocols for the identification of novel biocompatible materials, thereby paving the way for next-generation biomedical and therapeutic applications based on 2D materials.

INTRODUCTION

Ever since the discovery of graphene, two-dimensional (2D) nanomaterials have revealed a cornucopia of novel materials science and gained enormous research attention owing to their remarkable mechanical stability and unreactive nature, along with fascinating optical, thermal, structural, and electronic properties.1−6 High hopes have been placed on graphene and analogous 2D materials for their outstanding applications in the realms of optoelectronics, sensing and separation technologies, electrochemistry, and energy storage.6−12 The emergence of 2D materials as an excellent choice for interaction with biomolecules arose from their flat surfaces, high surface-to-volume ratio, and tunable functionalities, despite the fact that hybrids of inorganic nanomaterials and biomolecules scarcely cohabitate in nature.10 One of the first demonstrations of the biomedical properties of 2D materials, which intrigued materials scientists and biologists alike, was carried out by Dai et al. when they used graphene oxide (GO) as an efficient nanocarrier for cellular imaging and drug delivery.13 GO has been conjugated with folic acid and SO3H groups and then loaded with doxorubicin (DOX) and camptothecin (CPT), the hybrid system showing high cytotoxicity to human breast cancer cells.14 Polyethylenimine (PEI)- and chitosan-functionalized GO have been used as gene delivery vectors via condensation with plasmid DNA and siRNA.15,16 Exploiting the fluorescent behavior of functionalized graphene, such materials have been widely employed as probes for diagnostic imaging.17,18 Pushing forward the biomedical usage, graphene-based materials (GBMs) have been used as scaffolds for tissue engineering, cell culture, and bone regeneration therapy.19,20 Nanopores in graphene and GO have been studied for their widespread applications in the sensing of biomolecules such as nucleotides, amino acids, antibodies, peptides, and adenosine triphosphate (ATP), among others.21,22−26 The first candidate that has been expected to show a promising future in biomedicine is h-BN. It is predicted that h-BN, having strong fluorescent behavior, may have potential usage in the realm of imaging technologies.27
Alternatively, the flat unreactive surface of h-BN has been used for drug loading and delivery.28 Similarly, BP has been explored for the controlled delivery of doxorubicin and platinum-based anticancer agents.29−31 TMDs have seen biomedical applications only recently, especially in the realm of biomolecular sequencing.32 It has been reported that a nanopore in MoS2 could perform "real-time" polynucleotide detection at single-nucleotide resolution, and MoS2 nanoflakes have been used to develop glucose sensors.33 On the other hand, WS2 has been utilized to develop a testing kit to detect blood glucose levels.34 A novel TMD, TiS2, has been predicted to have a prosperous future in theranostics, while TiO2 nanoparticles have been used for noninvasive cancer treatment.35,36 Another class of 2D materials which has recently attracted biomedical research attention is the carbon-doped graphene-like 2D materials, such as h2D-C2N, graphitic carbon nitride (g-C3N4), C3N, and C5N, among others. Initial studies with these materials have suggested that they may display substantial potential in biomedical research; however, investigations related to these materials are still in their infancy.10,23,32,37

Despite providing a bright glimmer of optimism in the field of bio-nanomedicine, many of these 2D materials have been proven to be nanotoxic, causing damage to biomolecules. A decade of in vitro and in vivo nanotoxicology research has demonstrated that nanomaterials often interact with biological systems in chemically and physically distinct ways.8,10,23,38 Chemical reactions that occur during these interactions, such as oxidation, functional group interconversion, selenium/sulfur replacement, ligand modification, generation of reactive oxygen species (ROS), and ion capture, among others, cause irreversible damage to native structures, which leads to serious biological anomalies. However, rather than chemical changes, the principal mode of interaction has been identified as noncovalent interactions.38,39 Graphene and GO have been found to show antibacterial properties, destroying the bacterial cell membrane via lipid extraction.40 GO has been found to exhibit size-dependent toxicity toward red blood cells and mammalian fibroblasts.41 GBMs also rupture the secondary and tertiary structures of proteins and inhibit protein−protein interactions (PPIs).41,42 Most of the toxic properties of graphene are also maintained in the case of h-BN. Cytotoxic effects of h-BN on lung alveoli cells and human embryonic kidney (HEK) cells have been found to be higher compared to carbon nanotubes.43 h-BN nanosheets have been reported to reduce cell survival while also causing severe effects via intracellular ROS production and mitochondrial depolarization.44,45 Similarly, polydispersed BP has been found to show selective antitumor and antimicrobial properties.46,47−50 BP was shown to have intermediate cytotoxicity between GO and TMDs against human lung cancer cells.51
It is evident from a thorough review of the literature that the majority of 2D materials exhibit cytotoxic behavior at the nanoscale, i.e., nanotoxicity. However, the extent of nanotoxicity inflicted by 2D nanomaterials on biomolecules remains elusive, since the number of experimental studies is significantly limited and they do not provide the molecular-level details of the underlying mechanism. As a result, the bio-nano research community is clamoring for a unified understanding of the mechanism that causes nanotoxicity. In addition, it is crucial to assess the biocompatibility of newer 2D materials as well as their impact on various categories of biomolecules, including proteins, nucleic acids, biological receptors, and enzymes, among others. One of the alternatives to experimental verification of the cytotoxicity of 2D materials is in silico techniques, such as density functional theory (DFT) and molecular dynamics (MD) simulations. In recent years, these techniques, especially MD simulations, have been utilized to study the effects of 2D materials on biomolecules, leading to a comprehensive understanding of the induction of nanotoxicity. In this feature article, we describe the important computational protocols applied to infer the detrimental effects of 2D materials on two kinds of biomolecules, namely, nucleic acids and proteins. Any living organism's physiological environment contains substantial concentrations of both of these biomolecules, which are in charge of controlling genetic traits as well as biological and chemical processes; interactions of 2D materials with them become inevitable upon entering the cellular environment, making them soft targets for toxic interactions.

NANOTOXICITY OF 2D MATERIALS TOWARD NUCLEIC ACIDS

The prospect of 2D material-mediated gene transfection requires that the material interact with nucleic acids without disturbing their structure, and a better understanding and visualization of the molecular-level interactions can be achieved by employing computational techniques. Dynamical methods such as classical and ab initio molecular dynamics simulations would certainly be the preferred modus operandi in this regard. However, ab initio molecular dynamics (AIMD) simulations suffer from limitations of system size along with the requirement of massive computational memory and time. Consequently, classical MD simulations appear to be the best alternative for tackling such situations. During the past few years, several simulation strategies have been designed to investigate the nanotoxic effects of state-of-the-art 2D materials on various types of nucleic acids, namely, single-stranded DNA (ssDNA), double-stranded DNA (dsDNA), RNA, and guanine quadruplexes (GQ), among others. These nucleic acids differ from each other in their secondary structures and folding patterns. The usual course of these studies progresses via studying the adsorption of the nucleic acids, followed by tracking the temporal evolution of their structural characteristics, which directly provides evidence for the destacking of nucleobases and the depletion of Watson−Crick (WC) and/or Hoogsteen (HS) H-bonding. The structural analyses are often complemented by deducing the underlying energetics, which is, in particular, an indirect way to shed light on the molecular-level basis for the structural alteration of nucleic acids. During the past decade, a wide range of 2D materials have been subjected to computational screening employing MD simulations to evaluate their toxic effects on the structure of nucleic acids, and the following section addresses key contributions in this realm.
For the reader's convenience, specific nucleic acids have been chosen, followed by a discussion of how each behaves upon interaction with the 2D materials.

Single-Stranded DNA (ssDNA)

ssDNA is the simplest nucleic acid, consisting of a single polynucleotide strand where the nucleobases are slip-stacked on each other, the slippage during their stacking resulting in turns. 52 Naturally, single-stranded DNA can be found in class II viruses such as Parvoviridae, while it can be produced artificially by rapidly cooling a heat-denatured dsDNA, where heating causes the strands to separate and the rapid cooling prevents them from recombining. 53 The sole strand of ssDNA is not noncovalently bonded to any other biomolecular species and is therefore much more susceptible to interaction with foreign substances. One of the first investigations concerning the adsorption of ssDNA on graphene and graphene oxide was performed by Zeng et al. 54,55 They identified the presence of both H-bonds between nucleotide residues and the hydroxyl and epoxy groups of graphene oxide as well as π−π stacking between aromatic nucleobases and aromatic rings of the 2D material, as opposed to only π−π stacking interactions in the case of pristine graphene. dsDNA, on the other hand, was adsorbed much more weakly than ssDNA, presumably due to the compact nature of the double-helical structure, which provides less exposure of the individual nucleobases. 54 Xu et al. thoroughly studied the adsorption of ssDNA on a graphene sheet that had both pristine and oxidized domains (Figure 1(a)). 56 ssDNA was observed to be completely adsorbed (Figure 1(b)) much faster (∼70 ns) on the oxidized section of the 2D material, while adsorption on the unoxidized section was somewhat delayed (∼150 ns), primarily due to the dynamic cooperation of H-bonds with π−π stacking interactions in the case of the oxidized domain of graphene (Figure 1(c)). This effect was also manifested through the DNA−material interaction energy which, in the case of oxidized graphene, was found to be much higher compared to the unoxidized section (Figure 1(c)). However, the DNA−graphene π-stacking interactions were formed at the expense of inter-residue π−π stacking interactions between the nucleobases, thereby disrupting the relative spatial native arrangement of the nucleobases in the ssDNA (Figure 1(d)). Ranganathan et al. investigated the adsorption of polynucleotide ssDNA of various lengths using experiment-calibrated classical MD force-field parameters. 57
They revealed that the shorter the ssDNA, the greater the extent of structural disruption from the native state, since the number of nucleobases is not sufficient to maintain their self-stacking. It is worthwhile to mention that in MD simulations interaction energies are calculated by summing the total electrostatic and van der Waals (vdW) interaction energies between two molecular entities. The interaction energy is a dynamic quantity: it registers even small changes in the conformations of the molecules adsorbed on each other and also incorporates the screening effect of the solvent. Since graphene and graphene oxide significantly perturb the internal structure of ssDNA, other newly synthesized materials could be used instead of GBMs. However, a direct comparison with graphene was necessary to judge whether the disrupting effects of graphene are prevalent even in other materials. In this regard, the adsorption of a model ssDNA was studied on h2D-C2N, pristine graphene, and h-BN (Figure 2(a,b)). 58 It was observed that for graphene and h-BN, the DNA fluctuates initially for only a few hundred picoseconds, followed by rapid adsorption, while for h2D-C2N the adsorption was significantly delayed. For C2N and graphene, there was a stepwise increase in the number of contacts during adsorption, while in the case of h-BN, ssDNA adsorption was exceedingly fast (Figure 2(e)). Similarly, the interaction energies between ssDNA and the 2D materials showed a rapid decrease during adsorption, the trend being h-BN > graphene ≥ C2N (Figure 2(d)). Decomposition of the interaction energies into van der Waals (vdW) and electrostatic components revealed that, during adsorption on C2N, the increase in interaction energy has nearly equal contributions from both types, while for graphene/h-BN, the vdW interaction was the sole/predominant contributor. For h2D-C2N, 2−3 H-bonds were formed between the ssDNA and the material, and therefore a significant share of the electrostatic interaction energy could be attributed to hydrogen bonding. As discussed earlier, graphene and h-BN did not form any H-bonds due to the absence of any long-range polarity. The initial structure of the ssDNA contained 11 π−π stacking contacts, and during adsorption on graphene and h-BN all of the π-stacking contacts were lost, thereby completely disrupting the native state. On the other hand, even after ∼300 ns of adsorption simulation on C2N, 4−5 inter-residue π-stacking native contacts were maintained (Figure 2(e)), which clearly suggested that C2N is a better candidate for the preservation of ssDNA. −61
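Because such energy decompositions recur throughout this article, it is worth recalling how they are obtained in practice. Below is a minimal, illustrative sketch (not the production workflow of any cited study) of a pairwise Coulomb plus Lennard-Jones interaction energy between two atom groups in Python/NumPy; it assumes Lorentz−Berthelot mixing rules and deliberately ignores cutoffs, exclusions, and long-range (PME) corrections that a real MD engine would apply:

import numpy as np

KE = 138.935458  # Coulomb constant in kJ mol^-1 nm e^-2 (GROMACS-style units)

def interaction_energy(pos_a, q_a, sig_a, eps_a, pos_b, q_b, sig_b, eps_b):
    # pos_*: (N, 3) coordinates in nm; q_*: partial charges in e;
    # sig_*, eps_*: per-atom LJ parameters in nm and kJ/mol
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    elec = KE * np.sum(np.outer(q_a, q_b) / d)          # Coulomb component
    sig = 0.5 * (sig_a[:, None] + sig_b[None, :])       # Lorentz rule
    eps = np.sqrt(np.outer(eps_a, eps_b))               # Berthelot rule
    sr6 = (sig / d) ** 6
    vdw = np.sum(4.0 * eps * (sr6 ** 2 - sr6))          # Lennard-Jones component
    return elec, vdw  # the two components discussed in the text

In the published studies, such decompositions are typically obtained by rerunning trajectories through the MD engine's energy-group machinery; the sketch above merely illustrates the bookkeeping behind the reported electrostatic and vdW components.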
We studied the adsorption of four different ssDNA molecules on C2N, each consisting of 12 nucleotides, corresponding to poly A, G, C, and T, respectively. 62 Both parallel and perpendicular initial orientations of the ssDNA on C2N were considered; however, both produced a similar perpendicular adsorbed structure, suggesting that the outcome of adsorption is independent of the initial orientation. The structure of C2N consists of a 2D array of aromatic rings, albeit intervened by pores (Figure 3(a)). In addition, each of these pores is surrounded by six electronegative nitrogen atoms, and therefore the sugar−phosphate backbone and the nucleobases of ssDNA can interact with C2N through hydrogen-bonding interactions. Three different stages of adsorption were identified, namely, anchoring, adsorption, and reorganization; in fact, these three stages exist for the adsorption of ssDNA on any 2D material surface. After a few nanoseconds of the adsorption simulations (Figure 3(b)), the ssDNA comes close to C2N and makes the first contact through π−π stacking of a terminal nucleobase (Figure 3(c)). This residue acts as an "anchor" to the surface and remains conformationally locked throughout the simulation. After that, different segments of the ssDNA were sequentially adsorbed (Figure 3(d)), reflected in a sharp increase in the number of contacts, contact surface area, interaction energy, and number of surface−DNA H-bonds (Figure 3(e)), while the intra-ssDNA native π−π stacking contacts significantly decreased. However, after the completion of adsorption, the ssDNA underwent reorganization, characterized by a decrease in the number of H-bonds with a simultaneous increase in intra-ssDNA π−π stacking interactions. This reorganization occurred to maintain the intra-ssDNA bonding while keeping the adsorption energy on the surface nearly intact, thereby attaining a configuration in phase space where both the inter-residue stacking and the interaction with C2N are optimal (Figure 3(e)). The same situation was observed for all four polynucleotides; however, they differed in the extent of reorganization. Even when the nucleobases reorganized to form their native stacks, the interaction between the nucleobases and the 2D material could be preserved by hydrogen bonds with the nitrogen atoms surrounding the pores. This observation was adequately supported by DFT calculations, which showed that the most stable adsorbed geometries of all four nucleobases were perpendicular conformations with respect to the material plane (Figure 3(f)). To include temperature effects, the DFT-optimized structures were further subjected to AIMD simulations, producing a somewhat tilted-perpendicular geometry of the nucleobases, with a subtle trade-off between π−π stacking with the pyrazine rings of C2N and H-bonding with the pore nitrogen atoms (Figure 3(g)). The adsorbed state was relatively elongated compared to that found in water, characterized by a higher mean radius of gyration (⟨Rg⟩) in the adsorbed state. This was attributed to the presence of the surface, which interacts with the ssDNA, providing a template for adsorption and preventing hydrophobic compaction. For poly A and poly G, the mean number of stacking contacts in the adsorbed state was greater than in the free state, while the situation for poly C and poly T was exactly the opposite. Adenine (A) and guanine (G), consisting of two aromatic rings, undergo a greater extent of reorganization owing to their enhanced capability for π−π stacking, while cytosine (C) and thymine (T), having only one aromatic ring, show a smaller inclination toward the same. 63
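Adsorption-progress metrics such as the contact surface area quoted above can be extracted from a trajectory with standard tools. A rough sketch using MDTraj is given below; the file names and the residue name "C2N" for the sheet atoms are placeholders, the trajectory is assumed to have water and ions stripped, and the approach assumes the topology carries element information so that Shrake−Rupley radii are available:

import mdtraj as md

traj = md.load("adsorption.xtc", top="system.pdb")   # placeholder file names
dna_idx = traj.topology.select("not resname C2N")    # hypothetical residue naming
sheet_idx = traj.topology.select("resname C2N")

sasa_all = md.shrake_rupley(traj).sum(axis=1)                       # nm^2 per frame
sasa_dna = md.shrake_rupley(traj.atom_slice(dna_idx)).sum(axis=1)
sasa_sheet = md.shrake_rupley(traj.atom_slice(sheet_idx)).sum(axis=1)

# buried (contact) surface area per frame: rises sharply during adsorption
contact_area = 0.5 * (sasa_dna + sasa_sheet - sasa_all)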
As a whole, the native structure of the ssDNA was not much perturbed, and most of the native stacking contacts remained intact. The non-nanotoxicity of C2N toward ssDNA was further demonstrated by considering the possibility of hybridizing two complementary ssDNA molecules, poly-A and poly-T, placed antiparallel with respect to each other on the C2N surface (Figure 3(h)). Within only a few nanoseconds, the ssDNA strands formed the first H-bond between them, followed by a rapid increase in the number of contacts and interstrand H-bonds, resulting in Watson−Crick H-bonding between complementary sets of nucleobases (Figure 3(h)). However, even after many attempts on graphene and h-BN, no such event was observed, owing to the enhanced stacking tendency of the aromatic rings of these materials with the nucleobases, in conjunction with the unavailability of alternative modes of binding and the disruption of the native arrangement of the nucleotides.

A nitrogen-doped 2D GBM similar to C2N that has gained immense attention in recent years is graphitic carbon nitride (g-C3N4). Inspired by the success of C2N in adsorbing nucleic acids, we performed adsorption simulations with g-C3N4 and ssDNA. 64 It was revealed that ssDNA adsorbed on g-C3N4 loses most of its primary stacking contacts, unlike on C2N, and most of the nucleobases form π−π stacking contacts with the material. However, the structural deviation from the native state was smaller than that observed in the case of graphene and h-BN, while being larger than for C2N. Evaluation of the interaction energies suggested that for both C2N and g-C3N4, electrostatic and van der Waals interaction energies build up the total interaction energy; however, the contribution from electrostatics was predominant for C2N, while the van der Waals interaction energy predominated for g-C3N4. It is worthwhile to mention that classical MD simulations recognize π−π stacking interactions in terms of van der Waals (vdW) energies only; therefore, whenever π−π stacking interactions are mentioned, the reader is requested to recall the origin of such interactions. Practically, π−π stacking refers to a particular geometry of the interaction between molecular units through vdW forces. When a nucleic acid molecule interacts with a polar 2D material, both electrostatic and vdW interaction energies build up. If the long-range electrostatic interactions combined with the H-bonding energies dominate over the vdW interactions (e.g., on C2N), the nucleic acid molecule can reorganize itself and regain the initially lost native π−π stacking contacts while maintaining the overall adsorption energy; in the opposite scenario (e.g., on g-C3N4), desorption of nucleobases from the material leads to a significant loss in interaction energy, which in turn reduces the probability of nucleic acid reorganization. 64 Nevertheless, the relative magnitudes of the vdW and electrostatic energies depend on the nature of both the 2D material and the nucleic acid, and therefore different nucleic acids of various polarities can lead to different outcomes on a specific 2D material. For nonpolar graphene and locally polar h-BN, the contribution from electrostatic interactions was either absent or negligible, and therefore maximum distortion of the nucleic acids was observed on these materials.
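Since π−π stacking here denotes a geometric arrangement rather than a separate force-field term, contact counting is usually done with distance and angle criteria on the aromatic rings. A minimal sketch follows; the cutoffs are typical literature choices, not those of a specific cited study:

import numpy as np

def ring_normal(ring):
    # ring: (n_atoms, 3) coordinates of one aromatic ring
    centered = ring - ring.mean(axis=0)
    # best-fit plane normal = singular vector of the smallest singular value
    return np.linalg.svd(centered)[2][-1]

def is_stacked(ring1, ring2, d_max=0.45, theta_max=30.0):
    # stacked if centroid distance (nm) and normal-normal angle (deg) are small
    if np.linalg.norm(ring1.mean(axis=0) - ring2.mean(axis=0)) > d_max:
        return False
    c = abs(np.dot(ring_normal(ring1), ring_normal(ring2)))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0))) <= theta_max

Summing is_stacked over all nucleobase ring pairs in each frame yields time traces of the kind shown in Figure 2(e).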
Double-Stranded DNA (dsDNA)

dsDNA molecules are ubiquitous in cellular environments, and it is expected that nanomaterials entering a physiological environment would interact with them. 7,8,23−68 From a molecular point of view, carbon nanotubes and graphene share fundamentally similar structures, both consisting of uncharged aromatic rings; however, the shapes of these low-dimensional materials are strikingly different, which may affect the dynamics of the adsorbed nucleic acids. Graphene and other 2D materials are essentially flat, while nanotubes have a barrel-shaped structure and possess a certain curvature. Johnson et al. pointed out that single-walled carbon nanotubes (SWCNTs) induce DNA molecules to undergo a curvature-induced spontaneous conformational change that enables the hybrid to self-assemble via π−π stacking interactions between the nucleobases and the SWCNT outer surface. 65 DNAs have been observed to spontaneously wrap around the SWCNT within a few nanoseconds, with the native spatial arrangements of the nucleobases completely disrupted. However, for graphene and other 2D materials, nucleic acid adsorption is expected to follow a different mechanism. A few of the initial works involving MD simulations of dsDNA with graphene and graphene oxide were performed by Chen et al. and Zeng et al., who confirmed complete adsorption of the dsDNA on both surfaces, the interaction being dominated by vdW interactions in the case of the former while having an additional contribution from H-bonding and electrostatics for the latter. 54,55 Although these studies shed light on the mechanism of adsorption, no comment was made on the time evolution of the structure of the dsDNA molecule. We compared the structural evolution of a model double-helical DNA on graphene, h-BN, and C2N, 58 the latter chosen owing to its remarkable performance toward the structural preservation of ssDNA. 62
Figure 4(a−c) shows the initial and final structures of the dsDNA on these three 2D materials. From Figure 4(a), it is evident that a dsDNA placed parallel to the C2N surface undergoes flipping and becomes perpendicular. On the other hand, on both graphene and h-BN, the initial parallel orientation remained unaffected throughout the simulations. For each of the materials there was a stepwise decrease in interaction energy during adsorption, following the order h-BN ≈ graphene ≫ C2N (Figure 4(d)). Further decomposition of the C2N−dsDNA interaction energy revealed that the contribution of the electrostatic energies was nearly double that of the van der Waals interactions. In the absence of the 2D materials, the dsDNA was stable at 300 K, characterized by an average of 26 WC H-bonds (Figure 4(e)) and nearly 22 stacking contacts (Figure 4(f)). Even after adsorption on C2N, both of these structural quantities remained similar, whereas on graphene and h-BN, continuous unzipping of the two strands was observed via successive cleavage of WC H-bonds and simultaneous loss of intrastrand stacking contacts, beginning with the anchored terminal base pair and propagating inward. Indeed, the driving force for such structural disruption came from the stability gained by the nucleotide residues through π-stacking with the surfaces. Nonetheless, we did not observe complete unzipping of the DNA, and the adsorption was comparatively slower than for ssDNA; therefore, it might be speculated that unzipping would be complete at longer time scales. Such unzipping and structural disruption have also been observed by Hughes et al. for an adenosine-binding DNA aptamer on graphene, while Zhou and co-workers observed structural preservation of a dsDNA on C2N. 69,70 Additionally, C2N was observed to adsorb an 11-mer of a less stable dsDNA containing three pairs of complementary unnatural bases (UBPs), d5SICS and dNaM, without perturbing the intra-DNA interactions and H-bonding, while on both graphene and h-BN, the nucleobases (both natural and unnatural) underwent immediate adsorption on the 2D materials, thereby disrupting the interactions between the two strands. 58,71 In a nutshell, C2N was capable of adsorbing dsDNA through both long-range electrostatics and H-bonding along with vdW interactions, without hampering the native state.

Further, the adsorption of a dsDNA on g-C3N4 was studied in both parallel and perpendicular initial orientations. 64 Contrary to C2N, the dsDNA did not undergo a parallel-to-perpendicular transition (Figure 4(g)), and it could be speculated that the adsorption affinity on g-C3N4 was somewhat higher compared to that on C2N, thereby not allowing the dsDNA to undergo flipping. The structural integrity of the dsDNA was intact in both modes of adsorption, as suggested by the time evolution of the interstrand WC H-bonds (Figure 4(h)) and the intrastrand π−π stacking interactions (Figure 4(i)). It was revealed that the unperturbed structure of the dsDNA on g-C3N4 resulted from its much higher electrostatic interactions compared to vdW interactions (Figure 4(j)), thereby allowing the nucleic acid to interact through long-range interactions, a situation similar to the adsorption of dsDNA on C2N. 64
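The WC H-bond time traces of Figure 4(e,h) rest on a simple geometric criterion. The sketch below counts donor−H···acceptor contacts in a single frame, given matched donor/hydrogen coordinate arrays; the cutoffs shown are common defaults and may be tuned:

import numpy as np

def count_hbonds(donors, hydrogens, acceptors, d_cut=0.35, a_cut=150.0):
    # donors/hydrogens: matched (Nd, 3) arrays in nm; acceptors: (Na, 3)
    n = 0
    for d, h in zip(donors, hydrogens):
        dist = np.linalg.norm(acceptors - d, axis=1)
        for a in acceptors[dist < d_cut]:          # D-A distance criterion
            v1, v2 = d - h, a - h
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > a_cut:
                n += 1                             # D-H-A angle criterion
    return n

Restricting donors and acceptors to the WC edges of complementary bases and evaluating this count per frame reproduces the unzipping curves qualitatively.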
One of the recent additions to the family of carbon nitride 2D materials is polyaniline C3N, which has been envisaged to display biotechnological applications similar to C2N, g-C3N4, and graphene. 72,73 Gu and co-workers studied the interaction between a dsDNA and C3N following a strategy similar to that delineated above. 74 They found that dsDNAs experienced significant unwinding upon adsorption, with a 20−40% loss of the WC H-bonding between the two nucleic acid strands. The unwound nucleobases twisted away from the double helix and adsorbed on the material through vdW interactions. In fact, the magnitude of the vdW interaction was much higher compared to the electrostatic interaction, which left no other option for the dsDNA but to interact via adsorption of nucleobases, thereby partially disrupting the double helix. Evidently, nitrogen-containing graphitic 2D materials differ significantly in their interactions with DNAs, and the possibility of structural disruption depends on the relative magnitudes of the vdW and electrostatic interactions, dominance of the former favoring disruption and dominance of the latter favoring preservation of the structure.

Considering noncarbon-based 2D materials other than h-BN, Liu et al. and Zhou et al. recently studied the interaction of a dsDNA with MoS2 and MoSe2, respectively, both belonging to the transition-metal dichalcogenide (TMDC) family of 2D materials. 75,76 These materials behave similarly toward dsDNA, interacting primarily through vdW interactions via the terminal nucleobases, with electrostatic interactions bestowing additional stabilization on the adsorbed molecules. However, the magnitudes of both of these interactions were substantially low when compared with graphene-based 2D materials. Interestingly, they found that the adsorption of the nucleic acid on these materials followed the removal of water molecules present within the immediate vicinity of the terminal nucleobases; the interaction strength between the dsDNA and the 2D material was therefore clearly higher than the solvation strength with the surrounding water molecules. 12-mer and 8-mer dsDNAs were significantly stable on both materials, while a 6-mer dsDNA was more prone to unwinding, probably because the intra-dsDNA interactions outgrow the interaction energy with the 2D materials as the polynucleotide length increases. Therefore, molybdenum-based dichalcogenides are non-nanotoxic, especially toward DNAs of longer length, and they might be used for biotechnological purposes. The behavior of phosphorene toward dsDNA was found to be similar to that of the TMDCs through combined experimental and simulation studies by Zhou and co-workers. 77 Phosphorene likewise did not perturb the secondary structure and WC H-bonds of dsDNA, and they ascribed the non-nanotoxic effect of phosphorene to the lower interaction energy with the 2D material, which did not surpass the stabilization gained through the intra-DNA interactions. It is worthwhile to mention that π−π stacking interactions are essentially absent in the case of both TMDCs and phosphorene, due to their nonaromatic nature, undulated structures, and inherent inability to form stacking contacts.

Thus far we have discussed the effect of pristine forms of 2D materials on dsDNA; however, under experimental conditions, these materials usually contain wrinkles and defects. Wrinkles are ubiquitous, produced primarily through thermal vibrations, and difficult to avoid during the preparation of 2D materials. 78,79
To deduce the effect of large wrinkles on the nanotoxicity of 2D materials, Zhou and co-workers studied the adsorption of a dsDNA on a graphene sheet containing a large wrinkle (Figure 5(a)). 80 It was found that, whenever the dsDNA was adsorbed on the flat section of graphene, the terminal base pairs were unwound, as also reported by us. 58,62 Contrarily, if the dsDNA was adsorbed on the wrinkled domain of the material, it suffered nearly complete unwinding (Figure 5(b)). After anchoring, the nucleobases gradually became adsorbed on the wrinkled section of graphene (Figure 5(c)) at the expense of H-bonds between the nucleobase pairs (Figure 5(d)), thereby inducing a "zipper-like unfolding". Interestingly, the vdW interaction energy of the dsDNA adsorbed on the wrinkled part of the material was several times higher compared to that of the pristine section (Figure 5(e)), acting as the driving force for the process. Therefore, large wrinkles in materials can indeed induce severe nanotoxicity toward nucleic acids. Similarly, Li et al. studied the adsorption of a dsDNA on defective graphene sheets. 81 Defects on graphene were modeled as a vacancy in the structure, comprising 12 carbon atoms saturated by alternating hydroxyl groups and hydrogen atoms (Figure 5(f)). They observed that dsDNA adsorption on defective graphene started via the adsorption of the terminal base pairs on a pristine section, followed by interaction with the defective section through H-bonding and electrostatic interactions. The polar defective parts of the material behaved as potential traps, immobilizing the dsDNA through interaction with the terminal part of the nucleic acid. Under this condition, the dsDNA underwent an unwinding process similar to that observed in the case of wrinkled graphene (Figure 5(g)). The driving force was identified as vdW interactions; however, anchoring of the DNA to the defects acted as a "pulling force" that held one end of the DNA while the vdW interactions with the pristine sections separated the two strands through H-bond depletion (Figure 5(h)), thereby accelerating the process of unwinding.

Guanine Quadruplexes (GQ)

Guanine quadruplexes adopt secondary and tertiary structures different from those of ssDNA and dsDNA and are stabilized through a delicate balance of hydrophobic interactions, π-stacking, and hydrogen-bonding interactions. 82 The structure of quadruplexes consists of two or more guanine quartets, each of them being a cyclic square-planar arrangement of four guanine molecules stabilized by intermolecular Hoogsteen hydrogen-bonding interactions. 83 Two or more such quartet motifs are stacked upon one another during the formation of quadruplexes and are further stabilized by various other forces, such as dehydration of cations and metal-ion binding. 84 Having such a unique structure and arrangement of nucleobases, quadruplexes are prone to interaction with a variety of ligand molecules and surfaces. Several experimental studies have been performed to design sensors based on graphene, graphene oxide, and other 2D materials to detect GQs; therefore, it is of fundamental interest to investigate whether the structural integrity of these nucleic acids is maintained on 2D materials. In this regard, the interactions of the three-quartet parallel human telomeric GQ molecule with graphene, h-BN, and h2D-C2N (Figure 6(a,b)) were studied. 58
It was found that adsorption on C2N proceeds through a significantly lower number of contacts (Figure 6(c)) and a weaker interaction energy (Figure 6(d)), while on graphene and h-BN the resulting adsorption was significantly strong, the interaction trend following the order h-BN > graphene ≫ C2N (Figure 6(d)). The structural evolution of the GQ showed interesting results. The time evolution of the root-mean-square deviation (RMSD) of the nucleic acid backbone (Figure 6(e)) when adsorbed on C2N was observed to be very similar to that in a blank simulation, as opposed to both graphene and h-BN, where the RMSD increased rapidly and substantially, suggesting a large deviation from the initial structure. This was also supported by the similar time evolution of the intra-GQ H-bonds (Figure 6(f)) in the blank simulations and over C2N, while being greatly reduced on graphene and h-BN. Since H-bonding is one of the key features stabilizing the quartet structures, it could certainly be inferred that the quartet motifs were disrupted due to the adsorption. We identified two stability parameters, namely, the number of intra-GQ π−π stacking contacts (Figure 6(g)) and the number of surviving quartets (NQ) (Figure 6(h)). The disruption of the GQs on graphene and h-BN was observed to follow an "adsorption-induced quartet-by-quartet disruption" mechanism, and Figure 7 provides a schematic representation of the hierarchical steps involved. First, the GQ is adsorbed on the 2D material (graphene or h-BN) via the bottommost quartet motif (Q1) through π−π stacking of the quartet-forming nucleobases with the 2D material. For graphene and h-BN, no stabilizing electrostatic interaction energy is present to allow nucleobase reorganization, and therefore, immediately after adsorption, the quartet structure of Q1 is lost within only 10−20 ns. π-stacking with the more hydrophobic graphene and h-BN thus provides the quartet nucleobases with a stronger stabilization, which in turn acts as the driving force behind the disruption of Q1. In addition, the nucleobases not involved in quartet formation also become adsorbed on the surface, drastically reducing the structural flexibility of the GQ. These two synchronous events run parallel to each other and weaken the GQ. After the disruption of Q1, the other two quartets, Q2 and Q3, are adsorbed one after another and disrupted in a similar fashion. The sequential disruption of the three quartets was also understood in terms of the steady decrease in the number of intraquartet π−π stacking contacts. Furthermore, a similar disruption mechanism for several other GQs on both graphene and h-BN, together with their complete stabilization by h2D-C2N, suggested that the disruption event was not specific to a particular GQ.
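The backbone RMSD used above as a disruption metric is evaluated after optimally superposing each frame onto the reference structure; a compact NumPy implementation of the standard Kabsch procedure is sketched below:

import numpy as np

def kabsch_rmsd(P, Q):
    # P, Q: (N, 3) matched coordinate sets (e.g., backbone atoms)
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))          # guard against reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt         # optimal rotation
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

Applying kabsch_rmsd between each frame and the crystallographic starting structure produces time traces such as those in Figure 6(e).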
In a later study, the effect of adsorption of GQs on g-C3N4 was investigated using a similar simulation protocol. 64 There was an inherent tendency of the GQ to be disrupted upon adsorption on g-C3N4 due to the predominance of the vdW interaction energies over the electrostatic interactions. However, the ultimate outcome depends on the initial configuration and adsorbing geometry of the GQ. If the axis of the quartet channel is nearly perpendicular to the plane of the material, the vdW interaction strength is at its maximum, and the tendency of the nucleobases to form π-stacking with the surface is higher, thereby inflicting significant perturbation on the structure. However, if the quartet channel axis is tilted with respect to the surface, the formation of π-stacking contacts is less probable, thereby preserving the GQ structure. Therefore, the toxic effect of g-C3N4 toward GQs is smaller than that of graphene and h-BN, while being higher than that of h2D-C2N.

RNA

Ribonucleic acid (RNA) molecules have gained tremendous attention for their plausible applications in biomedicine, especially therapeutics. Among these applications, antisense therapy is of fundamental interest. −87 2D nanomaterials have been suggested to be potential carriers of RNA for these purposes. 88,89 However, successful delivery of the RNA to a specific target would certainly involve the preservation of its structural integrity, and therefore, studying the interactions between RNA and 2D materials has become imperative to contemplate the practical applications of these materials in gene transfection. Chakrabarti and co-workers studied the interactions of a double-stranded RNA and one of its analogues, xylonucleic acid (XNA), which contains xylose as the sugar moiety. 90 XNA has a unique structure, adopting a zipper-like double-stranded geometry with a near-orthogonal arrangement of complementary base pairs on opposite strands. 91 It was found that graphene can easily adsorb both XNA and RNA through vdW interactions, displacing the internal arrangements of the nucleobases and cleaving the interstrand H-bonds between complementary base pairs. 90 Clearly, graphene destroys the structure of RNA molecules, behaving as a nanotoxic material. Comparing the interactions of a folded RNA aptamer with graphene oxide as well as h-BN, Mashatooki et al. reported that both graphene oxide and h-BN were able to disrupt the structure of the RNA, h-BN cleaving the H-bonds and disrupting the secondary structure faster than graphene oxide, thereby showing a higher degree of nanotoxicity. 92 As a result, it was indeed necessary to explore other 2D materials that are biocompatible toward RNA molecules. In search of such materials, we studied the adsorption of an siRNA on g-C3N4. 64
siRNA is a class of double-stranded noncoding RNA molecules having 20−25 base pairs and consisting of phosphorylated 5′ and hydroxylated 3′ ends. Two different initial structures were considered, in which the siRNA is perpendicular or parallel to the material plane (Figure 8(a,b)). The initial orientation of the siRNA did not change upon adsorption onto the surface, and neither did the secondary structure; the parallel and perpendicular forms of the adsorbed siRNA were clearly distinguishable from each other through visual analyses. Detailed analyses revealed that the interstrand WC H-bonding (Figure 8(c)), π−π stacking interactions, and other structural parameters remained nearly unaltered when compared with a blank simulation. Therefore, unlike graphene and h-BN, g-C3N4 could readily be used for the adsorption of RNAs. This behavior of g-C3N4 was again explained in terms of the predominance of the electrostatic interactions of the nucleic acid with the material over the vdW interactions (Figure 8(d)), an observation similar to that for dsDNA.

Thermodynamic Considerations

Nanotoxic effects of 2D materials are manifested via chemical modifications of nucleotides and/or through the disruption of the secondary and tertiary structures of the polynucleotides owing to their strong adsorption on the materials. Evaluation of the energetics of such interactions may shed light on the thermodynamic foundation of nanotoxicity. DFT and MD simulations are invaluable computational tools in this regard, since DFT has the capability to track the chemical interactions, while MD simulations can be used to deduce the adsorption free energies of large molecular entities in a solvent environment, taking thermal effects into consideration. In recent years, several research groups have evaluated the binding energies of nucleobases and nucleotides on various 2D materials, and Table 1 lists some of these for graphene, h-BN, h2D-C2N, and MoS2. In DFT, the binding energy is calculated by subtracting the electronic energies of the individual molecular components from the electronic energy of the bound hybrid system. It does not include temperature, solvent, and entropic effects, as opposed to the binding free energies calculated from MD simulations, where all of the above-mentioned effects are taken into consideration. −95 Gowtham et al. found that the binding energies follow the order G > A ≈ T ≈ C under LDA and G > A > T > C when the MP2/6-311++G(d,p) level of theory is applied. Rao and co-workers used AMBER force fields in the vacuum phase to calculate the binding energies, which followed the same order as obtained from their isothermal titration calorimetry (ITC) experiments. 96 Cho and co-workers employed local (LDA), semilocal, and van der Waals energy-corrected periodic density functional theory (PBE+vdW) to show that the magnitudes for the different schemes of calculation follow the order PBE+vdW > LDA > PBE. 61 While the LDA scheme produced a binding energy trend similar to that (G > A ≈ T ≈ C) obtained by Gowtham et al., the PBE scheme without vdW corrections predicted a different trend: G ≈ C > T > A. 59 Inclusion of the vdW corrections predicted the binding energies to follow the same trend as obtained by Rao et al. 96 Interestingly, the trend and magnitudes of the binding energies obtained by the PBE+vdW scheme corroborated those found in ITC and single-solute adsorption isotherm studies. 96,97
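For clarity, the DFT binding energies collected in Table 1 correspond, schematically, to

E_bind = E_(2D+nucleobase) − E_(2D) − E_(nucleobase)

where each term is a ground-state electronic energy and a more negative E_bind indicates stronger binding. When localized basis sets are used (as in the MP2 calculations mentioned above), such values are typically also corrected for basis-set superposition error; whether that correction was applied should be checked against each original reference.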
The same trend in binding energies was found for the interactions between h-BN and the nucleobases, although the magnitudes were higher compared to graphene. 61 An analogous trend in the binding affinities was also found by Johnson et al. for the adsorption of nucleobases on single-walled carbon nanotubes. 98 They decomposed the free energies into enthalpic and entropic parts to demonstrate that the solvent and entropic effects were negligible and the adsorption of the nucleobases was essentially guided by van der Waals interactions. Calculations of the band structures of the nucleobases adsorbed on both graphene and h-BN demonstrated that the occupied molecular states of the nucleobases had no band dispersion, suggesting negligible hybridization with the π-states of graphene and BN. 61 Mulliken charges showed an insignificant charge transfer of less than 0.03e between the nucleobases and the 2D materials. 61 Therefore, the strong adsorption of the nucleobases on graphene and h-BN resulted from physisorption rather than a chemical interaction between them. We calculated the binding energies of the four nucleobases on h2D-C2N, which followed the order C > G > A > T, with magnitudes in the PBE+vdW scheme similar to those observed for graphene and h-BN. 62 Free energy calculations of the nucleobases on h2D-C2N in vacuum also predicted the same trend, albeit with magnitudes reduced from the DFT-calculated values due to temperature and entropic effects. 62,99 Analysis of the density of states (DOS) of the nucleobase−C2N hybrids confirmed minimal electronic perturbation of the nucleobases after adsorption, clearly suggesting the absence of any chemical interaction with the surface. Sadeghi et al. determined the binding energies of the nucleobases on MoS2 employing vdW-corrected periodic DFT, which showed a different trend: G > A > C > T; however, the magnitudes appear to be smaller compared to the above-mentioned 2D materials, presumably due to the absence of π−π stacking interactions between the nucleobases and nonaromatic MoS2. 100

In recent years, 2D materials have been predicted as plausible candidates for nucleic acid delivery vectors. Although experimental methods can determine whether nucleic acid delivery is possible using 2D materials, MD simulations can be used for indirect prediction prior to experiments. Such a strategy has been developed and applied in our recent studies, where adsorption free energies of nucleic acids have been utilized in conjunction with the Smoluchowski equation to calculate the mean first-passage time of the nucleic acids from the 2D materials. 58,64 The calculated magnitudes of the adsorption free energies for various nucleic acids on h2D-C2N, g-C3N4, graphene, and h-BN suggest that the free energy penalty for their desorption is significantly higher than the energy available from thermal motions. 58,62,64
An external stimulus would invariably be required to facilitate desorption, and the spontaneous release of DNAs from an adsorbed state might be considered a rare event. Consequently, the time scales of occurrence of such events might be on the order of seconds, hours, or days, which are far beyond the time scales accessible to current state-of-the-art classical MD simulations. Therefore, we modeled the release of these molecules as diffusion in the presence of a potential W(z) along a reaction coordinate, employing the analytical Smoluchowski equation. To develop the necessary theoretical framework, the diffusion of the nucleic acids was assumed to occur in the presence of a potential W(z) along a one-dimensional reaction coordinate, in particular the distance along the z (vertical) axis. The probability P(z′,t|z,0) of finding the particle at position z′ at time t, given that it was at position z at time t = 0, obeys the backward Smoluchowski equation 58,64

∂P(z′,t|z,0)/∂t = e^{βW(z)} (∂/∂z)[D(z) e^{−βW(z)} (∂/∂z)P(z′,t|z,0)]   (1)

where β = 1/(k_B T), and D(z) and W(z) are the diffusivity and the one-dimensional free energy landscape, respectively. The time taken for the release (τ_release) of the nucleic acids is expressed in terms of the mean first-passage time

τ_release = τ(z,z_out) = ∫_z^{z_out} dz′ [e^{βW(z′)}/D(z′)] ∫_{z_ref}^{z′} dz″ e^{−βW(z″)}   (2)

where τ(z,z_out) is the average time taken for the nucleic acid to travel from an initial position z to the final released state at position z_out, and z_ref is a reflective boundary. The diffusion coefficient D in the above equations is in principle position (z)-dependent; however, here we consider it constant for an adsorbed structure, since small changes in the vertical distance between the 2D material and the nucleic acid do not change the diffusivity appreciably as long as some residues remain adsorbed on the surface. The free energy landscape W(z) was calculated in terms of the one-dimensional potential of mean force, representing the free energy profile for adsorption. The instantaneous position of the nucleic acid lay between the two boundaries, i.e., z_ref ≤ z ≤ z_out. Insertion of the diffusion coefficients and free energy profiles into eq 2 yields the release times for the various nucleic acids on different 2D materials, which are listed in Table 2. From the magnitudes of the release times, it can safely be concluded that the relatively weakly adsorbed nucleic acids on h2D-C2N and g-C3N4 can be released at a higher rate; however, for GQs adsorbed on g-C3N4, a much higher release time is required compared to h2D-C2N. In contrast, for graphene and h-BN, the release times were orders of magnitude higher, making them poor platforms for nucleic acid delivery.
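Given a tabulated PMF and a diffusion coefficient, eq 2 reduces to a double quadrature. The following is a minimal sketch (not the production script of the cited studies) that evaluates it numerically with NumPy, assuming a constant D and a grid running from the reflective boundary z_ref, where the adsorbed state is taken to sit, out to z_out:

import numpy as np

kBT = 2.494  # thermal energy in kJ/mol at ~300 K

def release_time(z, W, D):
    # z: (N,) grid in nm running from z_ref (index 0) to z_out
    # W: (N,) potential of mean force in kJ/mol; D: diffusivity in nm^2/ns
    boltz = np.exp(-W / kBT)
    # inner integral of eq 2: cumulative integral of exp(-beta*W) from z_ref
    inner = np.concatenate(
        ([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * np.diff(z)))
    )
    outer = np.exp(W / kBT) / D * inner      # integrand of the outer integral
    return np.trapz(outer, z)                # tau_release in ns

Because τ_release depends exponentially on the well depth, even modest uncertainties in the PMF translate into order-of-magnitude uncertainties in the release time.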
An important aspect regarding the delivery of nucleic acids from an adsorbed state is the specificity of targeting. It has been reported that biomolecules adsorbed on 2D materials might undergo longitudinal diffusion across the surface, activated by thermal motions. 70 The 2D materials used in real experiments are significantly large in their dimensions, and therefore "crawling" of nucleic acids on the surfaces may lead to a loss of specificity during their delivery. Therefore, it is imperative to understand the driving force and material-selectivity of the lateral movement of DNA molecules. The possibility of lateral translation of adsorbed nucleic acids on 2D materials can be investigated by constructing 2D free energy landscapes for the movement of the biomolecule on the material, which reveal the corrugation encountered during "crawling". We calculated the free energy profiles for the lateral movement of a guanine nucleobase in vacuum on C2N, taking the projections of the center-of-mass (COM) distance along the x and y axes as the reaction coordinates, as shown in Figure 9(a). 58,101 C2N contains periodic pores (designated A, B, and C in Figure 9(a)) surrounded by electronegative nitrogen atoms, where the nucleobases can attach through H-bonding while being simultaneously stabilized by the neighboring aromatic rings through π−π stacking. However, there are no atoms present just above the pore to interact with the nucleobases. Therefore, the nucleobases remain in a low free-energy region beside the pores while encountering a barrier of ∼2 kcal/mol (Figure 9(b)). As the molecule moves away from the pore, the free energy progressively decreases and eventually reaches a plateau of the landscape with a small free energy barrier of <1 kcal/mol, where the molecules are nearly freely diffusing. After that, another barrier separating the three porous regions is encountered, where the molecule interacts only through π−π stacking with the aromatic rings and the H-bonds are lost. Interestingly, upon withdrawal of the partial charges, the H-bonding disappears and the free energy magnitudes are substantially reduced (Figure 9(c)); the only barrier then encountered is the one situated on top of the pore, due to the absence of atoms. Therefore, the porous nature of C2N and the presence of localized polarity induce the formation of periodic potential energy traps, clamping the nucleobases during their adsorption and preventing lateral motion. For graphene (Figure 9(d)), by contrast, the free energy landscape is nearly uniform due to the presence of symmetric hexagonal rings, which interact isotropically with an adsorbed molecule, keeping the encountered barriers within the limit of thermal energy and providing ample opportunity for the molecule to move across the surface. Next, we deduced the free energy landscapes in the presence of an aqueous medium for a mononucleoside, deoxyadenosine (dA) (Figure 9(e−h)). 101 The free energy pattern remained broadly similar; however, the magnitudes of the energies were reduced due to the screening effect of the solvent. The maximum free energy barrier encountered was ∼2.5 kcal/mol, compared to ∼6 kcal/mol for guanine in vacuum, both being significantly higher than the available thermal energy and therefore leading to immobilization of the nucleobases through H-bonds and/or π−π stacking interactions. For graphene and h-BN, the pattern and magnitude of the free energy landscape do not change in the presence of water, and the nucleic acids are nearly free. Therefore, C2N evidently appears to be a good alternative to graphene and h-BN owing to the higher specificity of the location of adsorption of single nucleobases and nucleosides on the former.
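For a well-sampled unbiased trajectory, a free-energy landscape of the type shown in Figure 9 can be estimated by Boltzmann inversion of the position histogram (biased data, e.g., from umbrella sampling, would instead require reweighting such as WHAM). A minimal sketch, with kBT in kcal/mol to match the barriers quoted above:

import numpy as np

kBT = 0.593  # kcal/mol at ~300 K

def free_energy_surface(x, y, bins=60):
    # x, y: sampled COM projections along the two lateral axes
    H, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    with np.errstate(divide="ignore"):
        F = -kBT * np.log(H)                 # F(x, y) = -kBT ln P(x, y)
    F -= F[np.isfinite(F)].min()             # set the global minimum to zero
    return F, xedges, yedges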
NANOTOXICITY OF 2D MATERIALS TOWARD PROTEINS AND PEPTIDES

The biomedical use of graphene and related 2D materials involves inevitable interaction with amino acids, peptides, and proteins upon entering the physiological environment, since proteins are among the most abundant classes of molecules present in the cellular and extracellular environments of living organisms. −108 Therefore, with the development of MD parameters for the simulation of 2D systems, classical MD simulation turns out to be the preferred tool for the computational evaluation of the nanotoxicity of 2D materials toward proteins and peptides. After the biomedical potential of graphene and related 2D materials was realized, significant effort was invested in such computational investigations, which in turn have unravelled a great deal of information underlying the interactions between proteins and 2D materials; some of the important conclusions are discussed in the following section.

Interactions between Proteins and Peptides with 2D Materials

One of the first investigations concerning peptides and solid surfaces was performed by Penna and Biggs. 109,110 They identified a three-phase adsorption mechanism, as illustrated in Figure 10(a): (1) the first stage involves the biased diffusion of the peptide/protein from the solution toward the surface; (2) this is followed by reversible "anchoring" of the biomolecule via hydrophilic groups of the peptide to the second water layer adjacent to the surface; and (3) a "lockdown" phase follows, consisting of the slow and stepwise rearrangement of the peptide initiated by the anchor group piercing into the first water layer, along with other hydrophobic and hydrophilic residues. However, these conclusions were based on a limited number of simulations, and generalization of this mechanism required significant modification. Yu et al. proposed that the biased diffusion phase is accompanied by the building up of long-range electrostatic interactions with the highly oriented layers of water molecules near the surface, in conjunction with van der Waals and hydrophobic interactions with the material. 111 H-bonding with the water molecules has also been predicted to play an important role during the adsorption of proteins and peptides on solid surfaces.
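These phases can often be located with a very simple order parameter, the minimum peptide−surface atom distance over time: large fluctuating values correspond to the diffusive phase, intermittent approaches toward the near-surface water layers to anchoring, and a stable plateau near contact separation (roughly 0.3−0.4 nm) to lockdown. A sketch, assuming precomputed coordinate arrays:

import numpy as np

def min_distance_series(peptide_xyz, surface_xyz):
    # peptide_xyz: (n_frames, Np, 3); surface_xyz: (n_frames, Ns, 3), in nm
    out = np.empty(len(peptide_xyz))
    for i, (p, s) in enumerate(zip(peptide_xyz, surface_xyz)):
        d = np.linalg.norm(p[:, None, :] - s[None, :, :], axis=-1)
        out[i] = d.min()                     # closest peptide-surface approach
    return out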
Simulation studies have found that the fundamental nature of protein adsorption remains the same on 2D materials, the adsorption strength being controlled by the mutual interactions between the amino acid residues and the materials. 112 Guo et al. studied the interactions between graphene and three proteins having different secondary structures, namely, the WW domain (β-sheet), the BBA protein (a mixture of α-helix and β-sheet), and the λ-repressor (α-helix). 113 Being nonpolar, graphene interacted with the proteins only through van der Waals forces, with no electrostatic influence. Upon adsorption, the antiparallel β-sheet secondary structure of the WW domain remained nearly intact, while the α-helical content of the λ-repressor protein underwent significant perturbation and was either lost or converted to the 3₁₀-helix. In the case of the BBA protein, the perturbing effect of the material was significant, destroying most of its α-helical component and significantly affecting the β-sheets. However, the extent of structural perturbation was dependent on the binding orientation of the protein and the residues available for interaction. It was suggested that the rigidity of the β-sheet structures was responsible for the protection from denaturation, while the flexible α-helix was prone to adsorption-induced disruption by graphene. This observation was complemented by the investigation of the adsorption of the villin headpiece (HP35) on graphene by Guo et al. employing MD simulations, where they found nearly complete conversion of the α-helix into the 3₁₀-helix. 114 Free energy calculations predicted that the adsorption of the protein on graphene was highly favorable from an enthalpic point of view while being slightly disfavored entropically, making the overall free energy favorable for adsorption. Since the nature of the amino acid residues controls the extent of adsorption on 2D materials, mutating one or a few amino acids of a protein may change its local conformation. For instance, the immunoglobulin G (IgG) antibody-binding domain of protein G, also known as GB, is easily adsorbed on graphene through vdW and π−π stacking interactions and is structurally denatured by the material. Wei et al. replaced the strongly interacting Gln32 and Asn35 residues of GB with weakly adsorbing alanine residues and demonstrated that the sequence-engineered mutated protein did indeed maintain its secondary structure. 115 Therefore, MD simulations have also allowed researchers to visualize and study the effect of protein mutations on adsorption.
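Secondary-structure changes such as the α-helix to 3₁₀-helix conversion noted above are typically tracked with per-frame DSSP assignments. A short sketch with MDTraj is shown below (file names are placeholders); the full, nonsimplified DSSP alphabet is used so that the α-helix ('H') and the 3₁₀-helix ('G') can be distinguished:

import mdtraj as md

traj = md.load("protein_on_graphene.xtc", top="protein.pdb")  # placeholders
dssp = md.compute_dssp(traj, simplified=False)  # (n_frames, n_residues) codes

alpha_frac = (dssp == "H").mean(axis=1)   # per-frame alpha-helix fraction
g310_frac = (dssp == "G").mean(axis=1)    # per-frame 3-10 helix fraction
sheet_frac = (dssp == "E").mean(axis=1)   # per-frame beta-sheet fraction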
The nanotoxicity of 2D materials toward proteins and peptides, as discussed until now, depends on the nature and population of the amino acid residues: a larger fraction of strongly adsorbing residues enhances the toxic and disruptive effects, while a higher population of weakly interacting amino acids increases the probability of structural preservation. To fundamentally understand the adsorption behavior, Hughes and Walsh calculated the free energies of all 20 natural amino acids on graphene in the presence of an aqueous medium (Table 3), which revealed a few key points: 116 (1) aromatic amino acids (Tyr, Trp, His, and Phe) have high adsorption free energies on graphene; (2) other than the aromatic amino acids, Arg, Asn, Gln, Gly, and Met have significantly high adsorption propensities; (3) the most weakly adsorbing amino acids are Ile, Lys, Pro, Leu, and Val; (4) the size of the side chain has no correlation with the free energies, since the large side chains of Arg, Gln, and Trp have high free energies while Gly adsorbs strongly on graphene in spite of having the smallest size; (5) weakly adsorbing amino acids can be either hydrophobic (Ile, Leu, and Val) or hydrophilic (Lys), and therefore hydrophilicity has no correlation with the free energies either; and (6) amino acids possessing planar groups (phenyl, indole, guanidinium, and amide groups) interact with graphene through π−π stacking interactions and therefore have higher free energies, while those having bulky or strained side chains (Ile, Leu, Val, and Pro) are weakly adsorbing. 116

Apart from graphene, the material of interest would certainly be h-BN, which has already been shown to interact more strongly with nucleic acids than graphene, destroying their secondary structures, and it is expected that h-BN would behave in a similar manner toward proteins. In this regard, Paul and co-workers studied the interaction of a model protein, hen egg white lysozyme (HEWL), with both graphene and h-BN. 117 It was observed that HEWL was spontaneously adsorbed on both materials (Figure 10(b)), the interaction energy with h-BN being much higher (Figure 10(c)), completely corroborating the trend observed for nucleic acids. Decomposition of the interaction strengths of the individual residues with the 2D materials (Figure 10(d)) revealed that most of the interacting residues were common to both materials, with several clusters of residues comprising TYR, ARG, GLY, TRP, and HIS, which have been listed above as strongly interacting residues; each of these amino acid residues had a higher interaction strength with h-BN. It was demonstrated that graphene nearly maintained the secondary structure of the protein (Figure 10(e)), while upon adsorption on h-BN (Figure 10(f)), the α-helix content was reduced and the β-sheet content of HEWL was completely lost, in contrast to the observations for several proteins on graphene. Therefore, the toxicity of h-BN was significantly higher than that of graphene, which was suggested to originate from the higher adsorption free energies of the amino acids.

Due to the success of C2N in adsorbing and delivering nucleic acids, Zhou and co-workers evaluated the adsorption of HP35 on h2D-C2N (Figure 10(g)). 118
It was revealed that the adsorption of HP35 on C2N did not cause significant distortion of the secondary structure of the protein (Figure 10(h)). Additionally, the magnitudes of the protein−material interaction energies were smaller than those observed for the adsorption of prototypical proteins on graphene and h-BN. In contrast to graphene and h-BN, the adsorbed protein was highly restricted to the initial site of adsorption on C2N, due to the periodic potential wells present on the surface. It was concluded that the mild adsorption of the protein on C2N was dominated by long-range electrostatic interactions rather than vdW interactions, and therefore the constituent residues were able to interact with C2N without disrupting their native arrangement. The same group studied the adsorption of the same protein on C3N, another carbon nitride polyaniline 2D material. 119 In complete contrast to the behavior of C2N, HP35 underwent partial denaturation, and the α-helical content of the protein was significantly reduced to random coils. The driving force was found to be predominant vdW interactions compared to electrostatics, a behavior opposite to that observed for the adsorption on C2N, thereby causing the amino acid residues to abandon their secondary structure. Therefore, C2N was predicted to be biocompatible and non-nanotoxic toward proteins, while C3N demonstrated significant denaturing effects. Biocompatibility could not be generalized to all nitrogen-doped graphene-based materials; rather, it depended on the subtle balance between electrostatic and vdW interactions.

Another 2D material, phosphorene, was subjected to evaluation for its nanotoxicity toward proteins by Zhang et al., employing a signaling-protein WW domain. 120 They observed two types of disruption of the protein's native structure, depending on the orientation of adsorption. In the first mechanism, the secondary structure of the protein remained intact; however, the ligand PRM was snatched from the protein, followed by blocking of the active site. Alternatively, in the other mechanism, the β-sheet structure of the WW domain was completely disrupted, but the ligand position was intact. In both cases, the native function of the signaling protein was lost. The debate regarding the nanotoxicity of some of the above materials was settled by Liu et al. by studying the adsorption of several model proteins on graphene, phosphorene, and C2N. 121 They found that graphene disrupted both the α-helical and β-sheet structures, the former being affected to a greater extent. On the other hand, the α-helical structure changed only slightly on the BP surface, while the β-sheet maintained its structural integrity. For the adsorption of the proteins on C2N, all of the secondary structures were preserved completely, C2N thereby behaving as a nontoxic and biocompatible 2D material.
Biocompatibility of 2D Materials toward Cyclotides

The results discussed above clearly suggest that several 2D materials are nanotoxic to proteins, with graphene and h-BN being the most toxic, followed by other materials such as C3N. However, proteins that are extremely resistant to typical denaturants might be adsorbed on nanotoxic 2D materials such as graphene and h-BN without structural disruption, and less nanotoxic 2D materials, such as C2N and C3N4, would certainly have little impact on their structures. Cyclotides, a family of topologically fascinating disulfide-rich plant peptides with 28 to 37 amino acid residues, are among the best-known peptides with high stability. 122,123 The cyclotides share a head-to-tail cyclized peptide backbone and three interlocking disulfide bridges built from six conserved cysteine residues. A cyclic cystine knot (CCK) motif (Figure 11(a)) is formed when one of the disulfide bonds penetrates a macrocycle formed by the other two disulfide bonds, thereby providing a rigid framework and resulting in exceptional thermal stability and resistance to denaturants. 124,125 Cyclotides may be classified into two categories depending on the presence of a cis-proline (Pro) residue: cyclotides that contain this Pro residue are called Mobius cyclotides (Figure 11(a)), while those lacking it are referred to as bracelet cyclotides (Figure 11(a)). Recently, another class of cyclotides, the cyclic knottins, has been identified, which is distinct from the Mobius and bracelet families (Figure 11(a)). 126−130 On the other hand, both graphene and h-BN have also been found to show antibacterial activity, penetrating cell membranes through their sharp edges and disrupting their structure. 40,58,62,131 Therefore, it may be envisaged that bio-nanocomposites of these materials with cyclotides could retain the antifungal, antibacterial, and anticancer effects of the individual components and might even show improved cooperative effectiveness.

To test these hypotheses, we studied the adsorption of three cyclotides belonging to different families (Figure 11(b)), viz. kalata B1 (Mobius), cycloviolacin O1 (bracelet), and MCoTI-II (cyclic knottin), on both graphene and h-BN. 132 Contrary to the results observed for other peptides and proteins, we did not observe any significant structural disruption of the cyclotides, as seen from the final structures of the peptides on both materials (Figure 11(c)). Secondary structural analysis showed that neither the α-helical nor the β-sheet content was reduced following adsorption on the 2D materials, and the time evolution of the secondary structure had patterns similar to those observed in blank simulations (Figure 11(e)). The interaction energies of the cyclotides followed the order MCoTI-II > cycloviolacin O1 > kalata B1, the differences being attributed to the amino acid sequences. Calculation of the residue-wise interaction energies with the materials (Figure 11(d)) revealed significant insights. As observed from Figure 11(d), the strongly interacting amino acid residues are mostly common to both 2D materials. The residues Tyr, Trp, Arg, and Asn had average interaction energies of −15 kcal/mol or stronger, while Cys, Ser, Leu, and Ile had interaction energies between −10 and −15 kcal/mol, corroborating the free energy calculations by Hughes et al. 116
Additionally, Cys residues had relatively weak interaction energies with the 2D materials, and therefore the disulfide linkages remained relatively free to protect the structure. Interestingly, cycloviolacin O1 has only two aromatic residues (Tyr4 and Tyr23), while kalata B1 and MCoTI-II each have only one (Trp19 and Tyr32, respectively). Even though the average interaction energies of the aromatic residues with both 2D materials were stronger than −15 kcal/mol, their overall contribution to the total interaction energy was minimal. Therefore, the low abundance of aromatic residues in the cyclotide sequences contributes to the increased stability and conservation of the secondary structure.

Cyclotides exist in plants mostly in aggregated states, while they operate in both single and aggregated forms. Therefore, we also studied the interaction between aggregated cyclotides and the 2D materials. It was observed that cyclotide aggregates could spontaneously adsorb on the materials without distorting their structures. Adsorption free energies were calculated for single cyclotide molecules from an adsorbed state (ΔG_ad(single)) and from an adsorbed aggregate (ΔG_ad,agg(single)). The free energy profiles were inserted into eq 2, and the release times (Table 4) were calculated accordingly. The calculated release times for single molecules (τ_release(single)) follow the same trend as the adsorption free energies on either 2D material, i.e., kalata B1 < cycloviolacin O1 < MCoTI-II, and the release times from h-BN are 10^3−10^6 times longer than those from graphene. The magnitudes of τ_release(single) clearly suggest that adsorbed cyclotide molecules are not expected to desorb from the 2D materials during interactions with chemical substances upon entering the physiological environment. Consequently, we could safely conclude that the adsorption of the cyclotides on 2D materials is mild, although sufficiently strong to form stable heterostructures. The situation of the aggregates was investigated indirectly: the average free energy of adsorption of a single cyclotide within an aggregate (ΔG_ad,agg(single)) was obtained by dividing the free energy of adsorption of the aggregate (ΔG_ad(agg)) by the number of cyclotides. It was revealed that as more and more cyclotides were added, the average adsorption free energy was reduced, which in turn also reduced the release time of the cyclotides from the adsorbed aggregates (τ_release(agg)) by orders of magnitude. It is conceivable that controlling the size of the aggregates may offer the opportunity to fine-tune the degree of interaction between 2D materials and cyclotides, which would increase the propensity for sustained release of individual peptides from the aggregates, thus giving them an opportunity to interact with other biomolecules. Alternatively, the adsorption affinity of cyclotides can be further adjusted by introducing different functionalizations into the 2D material or by controlling the degree of oxidation of the material.
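The conversion from adsorption free energy to release time is the key quantitative step here. Equation 2 is not reproduced in this section; the sketch below assumes a generic Arrhenius/Kramers-type escape expression, τ_release = τ0 · exp(|ΔG_ad|/kBT), with an arbitrary attempt time τ0, purely to illustrate why a few kcal/mol of extra binding translates into orders-of-magnitude longer residence.

```python
import math

KB_T = 0.593          # kcal/mol at ~298 K
TAU_0_NS = 1e-3       # assumed attempt time (1 ps); a placeholder, not from eq 2

def release_time_ns(dg_ad_kcal: float) -> float:
    """Arrhenius-type escape time (ns) from an adsorption free energy (kcal/mol)."""
    return TAU_0_NS * math.exp(abs(dg_ad_kcal) / KB_T)

# Hypothetical free energies (kcal/mol), for illustration only.
for label, dg in [("weakly bound", -5.0), ("strongly bound", -10.0)]:
    print(f"{label}: dG_ad = {dg} kcal/mol -> tau_release ~ {release_time_ns(dg):.1e} ns")
```

With this form, a 5 kcal/mol difference in ΔG_ad changes the residence time by a factor of exp(5/0.593) ≈ 5 × 10^3, which is why the per-molecule free energy reduction in larger aggregates shortens τ_release(agg) by orders of magnitude.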
Cleavage of Dimeric Proteins Using 2D Materials

Proteins carry out several of their biological functions predominantly through protein−protein interactions (PPIs), such as signal transduction and cell metabolism. 133 In addition, there are a large number of multidomain proteins that act in their assembled state, and therefore a loss of PPI can lead to the disruption of biological functions, resulting in several diseases. 134 Since it has already been established that proteins can strongly adsorb on 2D materials, it is natural to ask whether 2D materials can disrupt PPIs and cleave multidomain protein structures. Zhou and co-workers designed a computational protocol to decipher such interactions and applied it to evaluate the interactions between the dimeric protein HIV-1 integrase and graphene, graphene oxide, and graphyne. 42,135,136 We applied a similar strategy to assess the dimeric protein cleavage propensity of black phosphorene (BP), taking HIV-1 integrase and the λ6−85 repressor protein as substrates. 137 HIV-1 integrase exists as a homodimer of two identical monomers, each containing a five-stranded antiparallel β-barrel and a three-residue 3₁₀-helix. In all simulations, the 2D material was placed close to the dimeric interface (Figure 12(a)), and the simulations were continued to observe whether the material enters the interface and separates the two monomers. It was found that all of the above-mentioned 2D materials were able to cleave the monomeric units of HIV-1 integrase, as observed in the representative snapshots in Figure 12(b). The process of dimer cleavage could be characterized by a decrease in the contact surface area (Figure 12(c)) and the interaction energy (Figure 12(d)) between the two monomers. However, there was no definite time scale for these cleavages, the process being dependent on the spatial orientation of the material relative to the dimeric protein as well as on the probability of the material interacting with the dimer interface. Among the several trajectories simulated for any given material, the dimer separation event usually occurred at different times, and no one-to-one correlation existed between the time scales observed for different materials (Figure 12(b)). A careful inspection of the snapshots in the relevant articles suggests that a perpendicular arrangement of the material with respect to the axis of the protein dimer allows the material to enter the interface, leading to cleavage, while a parallel arrangement induces sidewise interactions between the protein and the material, resulting in regular adsorption without dimer cleavage. The time evolution of the intermonomer contact surface area and interaction energy for such a failed trajectory (trajectory-7) is provided in Figures 12(c) and 12(d), where the magnitudes of these quantities were very similar to those observed in a blank simulation. Also, if the material drifted away from the dimeric interface and abruptly changed its spatial orientation due to solvent-induced fluctuations and/or thermal effects, it could not take part in cleavage on computationally accessible time scales. 135,137
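Dimer cleavage is detected in these studies through the collapse of the intermonomer contact surface area (CSA). One common way to estimate CSA is from the solvent-accessible surface areas (SASA) of the monomers and the complex; the sketch below assumes per-frame SASA values are already available from a standard analysis tool and simply applies the buried-surface relation, flagging frames where the contact drops below a chosen threshold (the threshold value is an arbitrary placeholder).

```python
def contact_surface_area(sasa_a: float, sasa_b: float, sasa_ab: float) -> float:
    """Buried-surface estimate of the intermonomer contact area (A^2):
    half of the SASA lost when monomers A and B form the complex AB."""
    return 0.5 * (sasa_a + sasa_b - sasa_ab)

def cleavage_frame(csa_series, threshold=50.0):
    """Return the first frame index where the CSA falls below `threshold` A^2
    (placeholder value), i.e., where the dimer is considered separated."""
    for frame, csa in enumerate(csa_series):
        if csa < threshold:
            return frame
    return None  # no separation observed within the trajectory

# Example with hypothetical per-frame SASA values (A^2):
frames = [(9000.0, 8800.0, 15000.0), (9100.0, 8900.0, 18000.0)]
csa = [contact_surface_area(a, b, ab) for a, b, ab in frames]
print(csa, "-> separation at frame", cleavage_frame(csa))
```

A failed trajectory, such as the parallel-approach case described above, would show a CSA series that never drops toward zero and `cleavage_frame` returning `None`.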
We also investigated the time evolution of the secondary structures of the HIV-1 integrase and λ6−85 repressor proteins upon adsorption on BP to check whether the material induces nanotoxicity through the alteration of secondary structures. 137 The proteins were strategically chosen, since HIV-1 integrase consists predominantly of β-sheet structure while the λ6−85 repressor is mostly α-helical. In both cases, we did not observe significant changes in the secondary structures (Figure 12(e)), suggesting that adsorption of proteins on BP has little to no perturbing effect on their structures. The cleavage of the dimers could therefore be considered a "clean cut", and it was predicted that BP might be used as a "green" 2D material for cleaving polymeric protein structures without structural disruption. In summary, most 2D materials show toxicity both by separating the monomeric domains of polymeric proteins and by disrupting their secondary structures, and it might not be safe to administer these materials into a cellular environment; BP, in particular, however, might be used in the fabrication of bio-nano devices intended for the cleavage of biomolecules and might even act as a potent therapeutic agent against HIV.

Furthermore, a free energy calculation protocol was devised to investigate the thermodynamic basis for the nanomaterial insertion. 137 Two different free energies were determined, namely, the free energy of binding between the two protein monomers (ΔG_pure) and the adsorption free energy of a monomer on phosphorene (ΔG_phosphorene). The thermodynamic preference for material insertion was estimated in terms of the free energy gain (ΔΔG_pref) when the two monomers dissociate and each of them adsorbs on the surface of the phosphorene sheet, according to the relation ΔΔG_pref = 2ΔG_phosphorene − ΔG_pure; Figures 12(f) and 12(g) depict the ΔG_pure and ΔG_phosphorene values for the two proteins. The calculated magnitudes of ΔΔG_pref were −32.1 and −78.6 kcal/mol for HIV-1 integrase and the λ-repressor, respectively. Therefore, insertion of the phosphorene nanosheet was thermodynamically favorable for both protein dimers, provided the 2D material had a preferable orientation for insertion across the dimeric interface before approaching any other plausible mode of interaction. The thermodynamic preference was explained in terms of the significantly higher magnitude of the adsorption free energy between any of the protein monomers and phosphorene, which largely outweighed the intermonomer binding free energy. The outlined free energy strategy can further be used to determine the possibility of cleavage of multidomain proteins employing any 2D material.
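The insertion criterion is a one-line piece of thermodynamic bookkeeping, shown below. The relation ΔΔG_pref = 2ΔG_phosphorene − ΔG_pure is exactly the one stated in the text; the component free energies used as inputs are hypothetical placeholders, chosen only so that the outputs reproduce the reported ΔΔG_pref values of −32.1 and −78.6 kcal/mol.

```python
def insertion_preference(dg_phosphorene: float, dg_pure: float) -> float:
    """ddG_pref = 2*dG_phosphorene - dG_pure (kcal/mol).
    Negative values mean nanosheet insertion across the dimer
    interface is thermodynamically preferred over the intact dimer."""
    return 2.0 * dg_phosphorene - dg_pure

# Hypothetical component free energies (kcal/mol), for illustration only.
examples = {"dimer A": (-40.0, -47.9), "dimer B": (-60.0, -41.4)}
for name, (dg_phos, dg_pure) in examples.items():
    ddg = insertion_preference(dg_phos, dg_pure)
    verdict = "insertion favorable" if ddg < 0 else "dimer remains intact"
    print(f"{name}: ddG_pref = {ddg:+.1f} kcal/mol -> {verdict}")
```

The sign of ΔΔG_pref only settles the thermodynamics; as the simulations show, a favorable value is realized kinetically only when the sheet approaches the interface in the perpendicular orientation.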
LIMITATIONS

Despite being able to predict the consequences of the interactions between 2D materials and biomolecules such as lipids, proteins, and nucleic acids, computational methods suffer from several limitations. One of the major drawbacks of classical MD simulations is the reliability of the force-field parameters used to model 2D materials in such interactions. 138 Traditionally, MD force-field parameters have been generated for common biomolecules and small organic molecules, and they have undergone decades of refinement through careful recalibration, parametrization, and fine-tuning. Regrettably, the parameters for most 2D materials have only been developed recently, and even though these parameters are capable of predicting major physical properties, such as Young's and bulk moduli, water interactions and contact angles, and interactions with nonaqueous solvents and liquid-phase exfoliation, their accuracy in modeling physiological environments is largely unknown and requires substantiation through experimental verification of the computationally predicted outcomes. −141 In addition, the partial charges on the atoms that make up 2D materials have typically been produced using small molecular models of infinitely large material sheets. These partial charges depend on the level of theory employed and may not be consistent with those used for biomolecular species, which might result in an overestimation or underestimation of the interaction energies between them. 142 Therefore, to obtain close agreement between cytotoxicity experiments and computationally evaluated toxicity results, the force-field parameters must be fine-tuned through continuous calibration against experimental data. A decade earlier, such experimental results were scarce, and as a result, quality control of the MD parameters for 2D materials was problematic. Additionally, it has been an ongoing challenge for computational chemists and materials scientists to provide these parameters on demand, given the recent acceleration in the discovery of novel 2D materials. Fortunately, with the remarkable advancement of nanotechnology, advanced experimental techniques are now being used to produce high-quality data sets, which has made the validation of MD parameters substantially easier, thereby speeding up the development of considerably more accurate force fields. For instance, newly developed polarizable force-field parameters for nanomaterials have markedly enhanced the accuracy of biomolecule−nanomaterial interactions, especially in cases where the presence of the biomolecules affects the electronic distribution of the materials and vice versa. In addition, the bio-nano community has benefited hugely from the development of all-atom force-field libraries such as the CHARMM General Force Field (CGenFF) and the General Amber Force Field (GAFF), since they provide common atom types relevant to several 2D materials. 99,143,144 It is expected that with advances in force-field development strategies and the introduction of machine learning protocols, high-quality MD parameters for low-dimensional nanomaterials will be developed, giving computational materials scientists a substantial boost in screening 2D materials for their toxic effects.
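The sensitivity to partial charges mentioned above is easy to demonstrate numerically: because the Coulomb term is linear in each charge, a modest rescaling of the surface charges rescales the entire electrostatic interaction while leaving the vdW term untouched. The snippet below is a toy illustration with made-up charges and distances, not parameters from any published force field.

```python
COULOMB_K = 332.0636  # kcal*Angstrom/(mol*e^2)

def electrostatic_energy(surface_charges, probe_charge, distances):
    """Sum of pairwise Coulomb terms (kcal/mol) between one probe atom and
    a set of surface atoms at given distances (Angstrom)."""
    return sum(COULOMB_K * q * probe_charge / r
               for q, r in zip(surface_charges, distances))

# Toy surface patch: alternating partial charges, as might be assigned to a
# polar 2D lattice (values and geometry are made up for illustration).
charges = [0.30, -0.30, 0.30, -0.30]
dists = [3.5, 3.6, 4.2, 4.4]

for scale in (0.8, 1.0, 1.2):  # emulate charge sets from different levels of theory
    scaled = [q * scale for q in charges]
    e = electrostatic_energy(scaled, probe_charge=-0.5, distances=dists)
    print(f"charge scale {scale:.1f}: E_elec = {e:+.2f} kcal/mol")
```

A 20% charge shift of this kind can move a material across the vdW/electrostatic balance that, as argued throughout this article, separates disruptive from biocompatible behavior.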
The other drawback that MD faces is the size limitation of the simulation systems, which in turn directly constrains the computationally accessible time scales. Biological systems are large enough to encompass an enormous variety of biomolecules; therefore, mimicking the biological environment inevitably demands large simulation systems, with the number of atoms ranging from thousands to millions. However, the larger the system size, the shorter the accessible simulation time scales. The events related to nanotoxicity occur on time scales of hours to days, and hence computer simulations of nanoseconds or even microseconds may not be adequate to reproduce the desired events. A good example would be the refolding of proteins unfolded by 2D materials, which is not expected to happen within the usually accessible computational time scales. While AIMD and reactive force-field simulations can be used to study the possibility of chemical reactions between 2D materials and biomolecules, such as the generation of reactive oxygen species (ROS), one of the common pathways of nanomaterial-induced toxicity in physiological environments, the fact that only a few hundred picoseconds are computationally accessible for large biological systems has rendered these techniques impractical in this context. Nonetheless, the remarkable development of high-performance supercomputing technologies during the past decade has made it possible to simulate millions of atoms for a few hundred nanoseconds, and much longer time scales are expected to become achievable for both classical MD and AIMD in the near future, which in turn will be indispensable for the high-throughput computational screening of materials for biomedical purposes.

CONCLUSIONS AND OUTLOOK

In summary, this feature article outlines the methodologies adopted for the computational evaluation of the nanotoxicity of 2D nanomaterials toward nucleic acids and proteins, employing DFT and classical as well as ab initio MD techniques. The temporal development and continuous progress of computational protocols have been discussed, taking various 2D materials as platforms for biomolecular interactions. DFT and MD studies suggest that the likelihood of chemical reactions between 2D materials and biomolecules is low and that the interaction between these substances takes place via adsorption through noncovalent forces such as van der Waals interactions, hydrophobic association, and π−π stacking, along with long-range electrostatic interactions. From careful inspection of the nature and mode of such interactions, it may be put forward that a subtle balance between the van der Waals and electrostatic interactions controls the outcome of the interaction event. When vdW interactions are strong and predominate over the electrostatic interactions between 2D materials and biomolecules, they tend to destroy the internal spatial arrangement of the various residues in biomolecules, leading to significant disruption of secondary and tertiary structures, which in turn causes nanotoxicity; graphene, h-BN, and C3N display nanotoxicity via this mechanism. In another scenario, where both the vdW and electrostatic interaction energies are significantly high, the structures of biomolecules are likewise perturbed, and graphene oxide exhibits nanotoxicity in this fashion. On the other hand, when electrostatic interactions predominate along with weak vdW interactions, the biomolecules prefer to interact with the 2D materials from a larger distance without perturbing their internal native disposition, and the material thereby behaves as biocompatible.
h2D-C2N and g-C3N4 belong to this category and are biocompatible toward various classes of biomolecules; indeed, both experimental and computational studies reveal that h2D-C2N is one of the most biocompatible 2D materials found to date. In the last scenario, where both the vdW and electrostatic interactions between the 2D materials and the biomolecules are weak, the adsorption is mild and does not affect the structure or function of the biomolecules; several transition-metal dichalcogenides, such as MoS2 and MoSe2, fall under this category. In addition, surface inhomogeneity, defects, and chemical modifications of 2D nanomaterials can modify the balance between the vdW and electrostatic interactions, thereby altering the nanotoxic effects. Accordingly, surface passivation and chemical functionalization may be utilized to reduce the nanotoxicity of these materials.

In general, evaluating the nanotoxicity of 2D materials is a complex task that can be accomplished using both experimental and computational techniques. In silico methods have the advantage of identifying the molecular interactions leading to nanotoxicity and, therefore, have the potential to complement in vivo and in vitro studies. To date, computational techniques, particularly MD simulations, have proven effective and efficient in predicting such properties. However, high-throughput screening for nanotoxicity will require better-designed simulation protocols and is very important for future research. The use of machine learning methods and machine-learning-based force fields can accomplish this goal as long as the accuracy of the results is not compromised. Another question to be answered is the reconstruction of the secondary and tertiary structures of nucleic acids and proteins after their release from 2D materials; targeted and accelerated MD simulations, along with advanced sampling techniques such as replica-exchange MD, might be used for this purpose, although they would require extensive computational resources. The potential long-term cytotoxicity of 2D nanomaterials to host cells also requires further computational studies of dosage and exposure time. To conclude, although in silico approaches carry the potential to predict and screen 2D materials for their nanotoxicity, the exact physiological effects must be confirmed experimentally prior to their application in bio-nanomedicine. We strongly believe that the computational approaches discussed in this paper will inspire computational chemists, biologists, and materials scientists to develop improved methods and simulation protocols for the in silico assessment of cytotoxicity, thereby paving the way for applications of 2D materials that meet the existing and imminent demands for nontoxic bio-nano devices and therapeutics.

Figure 1. (a) Oxidized (1) and pristine (2) domains of a graphene sheet with an adsorbed ssDNA. (b) Contact surface area (CSA) and (c) interaction energy, number of H-bonds, and number of π−π stacking contacts for the adsorption of the ssDNA on oxidized and unoxidized graphene. (d) Snapshots representing an H-bond formed between oxidized graphene and the phosphate backbone and a π−π stacking contact with a nucleobase. Reproduced with permission from ref 56, copyright 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Figure 2. Snapshots of the (a) initial and (b) final configurations of ssDNA adsorption on C2N, graphene, and h-BN. Time evolution of dynamical quantities characterizing the adsorption and structural evolution of ssDNA on 2D materials: (c) number of contacts, (d) total interaction energy between ssDNA and materials, and (e) intra-ssDNA sequential π−π stacking contacts. Reprinted with permission from ref 58, copyright 2020 Royal Society of Chemistry.

Figure 3. (a) C2N sheet considered for the adsorption of ssDNA. Different stages of adsorption of ssDNA on C2N: (b) initial structure, (c) anchoring, and (d) adsorption. (e) Dynamical quantities related to the adsorption of poly(A)12 on C2N: contact surface area, interaction energy with C2N, number of ssDNA−material H-bonds, and intra-ssDNA π−π stacks. (f) DFT-optimized structures of nucleobases on C2N. (g) Snapshots from ab initio MD simulations of adenine and cytosine on C2N. (h) Initial and final structures from the simulations of two separate ssDNA strands over C2N forming a quasi-double-stranded DNA. Reprinted with permission from ref 62, copyright 2018 American Chemical Society.

Figure 4. Snapshots representing the initial and final structures for dsDNA adsorption on (a) C2N, (b) graphene, and (c) h-BN. Dynamical quantities for the adsorption of dsDNA and its structure: (d) interaction energy between dsDNA and 2D materials, (e) intra-dsDNA H-bonds, and (f) π−π stacking contacts. Reproduced with permission from ref 58, copyright 2020 Royal Society of Chemistry. (g) Snapshots corresponding to the initial and final configurations of perpendicular and parallel modes of dsDNA on g-C3N4 and its structural properties: (h) intra-dsDNA H-bonds and (i) π−π stacking contacts. (j) Probability distribution of the average vdW and electrostatic interaction energies between dsDNA and g-C3N4. Reproduced with permission from ref 64, copyright 2020 Wiley.

Figure 5. (a) Initial and (b) final structure and (c) snapshots representing the mechanism of "zipper-like unwinding" of dsDNA on wrinkled graphene. Time evolution of (d) the ratio of H-bonds of dsDNA during adsorption on wrinkled graphene to those observed in a blank simulation and (e) the dsDNA−wrinkled graphene van der Waals interaction energy. Reprinted with permission from ref 80, copyright 2020 American Chemical Society. (f) Initial structure and (g) snapshots representing the mechanism of unwinding of a dsDNA on defective graphene. (h) Ratio of H-bonds during adsorption on defective graphene. Reprinted with permission from ref 81, copyright 2021 American Chemical Society.

Figure 6. Snapshots representing the (a) initial and (b) final configurations of GQ on C2N, graphene, and h-BN, respectively. Dynamic properties characterizing the adsorption and structural evolution of a GQ on 2D materials: (c) number of contacts and (d) interaction energy between the GQ and the material, (e) backbone RMSD, (f) intra-GQ H-bonds, (g) intra-GQ π−π stacking contacts between successive quartets, and (h) number of surviving quartets. Reproduced with permission from ref 58, copyright 2020 Royal Society of Chemistry.

Figure 7. Schematic representation of the hierarchical steps of the "quartet-by-quartet" disruption of the human telomeric quadruplex on both graphene and h-BN. Reproduced with permission from ref 58, copyright 2020 Royal Society of Chemistry.
Figure 8. Initial and final configurations for the adsorption of siRNA on g-C3N4 with an initially (a) parallel and (b) perpendicular orientation. (c) Time evolution of the intra-RNA hydrogen bonds. (d) Normalized probability distribution of the average vdW and electrostatic interaction energies between the siRNA and g-C3N4. Reproduced with permission from ref 64, copyright 2020 Wiley.

Figure 9. (a) Section of C2N for which the 2D free-energy profiles are constructed. 2D free energy profiles (in kcal/mol) for the in-place displacement of a guanine molecule adsorbed on (b) C2N, (c) C2N without partial charges, and (d) graphene in vacuum. Reprinted with permission from ref 101, copyright 2018 American Chemical Society. Free energy landscapes for the lateral movement of a single mononucleoside deoxyadenosine (dA) on (e) C2N, (f) C2N without partial charges, (g) graphene, and (h) h-BN in an aqueous medium. Reproduced with permission from ref 58, copyright 2020 Royal Society of Chemistry.

Figure 10. (a) Hierarchical stages of the adsorption of peptides on 2D materials. Reprinted with permission from ref 109, copyright 2014 American Chemical Society. (b) Initial and final structures, (c) interaction energy, (d) residue-wise decomposition of interaction energies, and time evolution of the secondary structure of HEWL adsorbed on (e) graphene and (f) h-BN. Reprinted with permission from ref 117, copyright 2023 American Chemical Society. (g) Initial and final configurations of the HP35 protein on h2D-C2N, and (h) time evolution of the secondary structure. Reprinted with permission from ref 118, copyright 2017 Wiley.

Figure 11. (a) Secondary structures of model cyclotides, namely, kalata B1 (Mobius), cycloviolacin O1 (bracelet), and MCoTI-II (cyclic knottin). (b) Amino acid sequences of the cyclotides; cysteine residues are highlighted in yellow, the cyclic knots are represented by red lines, and the connections between residues are shown by blue lines. (c) Snapshots representing the adsorbed structures of the cyclotides on graphene and h-BN. (d) Time evolution of the secondary structure of kalata B1 in the absence of any 2D material and during adsorption on graphene and h-BN. Reprinted with permission from ref 132, copyright 2022, the Royal Society of Chemistry.

Figure 12. (a) Initial structure of the dimer cleavage simulations employing 2D materials; (b) representative snapshots of the interaction of 2D materials with the dimeric protein HIV-1 integrase. Time evolution of the intermonomer (c) contacts and (d) interaction energy for several successful (traj2, traj4, traj6) and one failed (traj8) trajectory in the cleavage simulations of HIV-1 integrase and phosphorene, and (e) secondary structure of HIV-1 integrase after being separated by phosphorene. Adsorption free energies of a protein monomer with another monomer in a dimeric structure (ΔG_pure) and with a hydrogen-terminated phosphorene surface (ΔG_phosphorene) for (f) HIV-1 integrase and (g) λ-repressor. Images reprinted with permission from ref 42, copyright 2014 American Chemical Society; ref 135, copyright 2016 AIP Publishing; ref 136, copyright 2016 American Chemical Society; and ref 137, copyright 2021 American Chemical Society.
Table 1. Binding Electronic Energies of Free Nucleobases on Graphene, h-BN, and h2D-C2N, Calculated Using Density Functional Theory, and Binding Free Energies of Free Nucleobases, Calculated from MD Simulations. (a) Reprinted with permission from ref 59, copyright 2007 The American Physical Society; ref 96, copyright 2009 Wiley; ref 61, copyright 2013 American Chemical Society; ref 62, copyright 2018 American Chemical Society; and ref 100, copyright 2017 Elsevier B.V. Binding energies have been converted from eV per molecule to kcal/mol.

Table 2. Estimated Release Times (τ_release) of Nucleic Acids (in nanoseconds) from an Adsorbed State on the 2D Surfaces. (a) Reprinted with permission from ref 58, copyright 2020 Royal Society of Chemistry, and ref 64, copyright 2020 Wiley.

Table 3. Free Energies of Adsorption for Natural Amino Acids on Graphene. (a) Reprinted with permission from ref 116, copyright 2015 the Royal Society of Chemistry. The free energies have been converted from kJ/mol to kcal/mol to maintain continuity.

Table 4. Free Energies (in kcal/mol) and Release Times for Single Cyclotide Molecules and Their Aggregates on 2D Materials: Average Adsorption Free Energy per Cyclotide in an Aggregate (ΔG_ad,agg(single)), Free Energy of Adsorption for a Single Cyclotide (ΔG_ad(single)), Free Energy Loss per Molecule (ΔG_loss), Release Times for a Single Adsorbed Cyclotide (τ_release(single)), and Release Times for a Single Cyclotide from an Adsorbed Aggregate (τ_release(agg)).

Ayan Datta − School of Chemical Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, West Bengal, India; orcid.org/0000-0001-6723-087X; Email: spad@iacs.res.in

Authors: Titas Kumar Mukhopadhyay − School of Chemical Sciences, Indian Association for the Cultivation of Science, Kolkata
Predicting neural activity of whole body cast shadow through object cast shadow in dynamic environments

Shadows, like all other objects that surround us, are incorporated into the body and extend it, mediating perceptual information. The current study investigates the hypothesis according to which the perception of object shadows would predict the perception of body shadows. Thirty-eight participants (19 males and 19 females), aged 23 years on average, were immersed in a virtual reality environment and instructed to perceive and indicate the coincidence or non coincidence between the movement of a ball shadow and the movement of the ball on the one hand, and between their body shadow and their body position in space on the other. Their brain activity was recorded via a 32-channel EEG system, in which beta (13.5−30 Hz) oscillations were analyzed. A series of Multiple Regression Analyses (MRA) revealed that the beta dynamic oscillation patterns of the bilateral occipito-parieto-frontal pathway associated with the perception of the ball shadow appeared to be a significant predictor of the increase in beta oscillations across frontal areas related to body shadow perception and of the decrease in beta oscillations across frontal areas connected to decision making about the body shadow. Taken together, the findings suggest that inferential thinking ability relative to the body shadow can be reliably predicted from object shadows and that the bilateral beta oscillatory modulations would be indicative of the formation of predictive neural frontal assemblies, which encode and infer the neural representation of the body shadow, that is, a substitute for the physical body.

Introduction

Whole body perception is considered an unconscious inference, although human beings are experts in representing and recognizing their body (Bubic et al., 2010). Humans exhibit an important variety of perceptual and motor behavior that enables them to interact with various objects within different environments. They can easily mentally represent the different parts of their body (Giannopulu and Mizutani, 2021), reconstruct body movements and immobility (Patel et al., 2022), predict the body's future trajectory (Israël et al., 2013), and analyze and understand actions made by and with objects, including their shadows (Pavani and Galfano, 2015). Humans are skilled at analyzing and comprehending the shape, shadow, identity, and movement of objects, whether in real or virtual environments (Bonfiglioli et al., 2004; Lovell et al., 2009). Intriguingly, not only are the body and objects interwoven and incorporated (Yamamoto et al., 2005; Higuchi et al., 2007; Giannopulu et al., 2022a,b) and inherently predictive (Bays et al., 2006), but their shadows are tenuous components of the visual environment (Watanabe, 2018). Shadows prolong objects and the body beyond their physical boundaries (Kuylen et al., 2014). Considering an object's shadow as an extension of the body from which the body shadow per se could be inferred, in the current study the bilateral electrical brain activity of healthy participants was recorded while they were immersed in a virtual environment. They were instructed to judge the coincidence or non coincidence between the movement of a ball and its shadow on the one hand and between their body and its shadow on the other.
Theoretical background on objects and body shadows

Although watching an object's static shadow facilitates object recognition (Elder et al., 2004), a moving object's shadow, although omnipresent, usually appears to be misinterpreted with regard to one's position (Kersten et al., 1997). Most studies have analyzed the role of motionless object shadows presented on a computer screen and have provided consistent and valuable information with regard to the visuospatial relationship between objects' shadows and the objects themselves (Madison et al., 2001). More specifically, they have demonstrated that shadows contribute greatly to the accurate evaluation of object distance (Allen, 1999) but do not affect object recognition (Braje et al., 2000). The identification of geometric and familiar objects was found to be easier when they were presented with congruent rather than incongruent shadows (Castiello, 2001). In essence, shadows act unambiguously in specifying the visuospatial location of the objects casting them, while they participate only ambiguously in recognizing the objects that cast them. Studies have also shown that a moving shadow influences the perceived motion of objects by inducing illusory motion in depth (Kersten et al., 1996). At the visuomotor level, when, for instance, participants were reaching for and grasping a real object whose shadow was visually presented, shadows specifically affected the kinematics and trajectory of movement execution (Bonfiglioli et al., 2004). Such results suggest that in object-oriented actions (i.e., reaching or grasping), shadows may participate in the planning and execution of the action, as they represent supplemental features of the object. With respect to one's own position, it appears that during motor performance shadows are intimately associated with the visuospatial system, because they serve as a spatial scheme of a given environment (Kuylen et al., 2014). Moreover, when objects and shadows move in synchrony, they furnish relevant information about the relationship between objects and shadows and specify the spatial arrangement of objects within an environment, i.e., they provide indications of the relative disposition of objects in space (Mamassian et al., 1998).

Contrary to the assumption that shadows are ignored or represented coarsely by the visual system (Rensink and Cavanagh, 2004), recent findings support the idea that shadows are processed quickly and provide information about the properties of the environment (Lovell et al., 2009). de-Wit et al. (2012) examined the representational status of object shadows projected into the environment and reported that these shadows would emanate from region-based environmental segmentation rather than from representations of the objects per se. Interestingly, the brain refers and infers signals from body parts (e.g., the hand) directly to the object location (Paillard, 1991; Yamamoto et al., 2005), most likely because the connections between the hand and the object, including the object's location, appear to be mentally represented and simulated (Parsons et al., 1995) and thus have neural correlates (Iriki et al., 1996, 2001; Higuchi et al., 2007; Katsuyama et al., 2016).
Objects are incorporated into the body (Giannopulu, 2016; Giannopulu et al., 2022a), are internalized, and likewise extend the body (Maravita and Iriki, 2004; De Preester and Tsakiris, 2009). With regard to one's position, because they are internalized, object shadows are both object and body extensions (Kuylen et al., 2014). As part of the object, object shadows elongate the object beyond its limits, while body shadows expand the body outside its corporal entity (Kuylen et al., 2014; Kodaka and Kanazawa, 2017; Hirakawa et al., 2020). Body shadows project images of the body into the environment, appear to have structural and anatomical similarities with the body parts (Pavani and Galfano, 2007), and are subsequently synchronized with body motion. In both real and virtual environments, studies have suggested that body shadows might enrich the representation of body position in space by strengthening the relation and interaction with objects (Pavani and Castiello, 2004; Russo et al., 2017). It has also been demonstrated that in the absence of any object feedback, visual or tactile, object processing is used as support for body shadows (Kuylen et al., 2014). Nevertheless, it is unclear whether the perception of object shadows is associated with the perception of body shadows. However, if object shadows are extensions of the body and body shadows are also extensions of the body, the perception of object shadows would predict the perception of body shadows. Thus, by incorporating both the object shadow and the body shadow, as well as their relationship, it can be expected that the inference and prediction of the body shadow, which are inherently associated with neural processing, would enable humans to process information and make decisions accurately and quickly. Consequently, we investigated how the brain activity related to object shadows, via inferential and predictive mechanisms, relates to the brain activity related to body shadows.

1.2 Neural support for predicting the relationship between object and body shadows

Poirier and Hardy-Vallee (2005) suggested that the brain emulates the body and simulates external events (e.g., objects) based on sensorimotor representations. Both simulations and emulations behave as internal models and are predictive in essence (Bays et al., 2006). Such models may be used to estimate the current state or envision the future state of the nervous system (Miall and Wolpert, 1996; Wolpert and Flanagan, 2001). Deciphering the neural implementation of body representations, Bubic et al.
(2010) reported that predictive processing is associated with a series of neural networks that includes cortico-cortical (i.e., occipital, parietal, frontal, and prefrontal) and sub-cortical areas (i.e., thalamus, cerebellum, basal ganglia). Even though neocortical beta oscillations (15−29 Hz) are strong indicators of perceptual performance in humans (Sherman et al., 2016), predictive processing modulates sensory cortices (Gómez et al., 2004). Anterior brain areas, such as frontal and prefrontal areas, have been considered bases that specify, prepare, and plan intentions and communicate them to sensory areas (Bubic et al., 2010). Nevertheless, it is conceivable to assume that a more unified neural network, signifying neural synchronization/desynchronization between relevant cortical areas, would be a potential indicator of object−body shadow predictive processing. Notwithstanding, when line drawings of hands, real hand actions, and intransitive movements made by a hand cast shadow were observed, significant desynchronization of mu activity (8−13 Hz) across sensorimotor, frontal, central, and right parietal cortices relative to baseline was revealed, but neither a relationship between them nor a difference in mu activity was reported across cases (Zhu et al., 2013). The observation of shadow animations depicting a figure's motion, that is, the recognition of biological motion, showed specific resonance motor responses in M1 (Alaerts et al., 2009) and in the bilateral MT (Katsuyama et al., 2016) and also involved mirror neuron system activity (Sartori and Castiello, 2013). At a more general functional and behavioral level, patients with neglect resulting from frontal and parietal lesions were able to perceive shadows explicitly or implicitly (Castiello et al., 2003) regardless of their spatial location (left or right of the object). More importantly, in virtual reality environments, body shadows appear to contribute positively to neurocognitive motor improvement after prefrontal, frontal, and parietal brain damage (Russo et al., 2017). However, little is known about the perception of object shadows and their relation to the body shadow in virtual reality environments. The aim of the current study is also to fill this gap. Immersed in a virtual reality environment, healthy participants were instructed to perceive and indicate the coincidence or non coincidence between a mobile ball shadow and a mobile ball on the one side, and the coincidence or non coincidence between their own body shadow and their position in space on the other. Their brain activity was recorded using a 32-channel wireless EEG system. Given that object and body shadows involve left and right hemispheric activity, as already reported, bilateral beta (13.5−30 Hz) oscillation dynamics of frontal, parietal, and occipital brain areas were analyzed, as they are considered predictors of perceptual and motor performance (Sherman et al., 2016). Taking into consideration the arguments and studies analyzed above, it was hypothesized that the beta oscillations associated with the perception of a ball shadow would envision the neural activity of the body shadow. It was expected that the fronto-parieto-occipital beta neural oscillations associated with the perception of an object shadow would predict the neural activity (synchronization vs. desynchronization) of the body shadow.
Participants

The feasibility of the study was assessed via an a priori G*Power 3.1 analysis. The results showed that a minimum of 36 participants was required in order to achieve an adequate statistical power of 0.85 with a medium effect size (d = 0.30) and an alpha level of 0.05. Forty participants were recruited for the study, twenty males and twenty females, with an average age of 23.63 years. Their right-handedness was approximately 100% according to the Edinburgh Handedness Inventory (Oldfield, 1971). All participants were from a middle to high socioeconomic background, and none had specific training experience with virtual reality environments. The participants had normal or corrected-to-normal vision and declared that they were free of vestibular, cardiac, sensorimotor, and/or neurological disorders. All participants received a $50 gift card upon completion of the study. The participants had average somatosensory performance as assessed by the Rivermead Assessment of Somatosensory Performance (RASP) (Winward et al., 2000). The final sample consisted of only 38 participants (1 participant experienced motion sickness, and 1 was excluded for technical reasons). Approval was granted by the University Human Research Ethics Committee (BUHREC 16121), and the study conformed to the Declaration of Helsinki 2.0. Informed consent for study participation was required and obtained from all participants. Anonymity was guaranteed.

Head-mounted device

The HTC Vive used in the study consisted of a 2,160 × 1,200 resolution headset including a front camera and adjustable straps. It also had two sensors with SteamVR Tracking 1.0 technology and two motion-tracked controllers. Using the sensors, a 360-degree virtual environment (3.5 m × 3.5 m) can be created. A computer meeting the minimum requirements to operate the HTC virtual reality system software (Vive, 2011) was used. More particularly, the VR program ran on a DELL Precision 5820 computer with Windows 10, an Intel Core Xeon 4 processor, 32 GB RAM, an HDMI 1.4 port, and a GeForce GTX 970 graphics card.
Virtual reality environment

All participants were immersed in a virtual reality environment, in a cubic room specially designed for the study. The room consisted of three walls (one front and two side walls), a ceiling, and a floor. The room was empty. The two lateral walls, the floor, and the ceiling were colored grey, and the frontal wall was colored pale grey. The colors and light in the room were constant during the whole experiment. Once equipped with the EEG, the head-mounted device (HMD), and the controllers (i.e., left and right key-responses), the participants were fully immersed in the room (Figures 1, 2). Depending on the experimental condition, a ball and its shadow or each participant's body shadow always appeared at the same distance from the participant's spatial position. There were two independent conditions: the ball shadow condition (BaSC) and the body shadow condition (BoSC).

In the BaSC condition (as shown in Figure 1), a spherical ball and its shadow moved either coincidentally or non coincidentally. The incident light on the ball was parallel, and the shadow silhouette was cast on the horizontal projection plane. The ball was opaque, and its shadow was solid. The color of the ball was blue; the color of its shadow was dark grey. The shape, size, and distance between the ball and its shadow were constant across participants in both coincident and non coincident sessions. The speed of the ball and its shadow was slow (translational speed 4 cm/s, angular speed 8.5 deg/s) and constant, and their trajectory was linear along the vertical, sagittal, lateral, or diagonal axis within the horizontal plane. Ball and shadow trajectories were always presented within the horizontal visual field of the participant and arranged according to the following scenarios: (i) the ball first (5 s) and the shadow after (5 s) descended toward the floor or ascended toward the ceiling (vertical axis); (ii) the ball first (5 s) and the shadow after (5 s) moved forward on the anterior-posterior axis across the floor (parallel/sagittal axis); (iii) the ball first (5 s) and the shadow after (5 s) moved on the lateral axis toward the left or the right side (lateral/parallel to the ground); (iv) the ball first (5 s) and the shadow after (5 s) moved diagonally upwards or downwards (diagonal axis). The order of the trajectories of the ball and shadow was randomized across the participants. When the movement between the ball and its shadow was coincident (Figure 1a), both followed the same linear (sagittal, vertical, lateral, or diagonal) trajectory. On the contrary, when the movement between the ball and its shadow was non coincident (Figure 1b), the ball trajectory was the same as above, but the ball shadow followed a different linear trajectory from the ball.

Figure 1. Pictorial illustration of the virtual environment in the ball shadow condition (BaSC). Ball and shadow trajectories were always presented within the horizontal visual field of the participant and arranged according to the following scenarios: (i) the ball first (5 s) and the shadow after (5 s) were descending toward the floor or ascending toward the ceiling (vertical axis); (ii) the ball first (5 s) and the shadow after (5 s) were moving forward on the anterior-posterior axis across the floor (parallel/sagittal axis); (iii) the ball first (5 s) and the shadow after (5 s) were moving on the lateral axis toward the left or toward the right side (lateral/parallel to the ground); (iv) the ball first (5 s) and the shadow after (5 s) were moving diagonally upwards or downwards (diagonal axis). The order of the trajectories of the ball and shadow was randomized across the participants. When the movement between the ball and its shadow was coincident (1a), both described the same linear (sagittal, vertical, lateral, or diagonal) trajectory. When the movement between the ball and its shadow was non coincident (1b), the ball trajectory was the same as above, but the ball shadow followed a different trajectory than the ball. Taking as reference their own position in space, the participants were told to indicate whether the movement of the ball shadow (i.e., with shadow) was coincident or not with the movement of the ball.

Figure 2. Pictorial illustration of the virtual environment in the body shadow condition (BoSC). The shadow of each participant appeared on the frontal plane. Body shadow scenarios were as follows: (i) without body shadow first (5 s) and with body shadow after (5 s), where the shadow was ascending toward the wall (vertical axis); (ii) without shadow first (5 s) and with body shadow after (5 s), where the shadow was moving forward on the anterior-posterior axis across the floor and projected onto the wall (sagittal axis); (iii) without shadow first (5 s) and with body shadow after (5 s), where the shadow was moving on the lateral axis toward the left or right side on the wall (lateral/parallel to the ground). The order of body and shadow was randomized across the participants. When the body and its shadow coincided (2a), the shadow of the participant's body projected onto the frontal plane matched the position of the participant's body. When the body and its shadow were not aligned (2b), the shadow cast on the frontal plane did not correspond to the position of the participant's body. The participants were told that they had to take their own position as reference and decide whether the shadow was coincident or non coincident (i.e., conforming or non conforming) with the position of their body in space.
In the BoSC condition (Figure 2), the shadow of each participant was the same color (dark grey) as previously described and appeared in a coincident or non coincident position with regard to the participant's body position in space. The incident light on the participant's body was parallel; the shadow silhouette was cast on the horizontal projection plane and moved linearly at a constant speed, starting from the ground and projected onto the front wall. Its speed was linear and constant (as in the BaSC condition). The body shadow scenarios were the following: (i) without body shadow first (5 s) and with body shadow after (5 s), where the shadow ascended toward the wall (vertical axis); (ii) without shadow first (5 s) and with body shadow after (5 s), where the shadow moved forward on the anterior-posterior axis across the floor and was projected onto the wall (sagittal axis); (iii) without shadow first (5 s) and with body shadow after (5 s), where the shadow moved on a lateral axis toward the left or right side of the wall (lateral/parallel to the ground). As previously, the order of the body shadow was randomized across the participants. When the body and its shadow were coincident (Figure 2a), the body shadow projected onto the frontal plane (i.e., the front wall of the room) was consistent with the body position of each participant in space. When the relationship between the body and its shadow was non coincident (Figure 2b), the shadow projected onto the frontal plane was inconsistent with the participant's body position in space.

Procedure

The experiment consisted of three phases: baseline, initiation, and experimental phase. All three phases took place in the same dark and quiet experimental room. The inter-phase interval was approximately 3 min.

The baseline consisted of a one-minute EEG recording in the dark while participants were in the experimental room, remaining speechless and motionless.

During the initiation phase, participants were given five trials in each of two different conditions independently: the ball shadow condition (BaSC) and the body shadow condition (BoSC). Half of the participants started with the BaSC and the other half with the BoSC, in randomized order. In the BaSC condition, all participants were placed in the same position, and it was explained that they had to look straight ahead in front of them without moving their head, body, or arms. They were also instructed that a moving ball would appear (i.e., without shadow) and that a ball shadow (i.e., with shadow) would also appear within their horizontal plane. Taking as reference their own position in space, the participants were told to indicate whether the movement of the ball shadow (i.e., with shadow) was coincident or not with the movement of the ball. They were instructed to click on a left key-response in the former case (i.e., coincident) and on a right key-response in the latter (i.e., non coincident) as quickly as possible. The participants were allowed 5 s to make a decision (i.e., decision making). The order of the ball and shadow was randomized across all participants.

In the BoSC condition, the participants were instructed to look straight ahead without moving, as previously (i.e., without shadow).
They were also instructed that the shadow of their body would appear (i.e., with shadow) within their horizontal plane. They were told that they had to take their own position as reference and decide whether the shadow was coincident or non coincident (i.e., conforming or non conforming) with the position of their body in space once the shadow ceased moving on the frontal plane. As in the BaSC condition, the participants were given 5 s to produce a response (i.e., decision making) as quickly as possible. The order of the body shadow was randomized across all participants.

The inter-trial interval was approximately 15 s, and the inter-condition interval was approximately 3 min. According to the criteria, only participants who provided three correct consecutive trials in each condition (BaSC and BoSC) and declared that they did not experience motion sickness were included in the experimental phase.

During the experimental phase, the participants were immersed in the same virtual environment as in the initiation phase. They were placed in the same spatial location as previously and were again instructed to look straight ahead and remain motionless. All participants were given 3 min in the BaSC and 3 min in the BoSC condition. Half of the participants started with the BoSC condition and the other half with the BaSC condition. The inter-condition interval was approximately 3 min.

In the BaSC condition (Figures 1a,b), the sequence of events was exactly the same as in the initiation phase: without shadow (i.e., ball movement for 5 s), with shadow (i.e., ball and shadow in movement for 5 s), and decision making (i.e., 5 s). Once again, the participants were instructed to use their own position as a reference and to indicate whether the movement of the ball shadow (BaSC) was coincident (press left key-response) or non coincident (press right key-response) with the movement of the ball as quickly as possible. As previously, the order of the trajectories of the ball and shadow was randomized across the participants.

Likewise, in the BoSC condition (Figures 2a,b), the sequence of events for the participants was: without shadow (i.e., body shadow to appear 5 s later), with shadow (i.e., 5 s of body shadow in movement), and decision making (i.e., 5 s) once the shadow was immobilized in the frontal space. According to the instructions, the participants had to press the left key-response if the shadow of their body was coincident with the position of their body in space and the right key-response if it was non coincident, as quickly as possible. Once again, and for methodological reasons (i.e., to control for order effects), the order of the body shadow was randomized across all participants.

In both the BaSC and BoSC conditions, the brain activity of the participants was recorded continuously via the 32-channel EEG system. In addition, participants' reaction time (RT) was automatically recorded during the decision making session. The RT corresponded to the duration between the shadow's appearance (ball or body shadow) and the pressing of the key-response and was measured in milliseconds (ms). The total procedure lasted about 45 min on average.
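As a concrete illustration of the counterbalancing and randomization described above, the sketch below builds a randomized trial list for one participant. The condition labels, axes, and coincident/non coincident factor follow the text; the data structure, repeat count, and function names are our own illustrative choices, not the authors' actual experiment code.

```python
import random

AXES_BASC = ["vertical", "sagittal", "lateral", "diagonal"]  # ball shadow scenarios
AXES_BOSC = ["vertical", "sagittal", "lateral"]              # body shadow scenarios

def build_trials(participant_id: int, n_repeats: int = 2):
    """Return a randomized trial list; condition order alternates by
    participant parity, since half started with BaSC and half with BoSC."""
    rng = random.Random(participant_id)  # reproducible per participant
    conditions = ["BaSC", "BoSC"] if participant_id % 2 == 0 else ["BoSC", "BaSC"]
    trials = []
    for cond in conditions:
        axes = AXES_BASC if cond == "BaSC" else AXES_BOSC
        block = [(cond, axis, coincident)
                 for axis in axes for coincident in (True, False)] * n_repeats
        rng.shuffle(block)  # randomize trajectory order within each condition
        trials.extend(block)
    return trials

print(build_trials(participant_id=7)[:4])
```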
EEG signal processing and preprocessing

EEG data were preprocessed and processed with MATLAB (Version R2020b) and the FieldTrip toolbox (Oostenveld et al., 2011). Only the data of the experimental phase were considered, for all trials and participants. Specifically, the 5 s associated with the presence (i.e., with) and the absence (i.e., without) of the shadow, and the decision making session within each condition (i.e., BaSC vs. BoSC), were examined. Each "with," "without" and "decision making" 5 s event was marked at the onset and the end for each trial and participant, with a buffer of 20 ms before and after each 5 s period for each experimental condition (i.e., BaSC and BoSC) and a baseline correction of −10 to −30 ms. The preprocessing and processing script applied a high-pass filter of 1 Hz and a low-pass filter of 40 Hz.

Artifact detection was performed on all marked events. First, all bad channels and high-amplitude EEG artifacts (i.e., above 30 microvolts) were automatically removed from all events. Then all additional artifacts, including electromyogram (EMG), electrooculogram (EOG) and electrocardiogram components, were eliminated manually after visual inspection by experts and corrected via independent component analysis (ICA) methods. To ensure data quality, all data were again visually inspected by two independent experts, and the remaining artifact-contaminated events were manually removed blind to the experimental condition (i.e., BaSC and BoSC) and 5 s events (i.e., "with," "without" and "decision making"). 94% of the trials were preserved, while 3.1% of trials with EOG artifacts and 2.9% of trials with EMG artifacts were eliminated. The processing script performed a beta frequency analysis (13.5-30 Hz) on all filtered 5 s events per experimental condition. The frequency analysis resulted in an average power spectral density, measured in microvolts squared per hertz (µV²/Hz), in frontal, parietal and occipital areas for beta oscillations in both left and right hemispheres (i.e., bilateral beta oscillation dynamics). The 32 electrodes were grouped into 3 regions of interest (ROIs) in order to effectively cover the different cortical regions bilaterally (i.e., both left and right hemispheres). The assignment of electrodes to each ROI was: left and right frontal (F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, C3, Cz, C4), left and right parietal (CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8), and left and right occipital (PO, O1, Oz, O2) areas. Although a large number of electrode groupings are possible, this bilateral design was selected because it covers all the relevant brain areas and is directly associated with the purposes of the present study. The statistical analysis was performed on the marked and cleaned events of the experimental data (i.e., 912 trials for 38 participants).
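As an illustration of the band-power computation described above, the following sketch estimates beta-band (13.5-30 Hz) power spectral density per ROI from a filtered, epoched signal. It uses NumPy/SciPy rather than the authors' MATLAB/FieldTrip pipeline, and the sampling rate is an assumption for the example.

import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz; not stated in the text

ROIS = {
    "frontal":   ["F7","F3","Fz","F4","F8","FC5","FC1","FC2","FC6","C3","Cz","C4"],
    "parietal":  ["CP5","CP1","CP2","CP6","P7","P3","Pz","P4","P8"],
    "occipital": ["PO","O1","Oz","O2"],
}

def beta_power(epoch, channel_names, band=(13.5, 30.0)):
    """Average beta-band PSD (power per Hz) for each ROI over a 5 s epoch.

    epoch: array of shape (n_channels, n_samples), already 1-40 Hz filtered.
    """
    freqs, psd = welch(epoch, fs=FS, nperseg=FS, axis=-1)  # PSD per channel
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out = {}
    for roi, chans in ROIS.items():
        idx = [channel_names.index(c) for c in chans if c in channel_names]
        # Mean over band frequencies and over the ROI's channels.
        out[roi] = psd[idx][:, in_band].mean()
    return out

# Example with synthetic data: 25 channels, one 5 s epoch.
names = sum(ROIS.values(), [])
epoch = np.random.randn(len(names), 5 * FS)
print(beta_power(epoch, names))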
Statistical analysis

All 38 participants successfully gave three correct consecutive trials in each condition (BaSC vs. BoSC), did not declare motion sickness during the initiation phase, and passed to the experimental phase. Only the experimental data were considered for the statistical analysis, which was completed in the SPSS software package, version 26.0.

A MANOVA was run to examine the effects of gender (i.e., female vs. male), presence vs. absence of shadow [i.e., "with" (5 s) vs. "without" (5 s)] and the two dimensions of shadow occurrence [i.e., coincidence (5 s) vs. non coincidence (5 s)] between the shadow and the object on the bilateral beta frontal, parietal and occipital oscillations, for the BaSC condition on the one hand and the BoSC condition on the other, independently. The MANOVA was assessed at a 95% confidence level using Wilks' lambda (λ) with a significance level of α = 0.05.

A one-way ANOVA was performed to analyze the effect of the two dimensions of shadow occurrence (i.e., coincidence, 5 s vs. non coincidence, 5 s) on the reaction time in the BaSC and BoSC conditions, independently.

A series of multiple regression analyses (MRA) was performed to assess whether the bilateral neural activity in shadow perception (i.e., with shadow, 5 s) or decision making (i.e., coincident or non coincident, 5 s) in the body shadow condition (BoSC) could be predicted from the bilateral neural activity in perception (i.e., with shadow, 5 s) or decision making (i.e., coincident or non coincident, 5 s) in the ball shadow condition (BaSC). Prior to performing the above comparisons and multiple regression analyses, several assumptions were verified. First, visual inspection of histograms, Shapiro-Wilk tests (p > 0.05) and boxplots indicated that each variable in each comparison and regression was approximately normally distributed. One extreme outlier was removed, while another was kept as it did not affect the results. Second, the assumptions of normality, linearity, and homoscedasticity of residuals were met, as checked by inspecting the normal probability plot of standardized residuals and the scatterplot of standardized residuals against standardized predicted values for each MRA. Third, the Mahalanobis distance did not exceed the critical χ² for df = 3 (at α = 0.001) of 16.27 for any case in the data file, indicating that multivariate outliers were not of concern. Fourth, relatively high tolerances for each predictor in the regression model indicated that multicollinearity would not interfere with the ability to interpret the outcome of the multiple regression analyses. All statistical analyses were performed at α = 0.05.
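A rough sketch of the regression step and two of the assumption checks (Shapiro-Wilk normality, and multicollinearity via variance inflation factors, which are the reciprocals of the tolerances reported above). This is an illustrative Python/statsmodels analogue, not the authors' SPSS procedure; the variable names and the synthetic data are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import shapiro
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical data: beta power per ROI during ball shadow perception (predictors)
# and frontal beta power during body shadow perception (outcome), one row per participant.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(38, 4)),
                  columns=["ball_frontal", "ball_parietal", "ball_occipital", "body_frontal"])

# Normality check on each variable (Shapiro-Wilk; p > 0.05 suggests normality).
for col in df.columns:
    print(col, shapiro(df[col]).pvalue)

# Multicollinearity: VIF = 1 / tolerance for each predictor.
X = sm.add_constant(df[["ball_frontal", "ball_parietal", "ball_occipital"]])
for i, name in enumerate(X.columns[1:], start=1):
    print(name, "VIF:", variance_inflation_factor(X.values, i))

# The multiple regression itself: does ball-shadow activity predict body-shadow frontal activity?
model = sm.OLS(df["body_frontal"], X).fit()
print(model.summary())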
Results

All the participants correctly assessed the relationship between the ball and its shadow on the one hand, and their body position and body shadow on the other (i.e., overall error rate = 0% in both experimental conditions). No gender effect was observed (Wilks' lambda = 34.51, F = 8.92, df (2,78), p = 0.961).

Overall, the results imply that the perception of the ball or the ball shadow in the BaSC condition on the one hand, and of the body or the body shadow in the BoSC condition on the other, involved similar anterior and posterior brain activity at the beta oscillation level bilaterally. Additionally, in both shadow conditions, that is, within BaSC and within BoSC, the coincidence or non coincidence between the movement of the ball and its shadow, on the one hand, and the body and its shadow, on the other, did not affect neural activity differently.

Reaction time in ms (RTs)

In the BaSC condition, the one-way ANOVA showed no statistically significant difference in reaction time (RTs) between the two dimensions of shadow occurrence (i.e., coincident vs. non coincident) (mean = 1,254 ms, sd = 74 for coincident vs. mean = 1,218 ms, sd = 97 for non coincident; F(2, 36) = 1.13, p = 0.714). Regarding the BoSC condition, the one-way ANOVA did not reveal a significant difference in reaction time (RTs) between trials in which the body and its shadow were coincident and those in which they were non coincident (mean = 993 ms, sd = 69 for coincident and mean = 1,021 ms, sd = 53 for non coincident; F(2, 36) = 0.73, p = 0.599). In short, in both independently evaluated conditions, the time taken to report coincidence or non coincidence (i.e., RTs) between the movement of the ball and its shadow (BaSC) on the one hand, and the position of the body and its shadow (BoSC) on the other, was similar.

Ball shadow as a predictor of body shadow

A series of MRA was run in order to determine whether the bilateral neural activity of the perception and/or decision making of the body shadow (i.e., BoSC condition) could be predicted by the perception and/or decision making of the ball shadow (i.e., BaSC condition). Two MRAs were found to be significant.

In combination, the left and right beta oscillations (13.5-30 Hz) in frontal, parietal and occipital areas associated with the ball shadow (BaSC) predicted body shadow perception (BoSC) (R² = 0.29, adjusted R² = 0.22, F(3, 35) = 4.54, p = 0.009). By Cohen's (1988) conventions, a combined effect of this magnitude can be considered "large" (f² = 0.41). As illustrated in Table 1, beta oscillations across sensorimotor frontal, parietal and occipital areas associated with ball shadow perception were significantly predictive of the future activation of bilateral frontal beta oscillations associated with body shadow perception (B = 1.09, p = 0.024; B = 0.21, p = 0.028; B = 0.91, p = 0.002, respectively). Specifically, the bilateral beta oscillations of frontal, parietal and occipital areas associated with ball shadow perception were significant predictors of the activation of the bilateral frontal beta oscillations corresponding to body shadow perception (Figure 3). However, bilateral frontal, parietal and occipital beta oscillations associated with ball shadow perception did not significantly predict the bilateral occipital (p = 0.369) and parietal (p = 0.467) neural activation associated with body shadow perception.
Moreover, beta oscillations of the bilateral frontal, parietal and occipital neural activity associated with ball shadow perception significantly accounted for 22% of the variability in the bilateral frontal neural activity of the decision making on body shadow (R² = 0.22, adjusted R² = 0.15, F(3, 34) = 3.19, p = 0.036). By Cohen's (1988) conventions, a combined effect of this magnitude can be considered "medium" (f² = 0.28). The beta oscillations (13.5-30 Hz) across sensorimotor frontal, parietal and occipital areas related to ball shadow perception showed negative predictive values for the bilateral beta oscillations of frontal activity associated with the decision making related to the body shadow (B = −1.27, p = 0.022; B = −0.32, p = 0.005; B = −0.80, p = 0.018, respectively) (Table 2). In other words, the decrease of the bilateral beta frontal oscillations related to decision making concerning the body shadow was predicted by a bilateral desynchronization of beta oscillations in frontal, parietal and occipital neural activity associated with ball shadow perception (Figure 4). Nevertheless, the bilateral neural beta oscillations of frontal, parietal and occipital areas associated with the perception of the ball shadow did not significantly predict the bilateral parietal (p = 0.646) and occipital (p = 0.996) activity related to the decision making concerning the body shadow.

In summary, it appears that the synchronization of bilateral beta oscillations in anterior and posterior areas (i.e., frontal, parietal and occipital areas) during ball shadow perception (BaSC condition) predicted increased bilateral frontal activity during body shadow perception (BoSC condition). On the contrary, decreased bilateral beta oscillations in frontal, parietal and occipital areas during ball shadow perception (BaSC condition) were significant predictors of decreased bilateral frontal activity during the decision making on body shadow (BoSC condition) (Figure 5).
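The effect sizes reported above follow directly from the squared multiple correlations; a quick check of Cohen's f² = R²/(1 − R²) reproduces both reported values.

def cohens_f2(r2):
    """Cohen's f-squared effect size for a multiple regression with R-squared = r2."""
    return r2 / (1.0 - r2)

print(round(cohens_f2(0.29), 2))  # 0.41 -> "large" by Cohen's (1988) conventions
print(round(cohens_f2(0.22), 2))  # 0.28 -> "medium"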
Discussion

Based on the prediction that the neural activity associated with the perception of an object's shadow would be indicative of the neural activity associated with the body shadow, participants were immersed in a virtual environment and instructed to identify the ball shadow relative to the ball (BaSC) and their body shadow relative to their own position in space (BoSC). Data analysis included behavioral and electrophysiological measures.

At the behavioral level, the results indicated that the participants correctly identified the coincidence and non coincidence between the ball and its shadow (i.e., BaSC condition) and the body and its shadow (i.e., BoSC condition). They also revealed that reaction times (i.e., RTs) were similar between coincident and non coincident sessions, during the decision making, in the BaSC condition and in the BoSC condition. In other words, immersed in a virtual reality environment, the participants not only did not experience motion sickness, but were also accurate and quick. At the electrophysiological level, data analysis revealed that bilateral beta oscillations across anterior (frontal) and posterior (parietal and occipital) areas were similarly activated in "with" and "without" shadow sessions in both the BaSC and BoSC conditions. It was also shown that bilateral beta frontal, parietal and occipital activations were not differentially involved when participants discerned the coincidence or non coincidence between the ball and its shadow (i.e., BaSC) on the one hand, and their body and its shadow (i.e., BoSC) on the other. At the behavioral and electrophysiological levels, under both coincidence and non coincidence situations, participants analyzed the shadow and the object (represented by a ball) in an identical manner. Similarly, they analyzed the shadow of the body, a singular object, in the same way as they analyzed their own body position in space (i.e., the physical body). Expressly, shadows are visual objects like any other type of object (Giannopulu et al., 2022b).

Based on the predictions formulated in the current study, the multiple regression analysis showed that increased beta oscillations in frontal, parietal, and occipital areas during ball shadow perception predicted increased frontal activity during body shadow perception. However, decreased beta oscillations in frontal, parietal, and occipital areas predicted decreased bilateral frontal activity during body shadow decision making. As such, the results are coherent with previous assertions according to which shadows are visual objects (Casati, 2012). They also enrich these assertions, as it has been demonstrated that two kinds of shadow, geometric 3D shadows (i.e., a spherical ball) and body shadows (i.e., human shaped), were analyzed as visual entities. The findings are also consistent with Kersten et al.'s (1996) data, which displayed that shadows can afford relevant information about the object itself, including the object's motion, which corresponds to the ball and the body of the participant in the present case.

FIGURE 3: Graphical representation of the multiple regression analysis (MRA) on ball shadow perception (predictors) with regard to body shadow perception (outcome). The X axis represents the unstandardized predicted value of each participant for all cerebral regions in combination (i.e., frontal, parietal and occipital) in ball shadow perception; the Y axis depicts the outcome of the prediction for each participant with respect to body shadow perception (frontal). Bilateral beta (13.5-30 Hz) oscillations of the frontal, parietal and occipital brain areas associated with ball shadow perception predicted the activation of frontal beta oscillations corresponding to body shadow perception [R² = 0.29, adjusted R² = 0.22, F(3, 35) = 4.54, p = 0.009; a combined effect of this magnitude can be considered "large" (f² = 0.41)].
Overall, these findings illustrate that shadows are a reflection of objects and do not occur without objects (Mamassian, 2004; Casati, 2012). The present results extend this, as it was demonstrated, for the first time to the authors' knowledge, that shadow affordance can also occur in 3D virtual environments, where it is analyzed as part of a full 3D perception of the ball and body in the virtual scene. The findings also revealed that the motion of the shadow relative to the motion of the ball does not induce illusory motion of the objects, even though the participants were immersed in a virtual environment conducive to inducing illusory behavior (Eskinazi and Giannopulu, 2021). Interestingly, this holds for both ball and body shadows. At first glance, the results seem to contradict findings published by Kersten et al. (1996), who found that illusory motion of objects (i.e., apparent motion) can be induced from the motion of shadows. They also seem inconsistent with reports describing the induction of an illusory sensation of the whole body, i.e., "shadowed" changes in a patient's body position triggered by electrical stimulation of the temporoparietal junction (Arzy et al., 2006). A possible interpretation lies in the fact that efficient perception of objects and shadows results from their mutual interaction, which seems to occur equally easily when objects and shadows have a consistent shape or are linked by coincident or non coincident motion patterns. However, the methodological differences between the current study and the aforementioned studies do not enable a direct comparison of the results. Essentially, the previous studies did not analyze ball and body shadows in the same population and in 3D virtual environments, and they did not consider neural electrophysiological and behavioral components, as was the case in the current study. Consistent with Lovell et al. (2009), the current findings suggest that the ball and body shadows were both represented and unambiguously analyzed by the visual system. All participants were able to visually perceive the shadow (i.e., ball or body) and, depending on the condition, decide (i.e., by clicking a key-response) whether it was coincident or non coincident with the ball or their own body. In both coincidence and non coincidence conditions, the results indicate that participants exhibited similar reaction times within each experimental condition individually, namely BaSC and BoSC. It seems that the visual system detects the coincidence or non coincidence between the shadow that the ball casts, or the one that the body casts, in 3D virtual environments without identification errors of illusory motion, that is, without anisotropy. That is to say, once immersed in the virtual environment, all participants were able to correctly perceive and report the relationship between each entity (i.e., ball and body) and its respective shadow. In other words, the coincidence or non coincidence of the ball shadow on the one hand, and of the body shadow on the other, with respect to the participant's position in space, did not modulate the judged relationship between each visual entity and its respective shadow in the 3D virtual environment.
More than a peripheral visual analysis, that is, at the retinal level, the beta oscillations of frontal, parietal and occipital neural activity bilaterally did not differ between coincident and non coincident sessions for either entity: the ball and its shadow, and the body and its shadow. This suggests that the coincidence or non coincidence between entities and shadows was not represented in distinct areas of the brain, but that the representations of these entities and shadows would depend on environmental landmarks and egocentric perception. Specifically, the results suggest activation of the occipito-parieto-frontal pathway, which belongs to a distributed neural network and is involved in embodied actions (Tootell and Taylor, 1995). These results provide support for the assumption that the brain deduces information associated with the position of visual entities (i.e., the ball and the body and their shadows) from bodily signals (Paillard, 1991; Yamamoto et al., 2005; Arzy et al., 2006; Blanke, 2012; Riva, 2018). The findings also imply that immersion in 3D virtual environments does not affect the brain's inferential capacities for either entity (i.e., ball and body) and the shadows that they cast. With the above in mind, it appears that not only real but also virtual entities, including their (virtual) shadows, can modify cerebral representations, and that the cerebral representations of these entities and the relationship they sustain with the body are constantly updated in virtual environments.
The results can also be associated with recent data demonstrating that body shadow animations involve frontal neural activity in healthy participants (Alaerts et al., 2009) and improve body representations in stroke patients (Russo et al., 2017). In both coincident and non coincident sessions, and for both entities (i.e., ball and body and their shadows), participants reported only correct responses and exhibited similar anterior and posterior beta oscillatory activity. The current findings are consistent with the statement that objects and their shadows are incorporated into the body: they improve body representations and extend the body (Poirier and Hardy-Vallee, 2005; Kuylen et al., 2014). As such, these results also appear to support recent studies and, more importantly, scientific speculations (Pavani and Castiello, 2004; Pavani and Galfano, 2007) which suggest that the incorporation of objects and shadows, and the resultant body extension, occur not only in real but also in virtual reality environments. Depending on the activation of distributed representations of visuospatial and sensorimotor information in the occipito-parieto-frontal pathways (Goodale, 2008), the findings support the view that the capacity to identify objects is driven by the sensorimotor experience people have with objects (e.g., the ball and the body in the current situation), and this seems to be the case in both real and virtual environments. Based on the similarities in brain activity, it appears that perceptual processes in real and virtual environments are both "object-dependent" and "shadow-dependent."

Considering that the internalization, emulation and simulation of shadows could serve as a predictive model that envisions the neural activity of the body shadow, the current results indicate that the bilateral frontal, parietal and occipital beta oscillations associated with the ball shadow might be an indicator of the bilateral frontal beta oscillations related to the body shadow. Specifically, it appears that body shadow perception was predicted by object shadow perception. Bilateral frontal, parietal and occipital beta oscillatory activity associated with the ball shadow likely preceded the body shadow and provided a direct measure of the frontal beta oscillations that correspond to the neurophysiological correlates of prediction. Body shadow perception would be "ball shadow-dependent." This is not only consistent with existing data reporting that objects' shadows are considered a continuation of the body and are inclined to create a sense of embodiment, i.e., they are part of ourselves (Kuylen et al., 2014), but also enriches these data by suggesting that embodied objects' shadows contribute significantly to body representation and are used as a predictive reference for the body shadow. The body shadow would therefore be seen and understood as a substitute for the organic body; that is, the body shadow is a kind of vicarious body.
One may object that it is debatable whether the bilateral beta frontal oscillations are an authentic reflection of body shadow perception, i.e., whether the cortical sources underlying the inferential potential reflect the body shadow and its relationship with the participant's position in space even before its presence. Nevertheless, it should be considered that the findings are consistent with several datasets according to which predictions associated with body representations involve frontal, parietal and occipital areas (Bubic et al., 2010). Contrary to the assumption that only motor areas (i.e., frontal areas) provide the basis for predictions (Pickering and Gambi, 2018), the current data represent an advance in aligning the motor (frontal) and sensory (parietal and occipital) correlates of prediction. As such, they provide relative support for the implication of motor areas in prediction, but also imply that predictive mechanisms involve multiple bilateral brain areas, including motor regions. More importantly, such predictive mechanisms seem to exist in real (Pulvermüller and Grisoni, 2020) and, as reported by the present study, in virtual environments.

FIGURE 4: Graphical representation of the multiple regression analysis (MRA) on ball shadow perception (predictors) related to the decision making on body shadow (outcome). The X axis represents the unstandardized predicted value of each participant for all cerebral regions pulled together (frontal, parietal and occipital) in ball shadow perception; the Y axis depicts the outcome of the prediction for each participant with respect to the decision making on body shadow (frontal). A decrease of bilateral beta frontal oscillations (13.5-30 Hz) related to decision making on the body shadow was predicted by a desynchronization of beta oscillations in frontal, parietal and occipital neural activity associated with ball shadow perception [R² = 0.22, adjusted R² = 0.15, F(3, 34) = 3.19, p = 0.036; a combined effect of this magnitude can be considered "medium" (f² = 0.28)].

Dynamic per se, the occipito-parieto-frontal cortical sources underlying these predictive capacities mainly reflect perceptual and motor components of prospective future events before they even occur. Sensory, motor and perceptual representations inherently generate probabilities, and draw on and construct prospective abstract representations of attainable percepts. In essence, neural prediction, along with inference and exploration, would be a supplementary general principle of cortical function in real and virtual environments. As a whole, the above-mentioned findings suggest that beta oscillations operated as an anticipatory filter throughout the cortex, inferring the location and likely timing of the body shadow, i.e., when and where it would occur. This is coherent with Sherman et al.
(2016) data, according to which beta motor and somatosensory coordination mediates top-down predicted behavior. They also imply possible functional similarities between sensory, motor and cognitive beta oscillations. The above considerations could also account for the inferential mechanisms related to the decision making process associated with the body shadow. The results demonstrated that beta oscillations of the bilateral occipito-parieto-frontal areas associated with ball shadow perception predicted a decrease in bilateral frontal neural activity at the beta band level related to the decision making process on the body shadow. Given the function of beta oscillations, an additional explanation could lie in the decision making process itself. Analyzing this requires a consideration of the components involved in this decision making. In the current study, the participants were instructed to press the key-response to declare whether the body shadow was coincident or non coincident with their own position in 3D space. Assuming that the decision making process involves three main temporal components (sensory, decisional and motor), the sensory component would correspond to the onset of the visual information and the onset of neural activation of occipital and parietal areas that underpin perception of the entities (i.e., both ball and body shadows). The decisional component would coincide with the time that elapsed between the occipito-parietal activation and the participant's encoded decision, and would involve premotor frontal activation indicating preparation to make a judgment. The motor component would correspond to the time necessary to produce a response after the decision, and would be associated with motor frontal involvement. Regarding beta oscillations, it has been suggested that their modifications before movement are associated with framing and designing the movement goal (Schmidt et al., 2019). In other words, bilateral beta desynchronization signifies the transition between the moment of somatosensory perception and the motor decision. According to the current findings, bilateral occipito-parieto-frontal beta desynchronization associated with ball shadow perception predicted bilateral frontal beta desynchronization associated with the decision making on body shadow perception relative to the body position. Specifically, a reduction of beta oscillatory modulation in sensorimotor areas associated with ball shadow perception mirrored a reduction in beta oscillations in frontal areas corresponding to body shadow perception.

The results are coherent with existing data showing that beta oscillatory modulations are interconnected with the characteristics of the information and the decision-making process (Herding et al., 2016; Spitzer and Haegens, 2017). They also suggest that bilateral beta oscillations during ball shadow perception would be reflective of the subsequent decision making on the body shadow. Such findings can be interpreted in light of a supramodal framework in which bilateral beta oscillatory modulations would mirror the dynamic recruitment of the shadow-relevant neural network (i.e., for both ball and body shadows). This is coherent with the flexible and transient mechanisms that underlie beta oscillations, which are reported to reflect functionally relevant representations, and to facilitate intercommunication between networks (Siegel et al., 2011) as well as perceptual and motor top-down interactions (Sherman et al., 2016).
In summary, the current study identified similar beta oscillations in bilateral frontal, parietal and occipital brain areas between "with" and "without" shadow sessions, and between coincident and non coincident motion patterns, within the BaSC and BoSC conditions independently. The coincidence or non coincidence between the ball and its shadow, and the body and its shadow, did not affect reaction times. In addition, it was found that body-shadow-specific beta oscillatory modulations in the bilateral frontal areas reflect ball-shadow-relevant sensorimotor perception, i.e., the dorsal visual pathway, and subsequent decision making in 3D virtual environments. Such beta oscillatory modulations would be an expression of the formation of predictive neural frontal assemblies, which encode and infer the neural representation of the body shadow, that is, a substitute for the physical body. These findings confirm existing data on the way the brain harmonizes itself with situations and obtains information from objects and their shadows.

A potential limitation of the current study is that participants in both the BoSC and BaSC conditions made real-time decisions by pressing a button. This could have introduced a hand lateralization effect in the reaction times and noise in the beta oscillations, potentially biasing the data. However, no lateralization effects or irrelevant oscillations were observed. In the current predictive scenario, the face could be expected to be the most significant part of the body and to be treated differently, as reported by Kanwisher and Yovel (2006), compared to the body itself, as described by Peelen and Downing (2005). However, it is important to note that in the present virtual reality (VR) shadow scenario, the face was considered an integral part of the body and was not distinguishable. Participants were unable to identify their own face when judging the coincidence or non coincidence between the object and its shadow, as well as between the body and its shadow. Furthermore, the studies conducted by Peelen and Downing (2005) and Kanwisher and Yovel (2006) did not include shadows or utilize virtual reality environments in their experimental designs. Additionally, they did not analyze predictive relationships between brain activities. Although methodological differences prevent direct comparisons, it can be argued that the present study and the aforementioned ones are consistent in demonstrating the involvement of distinct brain regions in the processing of objects and bodies. Beyond this, our study suggests that distinct brain areas are activated not only by objects and bodies but also by their corresponding shadows, which appear to be inherently predictable.

Notwithstanding, the current findings extend the existing data to the supramodal process by demonstrating that predictions are not exclusive to motor processing, but also involve somatosensory and sensorimotor areas bilaterally. Furthermore, the current data suggest the existence of a cortical neural network in which the beta oscillatory dynamics of object shadows provide a mechanism for the formation of functional networks during the internal re/activation of body-relevant cortical representations. As such, it can be suggested that prediction, along with inference and exploration, are general principles of cortical functioning in real and 3D virtual environments.
Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

FIGURE 5: The figure summarizes (A) the beta oscillations in frontal, parietal and occipital areas bilaterally for ball shadow perception as a predictor of the increased frontal activity during body shadow perception (B = 1.09, p = 0.024; B = 0.21, p = 0.028; B = 0.91, p = 0.002, respectively), and (B) the bilateral decreased beta oscillations in frontal, parietal and occipital areas anticipating decreased frontal activity during the decision making on body shadow (B = −1.27, p = 0.022; B = −0.32, p = 0.005; B = −0.80, p = 0.018, respectively). The variations (+1.27 vs. −1.27) of the unstandardized coefficient (B) are illustrated on the color bar.

TABLE 1: Multiple regression analysis (MRA) on ball shadow perception (predictors) with regard to body shadow perception (outcome).

TABLE 2: Multiple regression analysis (MRA) on ball shadow perception (predictors) related to the decision making on body shadow (outcome).
2024-04-07T15:31:22.606Z
2024-04-05T00:00:00.000
{ "year": 2024, "sha1": "2c20df40a76079310b08ac2c9630e7fd16ad3422", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1149750/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "545e702aba7a38129eb8b1c1306cc2bd844494ee", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
214156775
pes2o/s2orc
v3-fos-license
Mapping of Domestic Hot Water Circulation Losses in Buildings – Preliminary Results from 134 Measurements

The hot water circulation system in a building is a system which helps prevent Legionella problems whilst ensuring that tenants have access to hot water quickly. Poorly designed or implemented systems not only increase the risk to people's health and thermal comfort, but can even increase the energy needed for this system to function properly. Results from previous studies showed that the total hot water circulation system loss can be as high as 25 kWh/m² heated floor area per year. The purpose of this project is to measure the total energy use per year of the hot water circulation system in about 200 multifamily dwellings of different ages, to verify whether a system loss of 4 kWh/m²·year is a realistic assumption for both newer and older/retrofitted buildings. The preliminary results from the first 134 measurements showed that the assumption of 4 kWh/m²·year is rarely fulfilled. An average energy use of more than three times this value is more common, even in newer buildings. Whilst some of the total energy lost contributes to heating the buildings, it is not desirable because it is an uncontrolled energy flow.

Background

The domestic hot water circulation system (DHWC) in a building is a system that continually circulates the domestic hot water (DHW) within a building, see Fig. 1. DHW is heated in the hot water heater and sent out to the apartments. Hot water that is not used is sent back in a loop to the hot water heater, where it is re-heated to the set DHW temperature and sent back out to the apartments. This system operates continually. It is used in Sweden as a means of ensuring people have access to hot water quickly while at the same time preventing problems with Legionella. The Swedish National Board of Housing, Building and Planning (Boverket) has several requirements which must be fulfilled, as well as recommendations that should be fulfilled, when designing and installing hot water circulation systems in buildings [1] (BBR). One of the recommendations is that the tenant should get hot water (between 50 °C and 60 °C) within 10 seconds, assuming a DHW flow of 0,2 l/s. Some interpret this recommendation such that the water should be hot without using more than 2 litres of water. One of the regulations is that the return line of the circulation system shall not have a temperature lower than 50 °C. In practice, if the return line temperature is too low at the designed flow, the flow of the DHWC system is adjusted up so that the return temperature is a little over 50 °C. This should be done without using too much energy.

Low energy buildings, Passive Houses and nearly Zero Energy Buildings (nZEB) use less energy than before, and the DHW and DHWC losses are becoming a more significant part of a building's energy loss. Renovating a building with a focus on energy use presents several different challenges. While DHW use is often reduced by installing low-flow faucets, the DHWC system is often overlooked and can as a result ruin the total energy performance of a building. This was the case with the Swedish object within the E2Rebuild project (a building from 1963, renovated between 2011 and 2013), where the DHWC system energy use was assumed to be the Swedish standard of about 4 kWh/m²·year (ca 6178 m² heated floor area, 91 apartments, built in 1963).
[2]. The first measurements indicated that the combined DHW and DHWC energy use was about 20 kWh/m²·year (based on measured DHW use + assumed DHWC). A deeper analysis done after the E2Rebuild report (unpublished, from 2017), with more detailed measurements, showed that the DHW actually had an energy use of about 15 kWh/m²·year and the DHWC about 28 kWh/m²·year, almost double the DHW energy per year. The heat loss in this case was due to a combination of poorly insulated pipes and the total length of the system. For example, the pipes running through the underground parking garage had no insulation. There was also a large length of pipe which ran under the basement slab servicing other parts of the building. It was unknown if there was insulation on these sections or not.

Previous studies of DHWC losses

Several studies have been done to determine the actual losses from the DHWC system in existing buildings in Sweden. The Swedish building code states that DHWC losses must be included in a building's primary energy use calculation. Therefore, it is valuable to have information on the energy use of such systems in practice so that it is not underestimated during the design stage. In 2017, the Swedish Building Service (Svensk Byggtjänst) wrote a summary of different technical solutions used for hot water circulation systems and how to make them more energy efficient [3]. The summary presented results from several studies [4-6] and various technologies to reduce DHWC losses in future projects. In Alros [4], the hot water circulation system losses in two apartment buildings were measured to be 11 and 37 kWh/m² heated floor area and year, respectively. In Bergqvist [5], the hot water circulation system losses were measured in 12 apartment buildings. The results of this study showed that the losses varied between 2.3 and 28 kWh/m² per year, with an average value of 9.1 kWh/m² per year. Bergqvist also wrote in a later report that in Sweden, a value of 4 kWh/m² per year is often the assumed DHWC system loss when doing energy calculations [6]. In a 2014 study by Lindencrona and Lindsköld [7], the total heat loss from the DHWC system was measured for 540 buildings (mainly residential). The measurements were done during hot summer nights to reduce the risk of other factors influencing the results. The results showed an average loss of 17.4 kWh/m² per year. This figure should be seen as a potential worst-case scenario, as it is unknown how other heat sources, such as transmission losses in the utility room, DHW use, active heating systems (circulation pumps) and other types of heating systems, impact the measurements. These potential heat sources are not uncommon during the summer months, even though there is no heating need [7]. An SBUF report by Kempe from 2013 [8] reports that hot water circulation losses in a multi-family building are usually between 5-8 kWh/m²·year but can be as high as 20-25 kWh/m²·year in some cases. The report also recommends several solutions for designers/engineers to minimize these losses, for example by shortening the length of the system in apartments by placing washrooms and kitchens close to the service point (less than 12 meters) [8].

Purpose of this paper

The purpose of this paper is to present the first short-term DHWC energy losses measured in the project and compare them with values from the literature study.
The purpose of the project is to:

• Document and collect building information and measured values for multi-family buildings built in different time periods.
• Document and collect measured values for newer multi-family buildings, creating a feedback loop back to designers.
• Test various simplified methods of calculating the DHWC system energy losses using information such as pipe length, insulation thickness, flowrates, etc.

Limitations

This paper has several limitations. The objects are only multi-family dwellings. The objects have a variation of building years which depends on the distribution of the number of multi-family buildings built each year in Sweden. Fig. 2 shows the number of planned measurements for various building years based on the building stock in Sweden. This paper presents preliminary results and only what has been measured to date. The results are not yet analysed, and several objects with unusual results will be analysed in more detail in the spring of 2021. This paper presents measured values and the data has not been completely verified. Some errors can still be present in the presented data; however, the data has been checked after the measurement periods for values that could indicate problems with the measurement equipment. Errors from the measuring equipment themselves have not been analysed but will be taken into account during the final analyses. Another limitation in this paper is that most of the preliminary results show the total energy use per year for the DHWC system. In order to convert these into energy loss outside of the heating season, the power signature of the building must be known in order to determine the balance temperature of the building. The presented measurements can contain data collected during unknown problems with the DHWC system (such as a temporarily low flow or incorrect temperatures due to bad sensors or pumps). Efforts to reduce the risk of these types of errors have been taken before the measurement periods, such as having open communication with all the object owners regarding operational stability and errors/faults.

Measured buildings and methods of measurement

The project will collect DHWC heat losses and associated information for each building and its DHW system using several methods. A total of 195 short-term measurements will be done using calibrated, portable measuring equipment. More on the measuring equipment can be found in Section 2.1, Measuring equipment. In addition to this, at least 50 measurements will be collected in newer buildings with the object's integrated measurement equipment (built in to the hot water circulation system). Finally, five long-term measurements are being done using portable, calibrated equipment.

The collected dataset on each building includes, when possible, parameters such as estimated pipe length, number of apartments, number of stairwells, floors, heated floor area, DHWC flow, temperatures (cold water, DHW, DHWC), insulation thickness (DHW and DHWC), uninsulated pipe length, pipe diameter, number of shafts, IR images, type of insulation, estimated DHW use, and more. This data is collected from drawings (when available), historical measurements from the building's systems, measurements done on site within the context of this project, and information obtained from the building owner. A simple record structure for these parameters is sketched below.
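A minimal sketch of such a per-building record, written here as a Python dataclass. The field names are our own shorthand for the parameters listed above, not the project's actual database schema; the example values are purely illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BuildingRecord:
    """One building in the DHWC measurement dataset (illustrative field names)."""
    building_year: int
    heated_floor_area_m2: float
    n_apartments: int
    n_stairwells: int
    n_floors: int
    pipe_length_m: Optional[float] = None            # estimated from drawings, if available
    uninsulated_pipe_length_m: Optional[float] = None
    insulation_thickness_mm: Optional[float] = None
    pipe_diameter_mm: Optional[float] = None
    dhwc_flow_l_per_s: Optional[float] = None
    t_cold_c: Optional[float] = None
    t_dhw_c: Optional[float] = None
    t_dhwc_return_c: Optional[float] = None
    notes: str = ""

# Example values only; not a real object from the study.
record = BuildingRecord(building_year=1963, heated_floor_area_m2=6178,
                        n_apartments=91, n_stairwells=4, n_floors=3)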
The short-term measurements with portable measuring equipment are done over a five-day period. To minimise the effects of mounting and dismounting the equipment, only the three middle days of the five-day period are used in the data analyses. This means that the measurement periods during the days when the equipment is put out and retrieved are excluded. This results in three 24-hour periods measuring from 00:00 to 23:59. Long-term measurements run for one year using the same equipment types as the short-term measurements.

The measuring principle is to measure the water temperature in the return before the water heater (#1 in Fig. 3), the water temperature leaving the water heater (#2), and the water flow of the DHWC system (#3).

Fig. 3: Measurement points in the DHWC system.

With this information you can calculate the total energy use of the system according to:

$$E_{\mathrm{DHWC}} = q \cdot \rho \cdot c_p \cdot (T_2 - T_1) \cdot t$$

where q is the measured DHWC flow, ρ and c_p are the density and specific heat capacity of water, T₂ and T₁ are the temperatures at measurement points #2 and #1, and t is the length of the period. Observe that the short-term measurements are extrapolated to show the total energy use per year for the DHWC system. If the energy use outside of the heating season is required (for example when it is assumed that the DHWC system energy losses heat the building), then t = the hours the building is not heated. In this case the time depends on the balance temperature of the building and the local climate. See Eriksson et al. [9] for more information about how this can be done in practice. A minimal numerical sketch of this computation is given after the equipment description below.

Measuring equipment

Flow measurements are done with an ultrasonic measurement device, the Fluxus F601. The instrument provides instantaneous measurements. The accuracy depends on the calibration method used: standard calibration gives an accuracy of ±1,6% of the reading, extended calibration ±1,2% of the reading, and field calibration ±0,5% of the reading [10]. Temperature measurements are done with two Tinytag Ultra 2 loggers with external sensors. The sensors are attached directly to the DHW pipe after the heat exchanger and to the return line before the cold-water pipe. The accuracy of the Tinytag Ultra 2 is ±0,45 °C [11].
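A minimal sketch of the computation described above, assuming logged flow and temperature series at the three measurement points. The one-minute logging interval and all numerical values are our assumptions for illustration, not the project's software or data.

import numpy as np

RHO = 998.0    # density of water, kg/m^3 (approximate, at about 20 degC)
CP = 4186.0    # specific heat capacity of water, J/(kg*K)

def dhwc_energy_kwh(flow_l_per_s, t_supply_c, t_return_c, dt_s=60.0):
    """Total DHWC energy use over the logged period.

    flow_l_per_s: DHWC flow at point #3 (l/s), one sample per dt_s seconds.
    t_supply_c:   temperature leaving the water heater, point #2 (degC).
    t_return_c:   return temperature before the water heater, point #1 (degC).
    """
    q_m3_per_s = np.asarray(flow_l_per_s) / 1000.0
    d_t = np.asarray(t_supply_c) - np.asarray(t_return_c)
    power_w = q_m3_per_s * RHO * CP * d_t            # instantaneous power, W
    energy_j = np.sum(power_w) * dt_s                # integrate over the period
    return energy_j / 3.6e6                          # J -> kWh

# Example: three 24 h days of one-minute samples, extrapolated to a full year
# and normalized by a 6178 m2 heated floor area (the E2Rebuild building size).
n = 3 * 24 * 60
e_3days = dhwc_energy_kwh(np.full(n, 0.5), np.full(n, 57.0), np.full(n, 52.0))
per_m2_year = e_3days * (365 / 3) / 6178             # kWh/m2*year
print(round(per_m2_year, 1))                         # about 14.8 for these made-up inputs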
Preliminary Results and discussion

Measured energy use

It is important to observe that the project is still gathering data from different projects and none of the data has been fully analysed. This paper presents the preliminary results from the first 134 buildings in Fig. 4 and Fig. 5. The values in Fig. 4 are shown as the total energy use per year for the DHWC system for objects 1 to 134, and range from 0,5 kWh/m²·year to 76 kWh/m²·year. Fig. 5 shows both the measured and calculated energy use (when available) for objects 100 to 134. These objects are newer than 2015. In Sweden, this is not always calculated specifically for the project, and in these cases a value of 4 kWh/m²·year was assumed. To see whether there was any trend between the age of the building and the energy use of the DHWC system, the measured energy use data was plotted against building year. Fig. 6 shows that buildings have a large variation in DHWC system losses no matter what year they were built, with the highest value recorded for a building from 2001. The largest value (76 kWh/m²·year) had to be verified, since it was thought that an error had occurred during the measurement period. A new measurement with newly calibrated equipment showed that this result was correct. Fig. 7 shows that this spread can still be large even in new buildings. However, looking at the average DHWC system energy use, older buildings have an average energy use of 14 kWh/m²·year and the newer buildings an average of 6 kWh/m²·year.

Comparison between measured and calculated energy use

In some of the newer buildings built between 2016 and 2019, the calculated energy use for the DHWC system was provided. The difference between the measured and calculated energy use is shown in Fig. 8. Some of the objects, for example 1-9, 19, 21 and 24-28, show good agreement, whilst others, such as objects 23, 29 and 33, have very high measured values. It is not yet known why there is such a large variation between these DHWC systems, and the explanation for these differences may vary in each case. For example, the explanation for the high energy use in the system with the highest measured value of 76 kWh/m²·year was partly documented. It is a quattro pipe system, see Fig. 9, which has a long length of pipe running underground from a central distribution point to several buildings. In this case we know that the quattro pipe contains the DHW and DHWC lines along with a low-temperature heating loop (supply and return for hot water radiators). After talking to a representative of the building owners, it seems that there is a significant heat transfer from the DHWC system to the low-temperature heating system. This was made apparent during the summer months, when the building owners observed that the radiator system could still contain water over 30 °C. The length of the system also means that there is a large heat loss to the ground, so much of the heat lost is not used in the buildings.

The other extreme is when the energy use is very low. In one such case, where the energy use was below 0,5 kWh/m²·year, it was noted on site using thermography that the DHWC system had a "short circuit": there was a section of pipe that sent the DHW back to the water heater shortly after it left. From this case, it is important to note that low energy values (less than 2 kWh/m²·year) do not mean that the DHWC system is more energy efficient. This system likely does not fulfil the Swedish building code, because the hot water temperature in the system does not stay above 50 °C. This has brought up the question of whether extremely low energy losses are the result of an energy efficient design, or whether they indicate a faulty DHWC system. Another observation in several cases is that the DHWC pipes sometimes go through the concrete floor with no insulation. This results in energy losses due to thermal bridges. The analysis phase of the project has not yet begun, and further analyses will be done on the dataset once it is complete.

Conclusions and future work

The preliminary results showed that the DHWC systems used in Swedish multi-family buildings have a large variation in energy losses between what is assumed/calculated and what is measured. The preliminary results also show that newer DHWC systems used in Swedish multi-family buildings are somewhat more energy efficient than those from previous years, but the DHWC system still needs to be taken into account both in new buildings and when implementing energy efficiency measures in existing buildings. Since this project is still ongoing, measurements will continue over 2020 and 2021. Several cases will be chosen for further investigation regarding their very high or extremely low energy use, in order to determine whether the system performed as designed and why their energy use deviates.
Some possible explanations for both high and low energy use that will be examined include losses due to thermal bridges (where the DHWC system goes through a concrete floor with little or no insulation), differences in DHWC system design (like the quattro pipe) and the prevalence of short circuits.
2020-03-19T19:55:51.260Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "534d0310808e21afa543729b42f70245674c45b6", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/32/e3sconf_nsb2020_12009.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ab4a323920b5fd62dc47403eed4d9f756d6aa20d", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
118339973
pes2o/s2orc
v3-fos-license
The"in-in"Formalism and Cosmological Perturbations We describe an efficient scheme for evaluating higher order contributions to primordial cosmological perturbations using the"in-in"formalism, which is the basis of modern calculations of non-Gaussian and higher order contributions to the primordial spectrum. We show that diagrams with two or more vertices require careful handling. We present an implementation of the operator formalism in which these diagrams can be evaluated in a simple and transparent fashion. We illustrate our methodology by evaluating the correction to the primordial gravitational wave spectrum generated by scalar loops, a 2-vertex, 1-loop interaction. We then look at a generalized $N$-point, 2-vertex diagram. I. INTRODUCTION The pairing of the Arnowitt-Deser-Misner (ADM) formulation of general relativity [1] and the "in-in" approach to quantum field theory [2,3,4,5] is a remarkably powerful tool for analyzing cosmological perturbations 1 . First applied to cosmology by Jordan [8] and Calzetta and Hu [9], Maldecena's treatment of primordial non-Gaussianities [10] established it as the preferred approach to computing both N -point correlation terms [11,12,13,14,15,16,17] and loop corrections [18,19,20,21,22,23,24,25] to the inflationary perturbation spectrum. The perturbations generated by very simple models of inflation are accurately described by the long established lowest-order expressions [26,27,28]. However, current [29] and forthcoming experiments [30] will put tight limits on the primordial bispectrum (and possibly the trispectrum), and thus place tight constraints on inflationary models with non-trivial perturbation spectra, so these calculations are of substantial practical importance. The purpose of this paper is to present an efficient and transparent scheme for implementing the operator formalism [18]. Additionally, we warn that in some formulations, diagrams with more than one vertex 2 can become pathological if their external momenta or vertices are not distinguished from one another in some way. The problem can be viewed as one of initial conditions -"in-in" takes the initial states of some set of fields, |in , and calculates the expectation value of some set of operators with respect to these fields at a later time t, |Ω(t) . Specifying the initial conditions typically amounts to choosing the Bunch-Davies vacuum: the dynamical degrees of freedom behave like harmonic oscillators at very small length scales, and each mode is assumed to be in its ground state. There is then an operational prescription that enforces this selection in field theoretic calculations. As we show below, specifying the initial conditions implicitly restricts the algebraic manipulations one may perform. To calculate an expectation value at some arbitrary time in the interaction picture we need to evolve the in-state forward in time. Operationally, the simplest way of incorporating this initial condition is to include some evolution in imaginary time, t → t(1 + iǫ). There are (at least) two simple ways of expressing this choice. One may define the integration in the time evolution operator to run along a contour in the complex plane, rather than along the real axis. Alternatively one may analytically continue the time variable itself to include a small imaginary component, leaving the integral on the real axis. Importantly, some algebraic manipulations which are permitted in one formulation are forbidden in the other. 
Interestingly, this problem cannot arise in conventional "in-out" calculations, or in "in-in" calculations for diagrams with a single vertex. This discussion is thus timely, since theoretical analyses of primordial perturbations have matured to the point where more complicated interactions are now routinely considered [31,32]. We use the operator formalism introduced by Weinberg [18], as it provides an efficient and transparent scheme for performing in-in calculations, and the specific approach we develop here can significantly reduce the algebraic workload associated with a given diagram.

This paper is organized as follows. We review the operator formalism in Section II, using the notation summarized in the Appendices of our previous paper [23]. We review two different prescriptions for injecting the Bunch-Davies initial conditions, showing how to obtain consistent results within the formalism of Weinberg [18]. In the two following sections we present sample calculations. In §III we calculate scalar loop corrections to the graviton power spectrum. As one would expect, the scalar corrections to graviton (tensor) perturbations are too small to have any practical impact, but the calculation is a useful example of our overall methodology. In §IV we write down a two-vertex contribution to the N-point correlation function with p internal lines. With N = 2, p = 2 this is a correction to the propagator with the same topology as the graviton loop correction of §III, and for N = 4, p = 1 we have the topology of the graviton [31] and scalar [32] exchange contributions to the scalar 4-point function (trispectrum). We show that these diagrams can become problematic in limits where the external momenta are not distinguishable. We demonstrate that explicitly injecting the initial conditions by deforming the time contours into the complex plane at the outset sidesteps any difficulties and substantially simplifies the algebra. We conclude in Section V.

II. SPECIFYING INITIAL CONDITIONS IN "IN-IN"

In the in-in formalism, calculations of the expectation value, ⟨W(t)⟩, of a product of operators W(t) at time t, require that we evaluate

$$\langle W(t) \rangle = \langle \mathrm{in} | W(t) | \mathrm{in} \rangle \,, \qquad (1)$$

where the fields on the right hand side are Heisenberg fields. The expectation value is taken with respect to the initial state, |in⟩, which we assume to be the Bunch-Davies vacuum. The interaction Hamiltonian H_int is defined in the usual way, so that the total Hamiltonian H is the combination of H_int and the free-field Hamiltonian, H_0,

$$H = H_0 + H_{\mathrm{int}} \,. \qquad (2)$$

Analyses of cosmological perturbations typically start with the Einstein-Hilbert action of general relativity together with the appropriate matter action. One then uses the ADM formulation [1] to obtain an action containing only dynamical degrees of freedom [10,21]. From the action one constructs the Hamiltonian by defining conjugate momenta, and separating the quadratic from the higher order parts: H_0 consists of the terms that are quadratic in the perturbative degrees of freedom (and thus free), while H_int consists of all third and higher order terms [18]. The free Hamiltonian H_0 drives the evolution of the operators, while H_int evolves the states. This separation is natural, since in a homogeneous and isotropic background we can find the eigenstates of the free field Hamiltonian at past infinity. The interaction terms generally have derivative couplings even when the action contains only canonical kinetic terms. These derivative couplings are the end result of perturbatively expanding the action.
In models with nonstandard kinetic terms such as DBI inflation [33] or k-inflation [34], derivative couplings are generically present at the outset. Consequently, we must proceed carefully when canonically quantizing the theory. If L_int is the portion of the action with terms of third order and higher, the usual expression for the interaction Hamiltonian, H_int = −L_int, acquires corrections at fourth order, as first shown in [23]. In slow roll inflation these corrections are proportional to powers of the inflationary slow-roll parameters, but they are generically unsuppressed in DBI or k-inflation [32,35]. In the more general case, for instance if one wants to calculate correlations in the radiation dominated era, there will be extra interaction terms. To make contact with the familiar "in-out" formalism of QFT, insert complete sets of states labeled by α and β into equation (1). The interpretation is clear: the "in-in" correlation is the product of vacuum transition amplitudes ("in-out") and a matrix element ⟨α|W(t)|β⟩, summed over all possible "out" states. The "in-in" formalism is simply standard QFT, rigged to compute correlation functions at fixed time, given initial conditions instead of asymptotic boundary conditions. Initial conditions in QFT are usually specified by finding the eigenstates of the free Hamiltonian H_0 and stipulating that the system begins in one (or some combination) of these eigenstates. If the system begins in the quantum mechanical vacuum, this amounts to putting our system in the vacuum state of H_0 at the initial time. Operationally, the vacuum is selected by redefining the range of t to include a small imaginary component, t → t(1 + iε) [36]. There are two (and possibly many more) ways one can incorporate this choice within a calculation: 1. Define the time integration in the time evolution operator to run over a contour in the complex plane: with this choice the integration limits in Eqn. (1) are deformed to −∞± ≡ −∞(1 ± iε), with conjugate contours for the evolution operators standing to the left and to the right of W (Eqn. (5)). Once this is done, complex conjugating the time evolution operator means that the time-forward contour does not coincide with the time-backward contour. The vacuum specification has broken the time symmetry of the forward and backward time integrals. 2. Analytically continue the interaction Hamiltonian occurring in the time evolution operator so that it becomes a function of a complex variable, H(t) → H(t(1 ± iε)). This is achieved by analytically continuing each of the times occurring in the expansion of Eqn. (1). Since the momenta k generically appear with the conformal time τ in the combination kτ, this procedure is equivalent to an analytic continuation of the momenta flowing through the vertex. In practice one needs only to analytically continue the times or momenta appearing in the exponents of the mode functions. If we ignore this issue for a moment, it is straightforward to express Eqn. (1) as the nested-commutator expansion [18,37] ⟨W(t)⟩ = Σ_{N=0}^{∞} i^N ∫_{t_0}^{t} dt_N ∫_{t_0}^{t_N} dt_{N−1} ⋯ ∫_{t_0}^{t_2} dt_1 ⟨[H(t_1), [H(t_2), ⋯ [H(t_N), W(t)] ⋯ ]]⟩. (7) Eqn. (7) is formally consistent with Eqn. (1). However, this manipulation is only self-consistent if we use the second vacuum specification prescription above. Recall that for any symmetric, holomorphic function f, ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t} dt_2 f(t_1, t_2) = 2 ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t_1} dt_2 f(t_1, t_2). (8) Because f is symmetric under the exchange of its arguments, the integral over the square region on the left-hand side of Eqn. (8) is twice the right-hand integral over the lower triangle, where t_2 < t_1. This result is easily generalized to more variables, and moving from (1) to (7) requires repeated manipulations of this form. The manipulation requires that the integrations be interchangeable.
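Equation (8) is easy to verify numerically. The following minimal sketch (with an arbitrary symmetric integrand of our own choosing) checks that the integral over the square equals twice the integral over the lower triangle:

    # Check of the square/triangle identity used to pass from Eqn. (1) to
    # Eqn. (7): for f symmetric under t1 <-> t2, the integral over the square
    # [t0, t]^2 equals twice the integral over the triangle t0 < t2 < t1 < t.
    import numpy as np
    from scipy.integrate import dblquad

    def f(t1, t2):                       # any smooth symmetric function
        return np.cos(t1 - t2) * np.exp(t1 + t2)

    t0, t = -3.0, 0.0
    # scipy integrates func(y, x) with x the outer variable
    square, _ = dblquad(lambda t2, t1: f(t1, t2), t0, t, lambda t1: t0, lambda t1: t)
    triangle, _ = dblquad(lambda t2, t1: f(t1, t2), t0, t, lambda t1: t0, lambda t1: t1)

    print(square, 2 * triangle)          # agree to quadrature accuracy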
The vacuum specification in Method 1 above breaks the equivalence of the integrals arising from operators on the right and left of W in Eqn. (5). Consequently, terms like HWH, together with the contour specification, prevent one from consistently writing down Eqn. (7), as the lower triangle is no longer identical to the upper triangle, due to the asymmetry of the contour specification. Persisting with this approach risks losing information about part of the region of integration. In the second prescription, no contour is specified, so the manipulations above are perfectly safe. However, in applying Eqn. (7) one splits terms (e.g. H(t_1)W(t)H(t_2)) arising from Eqn. (1), and these must be summed before any limits are taken in order to avoid introducing unphysical divergences. At second order, Eqn. (7) yields terms of the schematic forms HHW, HWH and WHH. These appear to occur in conjugate pairs, so it may seem that the expression reduces to twice the real part of one member of each pair (Eqn. (10)). This manipulation is valid for the HHW and WHH terms, but the vacuum prescription above prevents one from writing the HWH terms in this fashion. Finally, one might proceed from Eqn. (7) without employing either of the prescriptions above, and regulate the oscillatory integrals in the far past by adding the appropriate small imaginary component to the initial point. In this case Eqn. (10) is again self-consistent, and for many diagrams this approach will work without difficulty. The in-in formalism associates each vertex with a time integration, and provided each vertex has a distinct momentum flowing through it, these distinct momenta effectively keep track of the distinct regions of integration. Adding all possible permutations of the momenta which arise from the sum over contractions, as per Wick's theorem [38], sums over all tessellations of the restricted integration region and picks up all contributions. This approach fails when the sums of the momenta at the vertices are not distinguished from one another, as would be the case if one decides to make a specific choice about the external momenta before the time integrals are done; in that case it can lead to spurious divergences, as we will see below. Consequently, our preferred approach is to work directly with Eqn. (5) rather than Eqn. (7). This not only avoids the unphysical separation of terms, but also involves much less algebra. III. GRAVITATIONAL WAVES FROM LOOPS OF SCALARS We begin with the one-loop correction to the graviton (tensor) power spectrum generated by loops of second order scalar modes. In spatially flat gauge, the degrees of freedom are scalar field fluctuations δφ and transverse-traceless metric fluctuations (gravitons) γ_ij. At leading order in slow roll, δφ couples to gravitons through a 3-point interaction of the schematic form a(τ) γ^{ij} ∂_i δφ ∂_j δφ [10], where τ is the conformal time; the would-be interaction δ_ij γ^{ij} δφ δφ vanishes by gauge choice, since γ_ij is traceless. This yields the interaction Hamiltonian, which we then transform to Fourier space. Expanding the free fields in Fourier modes, with the polarization tensors normalized so that Σ_{s,s′} ε^s_{ij} ε^{ij,s′} = 2δ^{ss′}, the mode functions U_k(τ) and γ_k(τ) are the solutions to the free-field equations of motion (obtained from the quadratic part of the action; see for example [39]). In the de Sitter limit the scalar mode function takes the standard form U_k(τ) = (H/√(2k³))(1 + ikτ)e^{−ikτ}, with γ_k(τ) of the same shape, and the propagators follow. To compute the one-loop contribution to the tensor power spectrum, we evaluate the diagram shown in FIG. 1. [FIG. 1: The Feynman diagram corresponding to the 2-point graviton correlation with a scalar loop.]
As we explain below, the diagram may be evaluated off-shell, so the momenta on the external legs are given distinct labels in order to avoid unphysical divergences. Here −∞± ≡ −∞(1 ± iε) denotes the contour choice. In the de Sitter limit a short calculation gives the expression (20), in which θ is the angle between the external momentum k and the internal momentum p′. However, if we use (10) to perform the calculation, the third line above is replaced by the expression (21) (dropping irrelevant prefactors), where a_1 = k + p′ + p″, a_2 = k(p″ + p′) + p′p″ and a_3 = kp′p″. In this case, if we do not carefully track the vacuum specification we will always run into a divergence. If one performs the integration on-shell, the lower limit in the first integral may be made to vanish by adding a small amount of evolution in imaginary time [10]. However, because the momentum flowing through each vertex is identical, the remaining integrand contains factors of e^{ia_1τ} and e^{−ia_1τ} which cancel, rendering the second time integral divergent. In writing down (21) we have implicitly ignored the difference between the contours for τ and τ′. With the vacuum specification explicitly included, the first integral leaves −i(a_1(1 + iε) − a_1(1 + iε′))τ in the exponential of the outer integral. Consequently, as long as ε, ε′ ≠ 0 the integral is finite. The final limit ε, ε′ → 0 is rendered finite and order-independent by symmetrizing over ε, ε′, which is now required because H(t_1)WH(t_2) and H(t_2)WH(t_1) are no longer conjugates. Performing this symmetrization is equivalent to performing the integration in the opposite order, and thus picks up the other term. Including the vacuum specification in this fashion is equivalent to performing the calculation off-shell and summing the result before taking the on-shell limit. Pursuing this approach one obtains a term of the form (22). Much of this work can be avoided if one begins from Eqn. (5), which yields Eqn. (20). The integral factors into an expression of the form |∫ dη f(η)|², which is not only well behaved everywhere but also reduces the double integral to a single integral. Carrying out the integration yields an expression which is quickly shown to be identical to the expression in Eqn. (22). In the full expression, K = k + p′ + p″; the second and third lines come from the first term of Eqn. (20), while the remaining terms come from the second term of Eqn. (20). The expression takes the form of a sum over powers of K, where f_α denotes the function of (p′, p″, k) associated with the α-th power of K (Eqn. (25)). We can eliminate the sin⁴θ term in the integral with a useful angular identity. We are only interested in the log k contributions to this integral, and the sin⁴θ term effectively contains a factor of K², so the only terms in equation (25) that are not simply polynomial divergences are those with α > 2. After some straightforward algebra we obtain (27), where the ellipses indicate polynomial terms which we will drop. Eqn. (27) is IR finite, as the integrand contains no negative powers of p′ or p″. Following [18], we dimensionally regularize this expression to obtain (28), where P_ζζ = H²/(4M_p²k³ε) and P_γγ = H²/(M_p²k³) are the uncorrected primordial curvature and tensor power spectra, respectively. The loop corrections to the tensor power spectrum via a scalar loop are of order (H/M_p)⁴, and thus minute, as we would expect.
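The finiteness-by-symmetrization step above can be distilled into a schematic model of our own: suppose each ordering of the HWH term contributes a simple pole in the difference of the regulators,

    T(ε, ε′) = g(ε′)/(ε − ε′),

where g is a smooth function encoding everything else in the diagram. Then

    T(ε, ε′) + T(ε′, ε) = [g(ε′) − g(ε)]/(ε − ε′) → −g′(0)  as  ε, ε′ → 0.

Each ordering alone blows up as the regulators are removed, exactly as the cancellation of e^{ia_1τ} against e^{−ia_1τ} suggests, while the symmetrized sum has a finite limit that does not depend on the order in which ε and ε′ are taken to zero; this mirrors how adding H(t_2)WH(t_1) to H(t_1)WH(t_2) picks up the other integration order.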
IV. THE N-POINT 2-VERTEX INTERACTION The splitting of the terms in the expansion of (1) can induce spurious divergences in general multi-vertex topologies, as well as in the 2-point diagram considered above. These divergences necessarily cancel in any careful calculation, but we now show that the problem can arise for a general N-point function. We consider the class of diagrams with N external legs, 2 vertices, and p internal lines; an example is given in Figure 2. Suppose we have a term in the interaction Hamiltonian of the form f(τ)δφ^m(τ), where f(τ) is some coupling. At second order one generates a general N-point function through a 2-vertex process with p internal lines. For simplicity we restrict ourselves to symmetric diagrams in which each vertex has the same form, and with no loops which begin and end on the same vertex, so that there are p = m − N/2 internal lines and N is even. At second order this diagram is generated by the expression (29). If one had used the expansion of Eqn. (7), one would instead have obtained (30), in which the second term of Eqn. (29) has been split into two pieces. In the de Sitter limit the second term becomes (31). The product of delta functions ensures overall momentum conservation, the p! counts the number of equivalent ways of contracting the internal lines, and the final sum over permutations counts the contractions of the external lines into the vertices. At this stage, the only differences between Eqn. (29) and Eqn. (30) are the limits of integration, and the fact that Eqn. (30) requires us to add the same term with the order of integration reversed. The first term can be obtained from this expression simply by flipping the signs of the k_i with i ∈ {1, ..., N/2}. Dropping all the irrelevant factors, suppressing the momentum integrals and putting in the integrations over the times, we arrive at (32), where Q and R are polynomials in the momenta. The upper limit of the τ_1 integral is labelled τ_a, and we will take it to be either τ_a = τ* or τ_a = τ_2. If we begin with Eqn. (29), then τ_a = τ* and Eqn. (31) is simply the product of two integrals. Moreover, after exchanging the momenta these integrals are simply complex conjugates of one another. To evaluate the term one must perform a single integral. Conversely, if we use Eqn. (30), then τ_a = τ_2 and we encounter complications similar to those seen with the loop in the previous section. Performing the first integral in Eqn. (32) with τ_a = τ_2 leaves an expression in which S(τ_2) is another polynomial. Now, if one restricts to Σ_{i=1}^{N/2} k_i → Σ_{j=N/2+1}^{N} k_j before performing these integrals, one will encounter divergences. These divergences are spurious, as discussed above. Properly including the vacuum specification prevents the limit from vanishing: once the vacuum information is included, the would-be vanishing exponent retains a piece proportional to the difference of the regulators, exactly as in §III. Adding all the terms together (including those with the integration order reversed, which we have not written down) renders the result finite and order-independent in the limit ε, ε′ → 0. Overall momentum conservation, δ³(k_1 + ⋯ + k_N), does not by itself force these two sums to coincide. Consequently, problems only arise if we render the external momenta indistinguishable before performing the integrals. For the special case N = 2, overall momentum conservation already requires that the external momenta be identical, so this problem shows itself immediately when computing simple loop corrections to the propagator. Employing an explicit vacuum specification at the outset amounts to working off-shell by an infinitesimal amount, and thus ensures that the calculation is divergence-free.
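The practical payoff of the τ_a = τ* route, namely that the double time integral collapses to the product of a single integral and its complex conjugate, can be checked numerically. Below is a sketch with a stand-in integrand of our own choosing (oscillatory, polynomially dressed, and damped in the far past by the contour choice), not the actual vertex integrand:

    # When the two vertex integrands are complex conjugates of one another,
    # the double integral over the full square equals |single integral|^2,
    # so only one integration is ever performed.
    import numpy as np
    from scipy.integrate import quad, dblquad

    k, eps = 1.5, 0.25                   # momentum scale and contour regulator

    def g(tau):
        # damped in the far past by the rotation tau -> tau(1 - i*eps)
        return (1.0 + 1j * k * tau) * np.exp(-1j * k * tau + k * eps * tau)

    a = -40.0                            # effective -infinity after damping

    re, _ = quad(lambda t: g(t).real, a, 0.0, limit=400)
    im, _ = quad(lambda t: g(t).imag, a, 0.0, limit=400)
    single = re + 1j * im

    # brute force: integrate g(t1) * conj(g(t2)) over the full square
    # (the imaginary part integrates to zero by symmetry)
    double, _ = dblquad(lambda t2, t1: (g(t1) * np.conj(g(t2))).real,
                        a, 0.0, lambda t1: a, lambda t1: 0.0)

    print(abs(single) ** 2, double)      # the two agree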
This problem can arise in any multi-vertex diagram and is not restricted to loop graphs; to see this, simply set p = 1 in our general topology, reducing it to an N-point, 2-vertex tree diagram. V. CONCLUSIONS We have explored the role of initial state selection in the operator formulation of the "in-in" approach to quantum field theory, demonstrating that it requires careful handling at second order and above. The issues arise from the two different time contours which select the vacua |in⟩ and ⟨in| at early times; similar issues are thus not encountered in conventional in-out computations. To perform calculations, one uses the time evolution operator U(τ, τ_0) to evolve the "in" state forward to the time τ* at which the correlation is to be computed. We show that many complications are avoided by keeping the initial state prescription explicit throughout the computation. Moreover, avoiding expansions which unnecessarily split terms into pieces that are individually unphysical simplifies many calculations and sidesteps these vacuum issues entirely. This issue was first encountered in 1-loop corrections to the scalar propagator [23], and we again consider a 1-loop calculation here. We calculate the correction to the primordial gravitational wave spectrum produced by scalar bremsstrahlung during slow roll inflation. As one would expect, this is tiny, of order εP_γγ(k)P_ζζ(k) ln(k), where P_ζζ(k) and P_γγ(k) are the (tree level) spectra of primordial curvature perturbations and gravitational waves respectively, and ε is the usual slow roll parameter. We show that one can avoid (potential) spurious divergences and messy algebra by working only with the physical terms arising from Eqn. (1), rather than with the expansion in Eqn. (7), which subdivides some terms into pieces that, considered alone, are unphysical. We then show that one can encounter similar problems with these terms in any diagram with two or more vertices, including tree-level expressions. We demonstrate this by considering the N-point function generated by a diagram with 2 vertices and p ≥ 1 internal lines. Here, spurious divergences are not ubiquitous, but appear if one specializes to highly symmetric momentum configurations before performing the time integrations. This problem is avoided if the vacuum selection is explicitly and consistently imposed throughout the calculation, even in these specialized limits. In addition, eschewing the seemingly convenient splits simplifies the resulting algebra, as a double integral is replaced with the product of a single integral and its complex conjugate. Finally, while our analysis involves only 2-vertex interactions, Eqn. (7) splits up terms at every order, so this issue can arise in any multi-vertex diagram. None of these divergences is physical, and in all cases the problem can be ameliorated by imposing the contour choice at the beginning of the calculation. While it may seem that many higher order diagrams in cosmological perturbation theory are of purely academic interest, as they relate to intrinsically minuscule effects (for an example where higher order effects can be important, see [41]), as the quality of astrophysical data and the theoretical sophistication of very early universe models increase, non-leading contributions to cosmological perturbations will become increasingly important.
2009-08-17T21:41:54.000Z
2009-04-27T00:00:00.000
{ "year": 2009, "sha1": "7ce615d86174aecdab27e02c66087cb4402db0e9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7ce615d86174aecdab27e02c66087cb4402db0e9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
81056306
pes2o/s2orc
v3-fos-license
Prevalence of childhood obesity and its determinants among rural children of B G Nagara Introduction: India is witnessing undernutrition as one of its major public health problems; however, there exists a significant magnitude of overweight and obesity among children and adolescents. Childhood obesity has emerged as one of the global health problems as a result of high intake of calorie-rich foods and low levels of physical activity. Most studies have been conducted in metropolitan cities, with limited information on the prevalence of childhood obesity in rural areas. Aim and Objective: To determine the prevalence of obesity among school children in the age group of 6-12 years in rural Mandya. Materials and Methods: A cross-sectional survey was conducted among 1328 students in the age group 6-12 years. Body mass index (BMI)-for-age charts were used to assess weight in relation to height, as recommended by the Centers for Disease Control and Prevention. The 85th-95th percentile and ≥95th percentile, specific for age and sex, were used to define overweight and obesity, respectively, in children. Results and Conclusion: Data from 1288 students were included for analysis, with an overall response rate of 96.98%. Of the 1288 participants, 777 (60.3%) were boys and 511 (39.7%) were girls. The prevalence of underweight, overweight and obesity was 54.9% (n=707), 1.8% (n=24) and about 1% (n=12), respectively. The prevalence was higher towards pubertal age (10-12 years) than in early childhood. Childhood obesity was not associated with parental obesity or junk food consumption in children. To conclude, even though the prevalence of childhood overweight/obesity was about 2.8%, it necessitates immediate action, as it is accompanied by many non-communicable diseases such as diabetes, hypertension and cardiovascular disease. Introduction Even though India is witnessing undernutrition as one of its major public health problems, there exists a significant magnitude of overweight and obesity among children and adolescents. Childhood obesity has emerged as one of the global health problems 1 as a result of high intake of calorie-rich foods, low levels of physical activity 2 and the effects of genetic and environmental factors. The dramatic rise in the prevalence 3,4 of overnutrition resulting in overweight and obesity among children has been accompanied by an alarming increase in non-communicable diseases such as diabetes mellitus, 5,6 hypertension 7 and cardiovascular disease. 8 Most of the reports addressing childhood obesity are from studies conducted in metropolitan cities, with limited information on the prevalence of childhood obesity in rural areas. Hence the present study was designed to determine the prevalence of overweight and obesity among school children of a rural area. Materials and Methods A cross-sectional survey in a rural school was carried out among school children aged 6 to 12 years. A total of 1328 students were included in the study. Students who were absent on the day of the visit, were suffering from chronic illness, whose exact date of birth was not available, or who were not willing to participate were excluded. The school authorities were informed well in advance and provided with all the information about the study. Consent was obtained after explaining the purpose of the study in the local language. The study was conducted in accordance with the principles of the institutional ethics committee, and ethical clearance was obtained. The questionnaire was framed in simple language so that the participants could answer it easily.
It elicited information regarding demographic details, parents' education level, number of siblings, diet pattern, physical activity, television viewing time, use of electronic gadgets, consumption of junk foods and family history of non-communicable diseases such as obesity, diabetes and hypertension. A digital weighing scale and a stadiometer were used to measure weight (to the nearest 0.1 kg) and height (to the nearest 0.5 cm), respectively, with each study participant standing barefoot, arms hanging by the sides and heels together. Body mass index (BMI)-for-age charts were used to assess weight in relation to height, as recommended by the Centers for Disease Control and Prevention. The 85th-95th percentile and ≥95th percentile, specific for age and sex, were used to define overweight and obesity. 9 Statistical Analysis The data were coded and managed in an Excel spreadsheet. All entries were double-checked for any possible transcription errors. The variables were expressed as frequency, percentage, and percentile. The study participants were categorized as normal, overweight or obese depending on the BMI percentile cutoff values. 9 The chi-square test was employed to check the association between categorical variables. A p value less than 0.05 was considered statistically significant. Results and Discussion A total of 1328 students were invited to participate in the study, and data from 1288 students were included for analysis. The overall response rate was 96.98% (n=1288). The 40 non-responders included children absent on the day of the visit, children not willing to participate, children with chronic illness, and children whose exact date of birth was not available. Of the 1288 participants, 60.3% (n=777) were boys and 39.7% (n=511) were girls; the parents of 72 (5.6%) children were illiterate and 1019 (79.1%) were graduates. The majority of the students (61.1%) belonged to a nuclear type of family. Among the 1288 children, 84.9% were on a mixed type of diet, while 8.6% and 17.84% of students consumed carbonated drinks and bakery/fast foods daily, respectively. The prevalence of underweight, overweight and obesity was 54.9% (n=707), 1.8% (n=24) and about 1% (n=12), respectively. 61.1% of obese children were in the pubertal age group (10-12 years). More than half of the obese children's parents (64%) were obese, and 94% of obese children indulged in the consumption of junk foods, but neither parental obesity nor consumption of junk foods was associated with childhood obesity. Developing countries like India are still addressing undernutrition; however, in recent years childhood obesity has emerged as a problem of pandemic proportions. There exists a link between childhood obesity and many non-communicable diseases such as diabetes, cardiovascular diseases, hypertension, etc. The present cross-sectional survey conducted among school children of a rural area showed a prevalence of overweight and obesity of 1.8% and 1%, respectively. Various studies conducted in different parts of India have reported varied prevalences of overweight and obesity among school children: Preetham Mahajan et al, 10 Premanath M et al 11 and Sen Suchithra et al 12 reported 2.12%, 3.4% and 13% prevalence of obesity, respectively. A study by Babitha Rexlin G et al 13 reported a prevalence of 4.3% in rural school children and 12% among school children of an urban area, concluding that place of residence and socioeconomic status have a strong association with obesity.
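The classification step described in the methods reduces to a percentile lookup followed by threshold comparisons. A minimal sketch follows; the CDC charts are age- and sex-specific lookup tables, for which the percentile value below is an illustrative stand-in, and the <5th-percentile underweight cutoff is the conventional one, which the paper does not state explicitly:

    # Sketch of the CDC BMI-for-age classification used in this study.
    # `pct` would come from the age- and sex-specific CDC percentile chart;
    # the value used below is illustrative, not real chart data.
    def bmi(weight_kg, height_cm):
        h = height_cm / 100.0
        return weight_kg / (h * h)

    def classify(pct):
        if pct < 5:             # conventional underweight cutoff
            return "underweight"
        if pct < 85:
            return "normal"
        if pct < 95:            # 85th-95th percentile
            return "overweight"
        return "obese"          # >= 95th percentile

    print(round(bmi(32.0, 135.0), 1))   # 17.6 kg/m^2 for a hypothetical child
    print(classify(88))                 # overweight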
The easy availability of junk foods and ready-to-use premixes in the markets and homes of children belonging to urban areas and high socioeconomic status may be the reason for the increased prevalence there. 12 In this study, the prevalence of obesity was significantly higher in children belonging to the 10-12 years age group than in those aged 6-9 years (χ² = 24.38, p < 0.00001). This higher prevalence could be due to the hormonal effects seen at pubertal age. The study also documented a higher prevalence of obesity in males than in females, but the difference was not statistically significant. These results are in accordance with other results reported from Punjab 14 and Delhi. 15 However, the study by Babitha Rexlin G et al 13 compared obesity status between genders and concluded that gender has no relation to obesity. Analysis of the association between childhood obesity and obesity in parents revealed no significant influence. This result is in accordance with the results reported by Babitha Rexlin G et al. 13 Childhood obesity is a stronger predictor of obesity during adult life than parental obesity. Even though about 95% of the obese children in the present study indulged in the consumption of junk foods, obesity was not associated with it. The reason could be the low frequency of junk food consumption in these children, together with their involvement in regular physical activity of not less than half an hour every day. Children belonging to high socioeconomic groups and urban areas have a high prevalence of childhood obesity, 13 which highlights the possible influence of lifestyle, mainly dietary pattern and physical activity. 15 Limitations This is a cross-sectional survey, and all data except the anthropometric measurements are self-reported responses based on recall. Conclusion The prevalence of underweight was high (59.4%) and that of childhood obesity low (1%) among the study participants. This prevalence rate of childhood obesity is not at the alarming rates seen in other reports from the southern parts of India. However, it necessitates appropriate action, as childhood obesity is accompanied by many non-communicable diseases such as diabetes, hypertension and cardiovascular disease.
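The age-group comparison reported above (χ² = 24.38, p < 0.00001) is a standard chi-square test on a 2×2 contingency table. A sketch of the computation on hypothetical counts, since the paper reports the test statistic but not the underlying table:

    # Chi-square test of obesity vs age group on a hypothetical 2x2 table;
    # the counts below are illustrative, not the study's actual data.
    from scipy.stats import chi2_contingency

    #                obese  not obese
    table = [[11, 600],    # 10-12 years (hypothetical)
             [ 1, 676]]    # 6-9 years   (hypothetical)

    chi2, p, dof, expected = chi2_contingency(table)
    print(round(chi2, 2), dof, p)       # p << 0.05: association with age group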
2019-03-18T14:05:21.790Z
2018-11-15T00:00:00.000
{ "year": 2020, "sha1": "1947f41b1d2c2c0633c6fc7c3957609851d033a6", "oa_license": "CCBYNCSA", "oa_url": "https://www.ijcbr.in/journal-article-file/7752", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "830ed071cf91ffb7a83b3f8cc5c1ee8d657513b8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6831664
pes2o/s2orc
v3-fos-license
Weyl groups of some hyperbolic Kac-Moody algebras We use the theory of Clifford algebras and Vahlen groups to study Weyl groups of hyperbolic Kac-Moody algebras T_n^{++}, obtained by a process of double extension from a Cartan matrix of finite type T_n, whose corresponding generalized Cartan matrices are symmetric. Introduction In [11], Feingold and Frenkel gained significant new insight into the structure of a particularly interesting rank 3 hyperbolic Kac-Moody algebra which they called F (also known as A_1^{++}), along with some connections to the theory of Siegel modular forms of genus 2. The first vital step in their work was the discovery that the even part of the Weyl group of that Kac-Moody algebra is SW(F) ≅ PSL(2, Z). (If W is a Weyl group, we will denote its even part by SW.) In [12], a coherent picture of Weyl groups was presented for many higher rank hyperbolic Kac-Moody algebras using lattices and subrings of the four normed division algebras. Specifically, the Weyl groups of all hyperbolic algebras of ranks 4, 6 and 10 which can be obtained by a process of double extension admit realizations in terms of generalized modular groups over the complex numbers C, the quaternions H, and the octonions O, respectively. In particular, the authors found in the rank 4 situation that the even part of the Weyl group of the Kac-Moody algebra A_2^{++} is the Bianchi group PSL(2, O_{-3}), where O_{-3} is the ring of integers of Q(√-3). One could ask if there is a similar phenomenon for all the hyperbolic Kac-Moody algebras T_n^{++}, where T_n is any finite type root system, but it is not clear what to take instead of a normed division algebra. In [14], the authors used the quaternions and the octonions in their study of some Weyl groups. In this paper, we adopt another approach. We use the theory of Vahlen groups and Clifford algebras in order to study the Weyl groups of the hyperbolic Kac-Moody algebras T_n^{++} whose Cartan matrices are symmetric. A key ingredient needed to obtain our main result in Theorem 6.5 is Corollary 5.10 of [13], which only applies to that class of Cartan matrices. We plan to study more general cases where the Cartan matrices are Lorentzian, and believe that our methods will yield interesting results with connections to number theory. Our paper is organized as follows. In §2, we remind the reader about generalities on orthogonal geometries. Then, we gather some results on Clifford algebras and Pin and Spin groups in §3. Section §4 introduces Vahlen groups. In the literature, Vahlen groups have been defined for the paravector case as well as for the non-paravector case. In §4, we place ourselves in the non-paravector case, whereas the paravector situation is treated in §7.1 and §7.2. Section §5 contains a useful, though very simple, introduction to generalized Cartan matrices, systems of simple roots and Weyl groups. The core of this paper is contained in §6, where we give a description of several Weyl groups. Finally, we explain in §7 the connections between our approach and the one adopted previously in [12]. Generalities on orthogonal geometries Throughout this paper, F will denote a field with characteristic different from 2. In fact, all the fields considered in this paper have characteristic zero. The results of this section are well known and we will not repeat the proofs. We refer the reader to [6] and [9]. Let V be a finite dimensional F-vector space of dimension n.
If V is equipped with a symmetric F -bilinear form S, then we say that (V, S) is an orthogonal geometry. If S is clear from the context, we will refer to an orthogonal geometry just by V . Instead of working with the symmetric F -bilinear form S, one can work with the associated quadratic form given by q(v) = S(v, v) for all v ∈ V . A pair (V, q), where V is a finite dimensional vector space over F and q is a quadratic form on V is called a quadratic space. We have a one-to-one correspondence between symmetric F -bilinear forms S and quadratic forms q. Given a quadratic form q, one recovers S via the formula Given an orthogonal geometry V , the radical of V , denoted by Rad(V ), is defined as usual, i.e. it is the kernel of the linear transformation V −→ V * defined by v → S(v, ·). Definition 2.1. Let V be an orthogonal geometry. Then V is called non-singular if Rad(V ) = 0 and isotropic if Rad(V ) = V . A vector v ∈ V is called isotropic if q(v) = 0, otherwise non-isotropic. An orthogonal geometry V is isotropic if and only if every vector v ∈ V is isotropic. Let (V 1 , S 1 ) and (V 2 , S 2 ) be two orthogonal geometries. A linear transformation f : An orthogonal map f : V 1 −→ V 2 is called an isometry if there exists an orthogonal map g : V 2 −→ V 1 satisfying f •g = id V2 and g • f = id V1 . Note that if f : V 1 −→ V 2 is a bijective orthogonal map, then it is an isometry. More generally, an F -linear transformation f : The constant λ ∈ F × is called the factor of similitude of f . It is simple to check that if f : V 1 −→ V 2 is an orthogonal similitude between two orthogonal geometries and V 1 is non-singular, then f is necessarily injective. The set of isometries of an orthogonal geometry V into itself forms a subgroup of the general linear group GL(V ) which is denoted by O(V, S) or O(V ) if S is understood from the context. Moreover, we let GO(V ) be the group of orthogonal similitudes, that is Note that the map λ : GO(V ) −→ F × is a group morphism and O(V ) = ker(λ). If V is an orthogonal geometry over F with symmetric F -bilinear form S and λ ∈ F × , then we let V λ be the orthogonal geometry obtained from V by rescaling the symmetric F -bilinear form S by a factor λ. If V is non-singular, then it is simple to check that any σ ∈ O(V ) satisfies det(σ) = ±1. The ones satisfying det(σ) = 1 are called rotations and they form a subgroup of O(V ) which is denoted by SO(V, S) or more simply by SO(V ). The following result is well known. Proposition 2.2. If V is an orthogonal geometry, then V = L 1 ⊥ . . . ⊥ L r , where L i = Span(v i ) are lines. Moreover, V is non-singular if and only if v i is a non-isotropic vector for all i = 1, . . . , r. The set {v 1 , . . . , v r } of the last proposition is called an orthogonal basis. We will now recall the definition of some important isometries in O(V ). An isometry σ ∈ O(V ) is called an involution if σ 2 = 1. If σ is an involution, then we let (Recall that we are staying away from characteristic 2.) It is then a simple matter to show that V = U ⊥ W and σ = −id U ⊥ id W . The dimension of U is called the type of σ. An involution of type 1 is called a symmetry with respect to the hyperplane W or less precisely an hyperplane reflection or even more simply a reflection. If σ = −id L ⊥ id H is an hyperplane reflection and v ∈ L is a non-zero vector, then v is a nonisotropic vector, since L is non-singular. 
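These notions are easy to experiment with numerically. The following minimal sketch, with a Gram matrix and test vectors of our own choosing, implements the map w ↦ w − (2S(v, w)/S(v, v))v determined by a non-isotropic vector v, anticipating the explicit formula for r_v given just below, and verifies that it is an involution and an isometry fixing the hyperplane orthogonal to v:

    # Sketch of a hyperplane reflection for a non-singular symmetric bilinear
    # form (Gram matrix and test vectors are our own choices).
    import numpy as np

    G = np.diag([1.0, 1.0, -1.0])            # a form of signature (2, 1)
    S = lambda x, y: x @ G @ y

    def reflect(v, w):
        return w - 2.0 * S(v, w) / S(v, v) * v

    v = np.array([1.0, 2.0, 1.0])            # non-isotropic: S(v, v) = 4
    u = np.array([1.0, 0.0, 1.0])            # orthogonal to v: S(v, u) = 0
    w = np.array([0.5, -1.0, 3.0])

    print(np.allclose(reflect(v, v), -v))                          # r_v(v) = -v
    print(np.allclose(reflect(v, u), u))                           # fixes v-perp
    print(np.allclose(reflect(v, reflect(v, w)), w))               # involution
    print(np.isclose(S(reflect(v, w), reflect(v, w)), S(w, w)))    # isometry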
On the other hand, if we start with a non-isotropic vector v ∈ V , then it is simple to check that r v ∈ O(V ) given by whenever w ∈ V , is a symmetry with respect to the hyperplane L ⊥ where L = Span(v). Conversely, every hyperplane reflection −id L ⊥ id H is of the form r v for some non-isotropic vector v ∈ L. Theorem 2.4 below is fundamental, but we first need the following lemma whose proof is left to the reader. Lemma 2.3. Let V be an orthogonal geometry and let v, w ∈ V . If q(v) = q(w) = 0, then either In case (1), we have r v−w (v) = w, and in case (2), we have r w • r v+w (v) = w. We can now show: Theorem 2.4. Let V be a non-singular orthogonal geometry. Then, every σ ∈ O(V ) is a product of hyperplane reflections. Proof. Let σ ∈ O(V ). By Proposition 2.2, we know that V = F e 1 ⊥ . . . ⊥ F e n for some non-isotropic vectors e i ∈ V . Define ψ i ∈ O(V ) inductively as follows: One checks using Lemma 2.3 that ψ i ψ i−1 . . . ψ 1 σ(e j ) = e j for all j = 1, . . . , i. It follows that σ is a product of hyperplane reflections, and this is what we wanted to show. Assume now that F is an ordered field, so that we can talk about the signature of an orthogonal geometry. If (V, S V ) and (W, S W ) are two non-singular orthogonal geometries having the same signature, then they are not necessarily isometric in general. But if every positive element of F is a square in F (for instance, if F = R), then the possible signatures (r, s) are in bijection with the isometry classes of non-singular orthogonal geometries. In other words, if two non-singular orthogonal geometries (V, S V ) and (W, S W ) have the same signature, then they are isometric. Clifford algebras, Pin and Spin groups By an F -algebra A, we always mean a unital associative F -algebra and its unit element will be denoted by 1 A . By a morphism of F -algebras, we always mean a morphism of unital associative Falgebras. Throughout this section, V will stand for a non-singular orthogonal geometry. The symmetric F -bilinear form is denoted by S and the associated quadratic form by q. Definition 3.1. A universal Clifford algebra for the non-singular orthogonal geometry V is an Falgebra C with an injective F -linear map i : V ֒→ C satisfying and such that the following universal property holds true: Given any F -algebra A and a F -linear map We warn the reader that there is no consensus about the negative sign in (1). Moreover, via the monomorphism i : V ֒→ C, we will identify V as a linear subspace of C. We also identify F with F · 1 C . With these two identifications, the identity (1) becomes v 2 = −q(v). We also have for all v, w ∈ V which reduces to (1) when v = w. The existence of a universal Clifford algebra is standard and can be realized as a quotient of the tensor algebra T (V ). Let {e 1 , . . . , e n } be an orthogonal basis for V . Then, (1) implies for all i = 1, . . . , n. Since V is assumed to be non-singular, we have q(e i ) = 0, and therefore e i ∈ C × for all i = 1, . . . , n. Moreover, (2) implies whenever i = j. We let Ω = {(i 1 , . . . , i s ) | 1 ≤ s ≤ n and 1 ≤ i 1 < . . . < i s ≤ n} ∪ {∅}. Clearly, |Ω| = 2 n . Given I = (i 1 , . . . , i s ) ∈ Ω, we set e I = e i1 · . . . · e is ∈ C × and we agree that e ∅ = 1 C . It is well-known that {e I | I ∈ Ω} is a basis for C considered as an F -vector space. Therefore, dim F (C) = 2 n . As a consequence of the universal property satisfied by a universal Clifford algebra, we have the following result. Theorem 3.2. 
Let V 1 and V 2 be two non-singular orthogonal geometries and let f : V 1 −→ V 2 be an orthogonal map. If C i is a universal Clifford algebra for V i for i = 1, 2, then there is a unique F -algebra morphismf : C 1 −→ C 2 making the following diagram commutative: We point out in passing that both f andf are necessarily injective, since V 1 is assumed to be non-singular. From Theorem 3.2 and a slight variation of it follow the existence of the principal involution and anti-involution. The principal involution on C will be denoted by x → x ′ . We remind the reader that it is the unique F -algebra automorphism on C satisfying v ′ = −v, whenever v ∈ V . The principal anti-involution on C will be denoted by x → x * . It is the unique F -algebra anti-automorphism of C satisfying v * = v, whenever v ∈ V . It is simple to check that the principal involution and antiinvolution commute, that is (x * ) ′ = (x ′ ) * . At last, we will make use of the Clifford conjugation defined for x ∈ C by x = (x * ) ′ . The Clifford conjugation is also an anti-involution and if v ∈ V , then v = −v. Given a universal Clifford algebra C for V , we let Note that V ⊆ C 1 and that C 0 is a F -subalgebra of C. Hence, C has a Z/2Z-grading. If {e 1 , . . . , e n } is an orthogonal basis for V , then e I ∈ C 0 if and only if |I| is even and e I ∈ C 1 if and only if |I| is odd. It is well-known that Z C (C 0 ) = F + F · e Σ , where Σ = (1, . . . , n). Moreover, the center of C is 3.1. The Clifford group As before, let V be a non-singular orthogonal geometry and let C = C(V ) be a universal Clifford algebra for V . We have an obvious action of C × on C given by conjugation, that is x * y = xyx −1 , whenever x ∈ C × and y ∈ C. The associated Clifford group is R(V ) = {x ∈ C × | x * V ⊆ V }. Given x ∈ R(V ), we get a χ(x) ∈ GL(V ) defined by χ(x)(v) = xvx −1 , whenever v ∈ V . In fact, it is simple to check that χ(x) ∈ O(V ) so that we get an F -linear representation χ : R(V ) −→ O(V ). This linear representation is surjective if dim(V ) is even, but if dim(V ) is odd, then χ(R(V )) ⊆ SO(V ). Moreover, the kernel of χ is different depending on the parity of dim(V ). This phenomena is due to the fact that if v ∈ V is a non-isotropic vector, then Since the landmark paper [7], it has been realized that it is nicer to work with a different action than the one given by conjugation. This will give us a different Clifford group, denoted by Γ(V ), and a representation ρ : Γ(V ) −→ O(V ) which will always be surjective and the kernel will be the same independently of the parity of dim(V ). The key point is that instead of (6), we will have ρ(v) = r v , whenever v is a non-isotropic vector of V . The action of Atiyah, Bott and Shapiro of C × on C is given by x * y = xy(x ′ ) −1 , whenever x ∈ C × and y ∈ C. The Clifford group Γ(V ) is given by Hence Γ(V ) acts on V and it is simple to check that the action of Γ(V ) on V is F -linear. We then get an F -linear representation Γ(V ) −→ GL(V ). Note that if x ∈ Γ(V ) and v ∈ V , then by definition xv(x ′ ) −1 ∈ V . Applying the principal involution gives x ′ vx −1 = xv(x ′ ) −1 whenever x ∈ Γ(V ) and v ∈ V . It is then a simple calculation to show that ρ(Γ(V )) ⊆ O(V ); hence, we have an F -linear representation Moreover, it is simple to check that any non-isotropic vector is in Γ(V ), and given such a vector v, one has ρ(v) = r v . It follows from Theorem 2.4 that the representation (7) is surjective. Proposition 3.3. With the notation as above, we have ker(ρ) = F × . 
Therefore, we have a short exact sequence This means that x 0 ∈ Z(C) ∩ C 0 . Using (5), one concludes that x 0 ∈ F × . On the other hand, a simple computation shows that if x ∈ C and xv = −vx for all v ∈ V , then x = 0 if n is odd and x = λe Σ for some λ ∈ F if n is even. Since in our situation x 1 ∈ C 1 , we necessarily have x 1 = 0. This completes the proof. Proof. This follows from Corollary 3.4. Abstract Pin and Spin groups Pin and Spin groups will appear in various disguises throughout this paper, and it is convenient to give their axiomatic definitions. Pin groups Suppose we are given: 1. A group G and a non-singular orthogonal geometry V over F . A short exact sequence of groups Then, we get a commutative diagram Definition 3.7. One defines the group P in + (ρ, N ) = ker(N ). Since the map f is surjective, the snake lemma gives the following diagram whose rows are exact. This last diagram induces in turns the following important exact sequence: The group morphism ϑ : O(V ) −→ F × /F ×2 is called the spinor norm morphism. Spin groups A similar theory can be developed for the group SO(V ) instead of O(V ). Suppose we are given: 1. A group G and a non-singular orthogonal geometry V over F . A short exact sequence of groups If one defines the group Spin + (ρ, N ) = ker(N ), then we have the important exact sequence and the group morphism ϑ : SO(V ) −→ F × /F ×2 is also called the spinor norm. Pin and Spin groups Let V be a non-singular orthogonal geometry and let C be a universal Clifford algebra for V . As in the previous sections, we let Γ(V ) be the Clifford group. If x ∈ Γ(V ), then xx ∈ F × . Indeed, it follows from Corollary 3.4 that any x ∈ Γ(V ) can be written as We are now in the setting of §3.2, and we let P in + (V ) = P in + (ρ, N ) and Spin + (V ) = Spin + (ρ 0 , N ). Definition 3.8. From now on, we let and The groups O + (V ) and SO + (V ) are sometimes called the spinorial kernels. 3.4. Lorentzian geometry over R Let V be the real vector space R m and let p, q ∈ Z ≥0 be such that p + q = m. The function where x = (x 1 , . . . , x m ) and y = (y 1 , . . . , y m ), is easily seen to be a symmetric R-bilinear form of signature (p, q). Moreover, the orthogonal geometry (V, S) is non-singular. As we pointed out is §2, these are the only non-singular orthogonal geometries over R (up to isometry). These orthogonal geometries will be denoted by R p,q . One of them will be particularly important for us. It is R n,1 which will be referred to as the Lorentzian geometry. For the reminder of this section, we let S denote the symmetric bilinear form (10) when p = n and q = 1. The corresponding quadratic form will be denoted by q. In this situation, we have O(R n,1 ) and if one chooses the standard basis (e 1 , . . . , e n ) for R n+1 which is an orthogonal basis for R n,1 , then we we have the corresponding matrix groups. If The reader will notice that we already defined a group O + (R n,1 ) in §3.3, but there are no ambiguities, since one can check that both groups are the same. Moreover, it is simple to check that if M = (m ij ) ∈ O(n, 1), then m n+1,n+1 ≥ 1 or m n+1,n+1 ≤ −1, and one has One also lets SO + (R n,1 ) = O + (R n,1 )∩SL(R n+1 ) which one can check is equal to the group SO + (R n,1 ) defined in §3.3. As explained in §3.3, we have the two 2-covers Change of fields Throughout this section, we let F be our fixed ground field, and we let E be a field extension of F . 
If V is an F -vector space, then we let V E = E ⊗ F V be the E-vector space obtained from V by extending the scalars to E. Given a K-vector space V , where K is a field, we let L(V ) denote the K-algebra of K-linear endomorphisms of V . The universal property satisfied by V E induces an injective morphism of F -algebras The image of an f ∈ L(V ) via this map will be denoted by f E which satisfies on pure tensors the formula f E (e ⊗ v) = e ⊗ f (v). The morphism (13) induces in turn an injective group morphism Now, if V is an orthogonal geometry over F , say with associated symmetric F -bilinear form S and quadratic form q, then V E becomes an orthogonal geometry over E whose symmetric E-bilinear form S E is given on pure tensors by If V is non-singular, then so is V E , and we now assume that. Given , and therefore the group morphism (14) induces an injective group morphism We now let C(V ) be a universal Clifford algebra for V , and C(V E ) will be one for 2 , and therefore, we get from the universal property satisfied by the Clifford algebra C(V ) a morphism of Moreover, the map ψ E behaves well with respect to the three involutions, namely for all x ∈ C(V ). Since ψ E behaves well in particular with respect to the principal involution, we have ψ E (Γ(V )) ⊆ Γ(V E ). Therefore, we get the following commutative diagram: where the vertical arrows are all injective. Now, since ψ E behaves well with respect to the Clifford conjugation, we obviously have Moreover, we get the following commutative diagram: where the first three vertical arrows are injective, but not necessarily the last one. One has a similar diagram for Spin + (V ). Vahlen groups In [20], Vahlen described the group of orientation preserving isometries of the n-dimensional hyperbolic space as the central quotient of a certain group of two by two matrices with entries in the Clifford algebra of R n−2,0 . These groups can be viewed as generalizations of both SL(2, R) and SL(2, C), since P SL(2, R) and P SL(2, C) are the groups of orientation preserving isometries of the 2-dimensional and 3-dimensional hyperbolic spaces respectively. Vahlen's work had been forgotten for a while until Maass used them in his fundamental paper [15]. These groups, now called Vahlen groups, were studied later by Ahlfors in [1], [2], [3], [4] and [5] in connection with the group of Möbius transformations M (R n ). In [10], it was shown how to define a Vahlen group for any non-singular orthogonal geometry over any field of characteristic different from 2, and not just over R as it had been done previously. Moreover, they showed that a Vahlen group is isomorphic to a certain spin group, and this systematically gave isomorphisms between classical groups in small dimensions, a subject which had been previously studied by van der Waerden and Dieudonné among others. In the literature, one can find the definition of Vahlen groups for the so-called paravectors and also for non-paravectors. In this section, we place ourselves in the latter situation, and we define Vahlen groups in the non-paravector situation for any non-singular orthogonal geometry over any field of characteristic different from 2. Our approach is via pin and spin groups. This might be known to the experts, but we have not found it in the literature, so we include these results here. In the recent preprint [16], McInroy describes Vahlen groups over commutative rings, not only fields, using an approach which is very similar to ours. 
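Since the Vahlen construction identifies C(W) with the 2×2 matrix algebra over C(V), it is useful to have a hands-on model of a universal Clifford algebra. The following toy implementation (our own, for a diagonal quadratic form of our choosing) multiplies the basis elements e_I of §3 using nothing but the defining relations e_i² = −q(e_i) and e_i e_j = −e_j e_i for i ≠ j:

    # Hand-rolled multiplication of basis elements e_I of the universal
    # Clifford algebra of a diagonal form, with the paper's sign convention
    # v^2 = -q(v). Basis elements are encoded as sorted index tuples; q is an
    # example of our own.
    q = {1: 1, 2: 1, 3: -1}            # an example diagonal quadratic form

    def mul(I, J):
        """Multiply e_I * e_J; returns (coefficient, index tuple)."""
        seq, coeff = list(I) + list(J), 1
        changed = True
        while changed:
            changed = False
            for k in range(len(seq) - 1):
                if seq[k] > seq[k + 1]:        # anticommute: e_j e_i = -e_i e_j
                    seq[k], seq[k + 1] = seq[k + 1], seq[k]
                    coeff, changed = -coeff, True
                elif seq[k] == seq[k + 1]:     # contract: e_i e_i = -q_i
                    coeff *= -q[seq[k]]
                    del seq[k:k + 2]
                    changed = True
                    break
        return coeff, tuple(seq)

    print(mul((1,), (1,)))         # e_1^2 = -q_1            -> (-1, ())
    print(mul((1,), (2,)))         # e_1 e_2                 -> (1, (1, 2))
    print(mul((2,), (1,)))         # e_2 e_1 = -e_1 e_2      -> (-1, (1, 2))
    print(mul((1, 2), (2, 3)))     # e_12 e_23 = -q_2 e_13   -> (-1, (1, 3))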
The paravector case and the relationship between the two setups will be explained in §7 below. Vahlen groups We start with a base field F of characteristic different from 2 and we recall the following important definition. Definition 4.1. A non-singular plane (meaning a two dimensional F -vector space) with an orthogonal geometry is called a hyperbolic plane if it contains a non-zero isotropic vector. The following lemma is simple and the proof is left to the reader. Lemma 4.2. Let P be a hyperbolic plane P and assume that f 1 ∈ P is a non-zero isotropic vector. Then, there exists a unique non-zero isotropic vector f 2 satisfying We will call such a pair (f 1 , f 2 ) a hyperbolic pair. Note that a hyperbolic pair (f 1 , f 2 ) is necessarily a basis for the hyperbolic plane P and that the isotropic vectors in P consist precisely of Span(f 1 ) ∪ Span(f 2 ). We remark as well that two hyperbolic planes are necessarily isometric. Let (V, S 1 ) be a non-singular orthogonal geometry (with associated quadratic form q 1 ) and let (P, S 2 ) be a hyperbolic plane (with associated quadratic form q 2 ). For the remainder of §4.1, we set and we fix a hyperbolic pair (f 1 , f 2 ) for P . Note that W is also a non-singular orthogonal geometry. We let S denote its symmetric F -bilinear form, and we let q denote the corresponding quadratic form. Every w ∈ W can be uniquely written as Following [19], we define a map φ : The map φ is clearly F -linear, and a simple computation shows that is the unit element of the F -algebra M 2 (C(V )). By the universal property satisfied by universal Clifford algebras, we get an F -algebra morphism which we denote by the same symbol φ. It is simple to check that φ is surjective, and since C(W ) and M 2 (C(V )) have the same dimensions as F -vector spaces, the morphism φ is an isomorphism. The map φ being an isomorphism of F -algebras, we have in particular φ(C(W ) × ) = M 2 (C(V )) × . We can now define the notion of Vahlen groups. Definition 4.3. We define the following subgroups of M 2 (C(V )) × . We warn the reader that this notation is our own and we have not seen it in the literature. Our goal now is to give a more explicit description of Vahlen groups. The three (anti) involutions ′ , * ,¯of C(W ) correspond to some (anti) involutions of M 2 (C(V )) which we will denote by α, β, γ respectively. Our next task is to find formulas for α, β and γ. Given we set Proof. To check the first equation we just have to show the following three properties: To check the second equation we just have to show the following three properties: We leave these simple computations to the reader. The third equation follows from the first two. The following two results are now easy to check. Then, be such that We will now give a characterization of the A ∈ M 2 (C(V )) which lie in V(V ). Then A ∈ V(V ) if and only if the following conditions are satisfied: Moreover A ∈ V 0 (V ) if and only if (1) through (7) are satisfied as well as: Hence, (1), (2) and (3) are satisfied. Moreover, Since A ∈ V(V ), we necessarily have A · B · α(A) −1 ∈ φ(W ) for all B ∈ φ(W ). Expanding this out in terms of the entries of the matrix A gives (4), (5), (6) and (7). The details are left to the reader. From now on, we let H 2 (V ) = φ(W ), in other words The F -vector space H 2 (V ) has the structure of a non-singular orthogonal geometry coming from W via the isomorphism φ whose quadratic form Q is given by Q(X) = vv − λ 1 λ 2 . We will denote the symmetric F -bilinear form on H 2 (V ) by S. 
Also, given whenever A ∈ V(V ) and X ∈ H 2 (V ). We get a representation Since φ restricted to W gives us an isometry W ≃ −→ H 2 (V ), we get an isomorphism of groups Theorem 4.8. With the notation as above, we have the following commutative diagram where the two vertical maps are isomorphisms of groups and the rows are exact. Proof. The proof is simple and left to the reader. Similarly, we have the following commutative diagram: whose vertical arrows are isomorphisms, and where η 0 is the restriction of η to V 0 (V ). One can define a spinor norm for Vahlen groups as follows. We let N : We are now in the setting of §3.2, and we let It is simple to check that V + (V ) = φ(P in + (W )) and SV + (V ) = φ(Spin + (W )). In terms of matrices, we have the following result: Then, if and only if all conditions of Theorem 4.7 are satisfied with (1) is replaced by: if moreover the following condition is satisfied: Proof. This should be clear using Lemma 4.5 and the formula for A · γ(A) (which gives the spinor norm for matrices). Change of fields In this section, we let F be a field of characteristic zero as before, and we let E be a field extension of F . Let V be an orthogonal geometry over F , P a hyperbolic plane and set W = V ⊥ P as before. Then, the morphism ψ E : C(V ) ֒→ C(V E ) of §3.5 induces an obvious injective morphism of F -algebras: . This leads to the following commutative diagram: where the vertical arrows are all injective. We also have the following important commutative diagram: where only the far right vertical arrow is not injective. One has a similar diagram for SV + (V ). Generalized Cartan matrices, system of simple roots and Weyl groups Throughout this section, F will be a field of characteristic zero; hence, in particular Q ⊆ F . Most of the proofs of this section will be omited, since they are standard or can be easily provided. We remind the reader of the following definitions (see for instance page 1 of [13]): is called a (generalized) Cartan matrix if it satisfies the following conditions: 1. c ii = 2 for all i = 1, . . . , n, 2. c ij ∈ Z ≤0 for all i, j satisfying i = j, 3. c ij = 0 if and only if c ji = 0. A Cartan matrix C is called non-singular if it has full rank. In order to study Weyl groups, we first introduce the notion of a system of simple roots. Throughout this section, V will be a finite dimensional orthogonal geometry over F with a symmetric bilinear form S. If v ∈ V is a non-isotropic vector and w ∈ W is arbitrary, then we defined the Cartan bracket v, w as usual by the formula v, w = 2S(v, w) S(v, v) . Definition 5.2. A system of simple roots in V consists of a basis Π = {α 1 , . . . , α n } of V such that 1. The elements of Π are non-isotropic vectors, 2. The matrix ( α i , α j ) is a Cartan matrix. A system of simple roots (V, Π) is called non-singular if V is a non-singular orthogonal geometry. A system of simple roots will typically be denoted by (V, Π). Given any symmetrizable Cartan matrix C, there always exists a system of simple roots (V, Π) with associated Cartan matrix C. Indeed, let V be a vector space of dimension n over F , and let Π = {α 1 , . . . , α n } be any basis for V . Since C is assumed to be symmetrizable, we can write where D = diag(ε 1 , . . . , ε n ) is a diagonal matrix, B a symmetric matrix and det(D) = 0. One can define a symmetric F -bilinear form κ on V via the formula κ(α i , α j ) = b ij . We then have an orthogonal geometry on V . Now, one has κ(α i , α i ) = 2ε i and κ(α i , α j ) = ε i c ij . 
It follows that the entries of the Cartan matrix C satisfy Therefore, (V, Π) is a system of simple roots with associated Cartan matrix C. Definition 5.5. A Cartan matrix C = (c ij ) is called reducible if there exists a permutation τ ∈ S n such that (c τ (i)τ (j) ) is block diagonal with more than one block. Otherwise, it is called irreducible. Note that if (V, Π) is a system of simple roots and Π 1 ⊆ Π is a non-empty subset, then (W, Π 1 ), where W = Span(Π 1 ) is also a system of simple roots. It is clear that a system (V, Π) of simple roots is irreducible if and only if the corresponding Cartan matrix is irreducible. Proposition 5.7. Given any system of simple roots (V, Π), one can find non-empty subsets Π 1 , . . . , Π s ⊆ Π satisfying: Because of this last proposition, one can focus on irreducible Cartan matrices or equivalently on irreducible systems of simple roots. Lemma 5.8. If (V, Π) is an irreducible system of simple roots and F a totally ordered field, then all the S(α i , α i ) have the same sign. Now, we introduce the notion of morphism between systems of simple roots. Definition 5.9. A morphism between two systems of simple roots (V 1 , is a morphism of systems of simple roots and (V 1 , Π 1 ) is irreducible, then φ is an orthogonal similitude. Lemma 5.11. Let C be an irreducible symmetrizable Cartan matrix and assume that we are given two different symmetrizations D 1 · C = B 1 and D 2 · C = B 2 , where the D i are non-singular and diagonal, and the B i are symmetric matrices. Then, there exists λ ∈ F × such that λD 1 = D 2 . If φ : (V 1 , Π 1 ) −→ (V 2 , Π 2 ) is an isomorphism of irreducible systems of simple roots, then it is simple to check that φ induces an isomorphism of groupsφ : is an isomorphism of irreducible systems of simple roots, then given any non-isotropic vector v 1 ∈ V 1 , the vector φ(v 1 ) is also non-isotropic, and moreover φ(r v1 ) = r φ(v1) . As a corollary, we obtain is an isomorphism of irreducible systems of simple roots, then the group isomorphismφ of above induces an isomorphism W(Π 1 ) In Chapter 4 of [13], Kac classifies the irreducible Cartan matrices in three distinct classes: finite type, affine type, and indefinite type. They correspond to finite-dimensional simple Lie algebras, affine Kac-Moody algebras and indefinite Kac-Moody algebras respectively. The finite and affine type ones have been classified, and one can find their corresponding Dynkin diagrams in Chapter 4 of [13]. It is known that all finite and affine Cartan matrices are symmetrizable. Among the indefinite Kac-Moody algebras, the so-called hyperbolic ones have been the most extensively studied. The hyperbolic Cartan matrices of rank greater than or equal to 3 and their corresponding Dynkin diagrams have also been classified. It is known that there are no hyperbolic Cartan matrices of rank strictly greater than 10. See for instance [8]. For the purpose of this paper, we also introduce the following non-standard terminology. If C is an irreducible non-singular symmetrizable Cartan matrix, then one can write D · C = B for some non-singular matrices D, B ∈ M n (R) such that D is diagonal and B is symmetric with signature (p, q). Then, the number ι(C) := |p − q| does not depend on the choice of the decomposition D · C = B by Lemma 5.11. Definition 5.16. An irreducible non-singular symmetrizable Cartan matrix is called Lorentzian if ι(C) = 1. We warn the reader that this is our own terminology, and the word Lorentzian can mean something else in the literature. 
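The definitions of this section are easy to mechanize. A small sketch (using the type B_2 matrix, our own choice of example) that checks the axioms of Definition 5.1 and computes a symmetrization D·C = B, normalized as in §5.1 below to have positive integer entries with gcd 1; the eigenvalues of B then give ι(C):

    # Check the generalized Cartan matrix axioms and symmetrize D*C = B for
    # the (non-symmetric) type B_2 matrix, our own example; then read off
    # iota(C) = |p - q| from the signature of B.
    import numpy as np
    from fractions import Fraction
    from math import gcd

    C = [[2, -2],
         [-1, 2]]

    def is_gcm(C):
        n = len(C)
        return (all(C[i][i] == 2 for i in range(n))
                and all(C[i][j] <= 0 for i in range(n) for j in range(n) if i != j)
                and all((C[i][j] == 0) == (C[j][i] == 0)
                        for i in range(n) for j in range(n)))

    print(is_gcm(C))                                   # True

    # eps_i * c_ij = eps_j * c_ji fixes D = diag(eps) up to scale
    eps = [Fraction(1), Fraction(C[0][1], C[1][0])]    # [1, 2]
    scale = np.lcm.reduce([f.denominator for f in eps])
    ints = [int(f * scale) for f in eps]
    eps = [e // gcd(*ints) for e in ints]

    B = np.diag(eps) @ np.array(C)
    print(B)                                           # [[2 -2] [-2 4]], symmetric

    evals = np.linalg.eigvalsh(B.astype(float))
    p, q = (evals > 0).sum(), (evals < 0).sum()
    print("iota =", abs(p - q))                        # 2: positive definite, finite type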
The corresponding Weyl groups and systems of simple roots will be called of finite, affine, hyperbolic and Lorentzian type, respectively. Note that if (V, Π) is a Lorentzian system of simple roots over R, the Weyl group W(Π) can be viewed as a subgroup of O(R^{n-1,1}) = O(R^{1,n-1}).

A useful normalization

If C is an irreducible symmetrizable Cartan matrix, then one can write D · C = B for some non-singular diagonal matrix D, where B is symmetric. This decomposition is not unique, as Lemma 5.11 shows. On the other hand, it follows from Lemma 5.11 that there is a unique matrix D = diag(ε_1, ..., ε_n) satisfying

1. D is non-singular,
2. ε_i ∈ Z_{>0},
3. gcd(ε_1, ..., ε_n) = 1,
4. D · C is a symmetric matrix.

Note that D · C ∈ M_n(Z). From now on, given an irreducible symmetrizable Cartan matrix C, we will always take the representation D · C = B, where D is normalized as above. (Note that if C is symmetric, then D is the identity matrix.)

Canonical Lorentzian extensions

As explained on page 71 of [13], starting with an irreducible system of simple roots of finite type, one can extend it to a Lorentzian system of simple roots. Kac calls the resulting systems of simple roots "canonical hyperbolic extensions". Unfortunately, they are not always hyperbolic systems of simple roots. We shall rather refer to them as "canonical Lorentzian extensions" or "canonical double extensions", to avoid any ambiguity. The purpose of this section is to recall how this is done.

One starts with a Cartan matrix C of finite type T_n (where T_n = A_n (n ≥ 1), B_n (n ≥ 2), C_n (n ≥ 3), D_n (n ≥ 4), E_6, E_7, E_8, F_4 or G_2). The Cartan matrices of finite type are known to be symmetrizable; hence, we let B = D · C ∈ M_n(Z), where D is the unique diagonal matrix normalized as explained in §5.1. We let V = F^n and we define an orthogonal geometry over F via κ(α_i, α_j) = b_ij, where Φ = (α_1, ..., α_n) is the standard basis for V. Then (V, Φ) is a system of simple roots with Cartan matrix C. Letting θ be the highest root, we let m = κ(θ, θ), and we set W = V^{1/m} ⊥ P, where P is a hyperbolic plane. We also fix a hyperbolic pair (f_1, f_2) for P. For the convenience of the reader, we list the highest root for each type as well as the integer m = κ(θ, θ); in particular:

E_7: θ = 2α_1 + 3α_2 + 4α_3 + 3α_4 + 2α_5 + α_6 + 2α_7, with m = 2
E_8: θ = 2α_1 + 3α_2 + 4α_3 + 5α_4 + 6α_5 + 4α_6 + 2α_7 + 3α_8, with m = 2

Setting α_0 and α_{-1} in terms of θ and the hyperbolic pair (f_1, f_2), and Π = (α_{-1}, α_0, α_1, ..., α_n), one checks that (W, Π) is a Lorentzian system of simple roots for a certain Cartan matrix which we will denote by C^{++}. The corresponding system of simple roots will be said to be of type T_n^{++}. It turns out that the Dynkin diagram corresponding to C^{++} can be obtained from the Dynkin diagram corresponding to the affine type root system T_n^{(1)} after adding one vertex labelled α_{-1} connected to α_0 by a single edge. Again, for the convenience of the reader, we list the Dynkin diagrams corresponding to T_n^{++}. As an example, one recovers from the table the Cartan matrix corresponding to the canonical Lorentzian extension B_3^{++}. Among the canonical Lorentzian extensions T_n^{++}, only certain ones are hyperbolic.

Change of fields

As before, F is a field of characteristic zero and E/F is a field extension. Throughout this section, V will be a non-singular geometry. Our goal in this section is to show that the Weyl group is preserved under base change, and this is the content of Proposition 5.17 below.
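The normalization of §5.1 can be computed mechanically: for an irreducible matrix, the relation ε_i c_ij = ε_j c_ji propagates the diagonal entries along the Dynkin graph, after which denominators and common factors are cleared. The following Python sketch is our own illustration of this procedure; the names and representation are assumptions, not the paper's code.

from fractions import Fraction
from math import gcd

def normalized_symmetrizer(C):
    """For an irreducible symmetrizable Cartan matrix C, find the unique
    D = diag(eps_1, ..., eps_n) with positive integer entries and gcd 1
    such that D * C is symmetric (i.e. eps_i * c_ij = eps_j * c_ji)."""
    n = len(C)
    eps = [None] * n
    eps[0] = Fraction(1)
    stack = [0]
    while stack:                      # propagate along the Dynkin graph
        i = stack.pop()
        for j in range(n):
            if i != j and C[i][j] != 0 and eps[j] is None:
                eps[j] = eps[i] * Fraction(C[i][j], C[j][i])
                stack.append(j)
    # clear denominators, then divide by the gcd of the entries
    lcm = 1
    for e in eps:
        lcm = lcm * e.denominator // gcd(lcm, e.denominator)
    ints = [int(e * lcm) for e in eps]
    g = 0
    for v in ints:
        g = gcd(g, v)
    return [v // g for v in ints]

# type B_2: C = [[2, -2], [-1, 2]] gives D = diag(1, 2)
print(normalized_symmetrizer([[2, -2], [-1, 2]]))  # [1, 2]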
If V is an F-vector space, then we set V_E = E ⊗_F V. Suppose that (V, Π) is a system of simple roots, where Π = {α_1, ..., α_n}. Define α_{i,E} ∈ V_E by α_{i,E} = 1 ⊗ α_i and let Π_E = {α_{1,E}, ..., α_{n,E}}. It is then simple to check that (V_E, Π_E) is also a system of simple roots with associated Cartan matrix C. We have an injective group morphism ι_E : O(V) → O(V_E), given by g ↦ id ⊗ g. If we let r_{i,E} denote the simple reflection associated to α_{i,E} and r_i the one associated to α_i, then it is simple to check that ι_E(r_i) = r_{i,E}. Hence, we obtain the following result.

6. Weyl groups of the hyperbolic canonical Lorentzian extensions T_n^{++} with symmetric Cartan matrices

In this section, we let F = Q, V = Q^n and we let (α_1, ..., α_n) be the standard basis for V. Starting with a Cartan matrix C of finite type, we endow V with an orthogonal geometry as follows. We consider B = D · C, where D is the unique diagonal matrix normalized as in §5.1. The orthogonal geometry on V is given by κ(α_i, α_j) = b_ij. We let Λ = Zα_1 ⊕ ... ⊕ Zα_n be the corresponding root lattice. Note that because of our choice of D, we have κ(Λ, Λ) ⊆ Z. If θ is the highest root, then we let m = κ(θ, θ) and we consider W = V^{1/m} ⊥ P, where P is a hyperbolic plane for which we fix a hyperbolic pair (f_1, f_2). We then have the universal Clifford algebra C(W) and an isomorphism of Q-algebras Φ : C(W) ≅ M_2(C(V^{1/m})). As before, we let O denote the corresponding order in C(V^{1/m}). The Q-vector space H_2(V^{1/m}) has dimension n + 2 and its signature is (n + 1, 1). We recall that H_2(V^{1/m}) is an orthogonal geometry with quadratic form Q given by Q(X) = v v̄ − λ_1 λ_2. Moreover, the group V(V^{1/m}) acts on H_2(V^{1/m}). The simple roots α_i (i = −1, 0, 1, ..., n) of the canonical Lorentzian extension are mapped to elements X_i ∈ H_2(V^{1/m}). The root lattice of this system of simple roots will be denoted by Λ^{++}.

Lemma 6.1. With the notation as above, one has Λ^{++} = H_2(Λ).

Proof. The inclusion Λ^{++} ⊆ H_2(Λ) is clear; the other inclusion follows from a direct matrix identity.

Lemma 6.2. The notation being as above and assuming that m = 2, a Z-basis for O is given by the elements {α_I | I ∈ Ω}.

Proof. Note that the elements {α_I | I ∈ Ω} form a basis for the Q-vector space C(V^{1/m}). In fact, if m = 2 these elements form a Z-basis for O. Indeed, it suffices to show that they generate O as a Z-module, but this follows from the standard Clifford relation between α_i and α_j and noting that κ(Λ, Λ) ⊆ Z.

For the remainder of §6, we assume that m = 2, so that we are dealing with the simply laced canonical Lorentzian extensions (meaning that they have a symmetric Cartan matrix). We then set V(O) accordingly, and we define similarly the groups SV(O) and SV^+(O). Then A ∈ V(O) if and only if the corresponding conditions are satisfied. For the relevant subgroup Γ, we have η(Γ) = W(C^{++}) because of (19). Therefore, we have the chain of subgroups (20).

Theorem 6.5. With the notation as above, one has that η induces an isomorphism of groups for certain of the hyperbolic canonical Lorentzian extensions.

Proof. Corollary 5.10 of [13] shows that for each hyperbolic canonical Lorentzian extension with symmetric Cartan matrix, one has a decomposition of the full orthogonal group of the root lattice in terms of ±W(C^{++}) and Aut(C^{++}), where Aut(C^{++}) is the group of outer automorphisms of the corresponding Dynkin diagram. It is not hard to see that −id ∉ O^+(H_2(Λ)). For each hyperbolic canonical Lorentzian extension with a symmetric Cartan matrix, we computed the spinor norm of all the automorphisms ±a, where a is an outer automorphism. It turns out that these spinor norms are non-trivial exactly in the cases listed in the theorem. Thus, in those cases, the chain of subgroups (20) induces the corresponding equalities, and our result follows from the last of them.
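The Weyl groups appearing above are generated by the simple reflections, and for finite-type Cartan matrices the group can be enumerated directly from the formula r_i(α_j) = α_j − c_ij α_i in the basis of simple roots. The Python sketch below is a naive breadth-first closure, added for illustration only; for Lorentzian types the group is infinite and the enumeration is capped.

import numpy as np

def weyl_group_order(C, cap=10**6):
    """Enumerate the Weyl group of a finite-type Cartan matrix C as
    integer matrices in the basis of simple roots, where the simple
    reflection acts by r_i(alpha_j) = alpha_j - c_ij * alpha_i."""
    C = np.asarray(C, dtype=int)
    n = C.shape[0]
    gens = []
    for i in range(n):
        R = np.eye(n, dtype=int)
        R[i, :] -= C[i, :]          # row i becomes delta_ij - c_ij
        gens.append(R)
    identity = np.eye(n, dtype=int)
    seen = {identity.tobytes(): identity}
    frontier = [identity]
    while frontier:
        new = []
        for g in frontier:
            for R in gens:
                h = R @ g
                key = h.tobytes()
                if key not in seen:
                    seen[key] = h
                    new.append(h)
        if len(seen) > cap:
            raise RuntimeError("group appears infinite")
        frontier = new
    return len(seen)

print(weyl_group_order([[2, -1], [-1, 2]]))  # A_2: 6
print(weyl_group_order([[2, -2], [-1, 2]]))  # B_2: 8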
We did not find a conceptual way of computing the spinor norm of the outer automorphisms, so we used the software PARI ([17]) to calculate those. We explain these calculations in §6.1 below.

Spinor norm of outer automorphisms

Given an automorphism σ of a Dynkin diagram, it induces an automorphism of the root lattice via the formula σ(α_i) = α_{σ(i)}. For example, the permutation (1 4)(2 3) of the Dynkin diagram corresponding to A_4^{++} is such an outer automorphism. For each hyperbolic canonical Lorentzian extension with a symmetric Cartan matrix, we calculated the spinor norm of the automorphisms ±a, where a runs through the outer automorphisms, using PARI. Each time, we found an orthogonal basis for the orthogonal geometry V^{1/2}, and we used Theorem 2.4 to write the outer automorphism as a product of reflections. One can then calculate the spinor norm of the outer automorphisms. We summarize our calculations below.

Connections with previous descriptions of the Weyl group

In this section, we would like to explain the connection between our approach and some previous results contained in [12].

Paravectors

Throughout §7.1, we let U be a non-singular orthogonal geometry and C a universal Clifford algebra for U. We denote the corresponding quadratic form on U by q.

Definition 7.1. The paravectors in C are defined to be the F-vector space U_para = F ⊕ U ⊆ C.

Note that if x ∈ U_para, then x* = x and x′ = x̄. We define a symmetric F-bilinear form on U_para via the formula S_para(x, y) = (1/2)(x ȳ + y x̄), whenever x, y ∈ U_para. The corresponding quadratic form will be denoted by q_para; hence, q_para(x) = x x̄ for all x ∈ U_para, and one can check that q_para(x̄) = x̄ x = x x̄ = q_para(x) whenever x ∈ U_para. It is also simple to check the equality S_para(x̄, ȳ) = S_para(x, y), whenever x, y ∈ U_para. If x = λ_1 + u_1 and y = λ_2 + u_2 for some λ_i ∈ F and u_i ∈ U, then S_para(x, y) = λ_1 λ_2 + S(u_1, u_2).

Definition 7.2. Similarly as we did for U, we define a Clifford group Γ(U_para). Here is the relationship between Γ(U) and Γ(U_para).

Proof. Let x ∈ Γ(U) and let y = λ + u ∈ U_para. Then we have x y (x′)^{-1} = x λ (x′)^{-1} + x u (x′)^{-1}. Since x ∈ Γ(U), we have x u (x′)^{-1} ∈ U, and we just have to show that x λ (x′)^{-1} ∈ F; but this follows directly from Corollary 3.4.

From now on, we let ρ_para denote the group morphism Γ(U_para) → GL(U_para) induced by the Atiyah-Bott-Shapiro action of Γ(U_para) on U_para. In fact, ρ_para(Γ(U_para)) ⊆ O(U_para): if x ∈ Γ(U_para) and y ∈ U_para, then one checks directly that q_para is preserved. It follows that we have a group morphism ρ_para : Γ(U_para) → O(U_para). If x ∈ U_para is a non-isotropic vector, then x ∈ C^× and its inverse is obviously given by x^{-1} = x̄ / q_para(x).

Proposition 7.4. For x ∈ U_para non-isotropic and y ∈ U_para, we have ρ_para(x)(y) = −r_x(ȳ).

In particular, it follows from this last proposition that the non-isotropic vectors x ∈ U_para are in Γ(U_para). Also, if x ∈ U_para is non-isotropic, then ρ_para(x) ∈ SO(U_para). We also have the following useful result:

Corollary 7.5. With the notation as above, if x_1, x_2 ∈ U_para are non-isotropic, then ρ_para(x_1 · x̄_2) = r_{x_1} ∘ r_{x_2}.

It follows from this last corollary and Theorem 2.4 that the group morphism ρ_para is surjective. Now, the exact same argument as in the proof of Proposition 3.3 shows that ker(ρ_para) = F^×. Thus, we have a short exact sequence.

Corollary 7.6. The group Γ(U_para) is generated by the non-isotropic vectors x ∈ U_para.

We get a group morphism N_para : Γ(U_para) → F^× defined by x ↦ N_para(x) = x x̄.
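The PARI computation described in §6.1 amounts to factoring an isometry into reflections and multiplying the square classes of the reflecting vectors. The following Python sketch is our own reconstruction of that recipe, not the authors' script; it assumes the generic situation where every difference vector w − T(w) encountered is non-isotropic, and it returns the squarefree integer representing the class in Q^×/(Q^×)^2.

from fractions import Fraction

def bilin(u, v, B):
    return sum(Fraction(B[i][j]) * u[i] * v[j]
               for i in range(len(u)) for j in range(len(u)))

def reflect(v, w, B):
    # r_v(w) = w - (2 B(v, w) / B(v, v)) v
    c = 2 * bilin(v, w, B) / bilin(v, v, B)
    return [wi - c * vi for vi, wi in zip(v, w)]

def apply(T, v):
    return [sum(Fraction(T[i][j]) * v[j] for j in range(len(v)))
            for i in range(len(v))]

def squarefree(x):
    n = x.numerator * x.denominator      # same square class as x
    sign, n = (-1 if n < 0 else 1), abs(n)
    out, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n, e = n // d, e + 1
        if e % 2:
            out *= d
        d += 1
    return sign * out * n

def spinor_norm(T, B):
    """Spinor norm of an isometry T of the bilinear form B, obtained by
    peeling off one reflection per basis vector (generic case only)."""
    n = len(B)
    norm = Fraction(1)
    T = [list(row) for row in T]
    for i in range(n):
        e = [Fraction(int(j == i)) for j in range(n)]
        diff = [wi - ei for wi, ei in zip(apply(T, e), e)]
        if any(diff):
            v = diff                      # r_v maps T(e_i) back to e_i
            norm *= bilin(v, v, B)
            cols = [apply(T, [Fraction(int(k == j)) for k in range(n)])
                    for j in range(n)]
            cols = [reflect(v, c, B) for c in cols]   # replace T by r_v o T
            T = [[cols[j][i] for j in range(n)] for i in range(n)]
    return squarefree(norm)

# reflection in the first basis vector of the form diag(2, 1):
# the spinor norm is the square class of q(e_1) = 2
print(spinor_norm([[-1, 0], [0, 1]], [[2, 0], [0, 1]]))  # 2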
We are now in the setting of §3.2.2, and we define Spin^+(U_para) = Spin^+(ρ_para, N_para). As before, we have the corresponding exact sequence. From now on, we let SO^+(U_para) = ρ_para(Spin^+(U_para)).

Following [18], we will now explain a connection between Γ(U_para) and Γ_0(V) for a suitable orthogonal geometry V. Let L be a non-singular line with basis, say, e. Assume that the orthogonal geometry on L is given by S_L(λ_1 e, λ_2 e) = λ_1 λ_2, so that the associated quadratic form is given by q_L(λe) = λ^2. Then, consider V = U ⊥ L. We clearly have an F-linear morphism ξ : U → C_0(V), given by v ↦ ev. Moreover, (ev)^2 = −q(v) and therefore we get a morphism of F-algebras ξ : C(U) → C_0(V), which we denote by the same symbol. It is simple to check that if x ∈ C(U) is written as x = x_0 + x_1 for some unique x_i ∈ C_i(U), then ξ(x) = x_0 + e x_1. It follows that ξ is an isomorphism of F-algebras. Moreover, one also obtains from (23) a corresponding equality for x′, valid for all x ∈ C(U). We have an obvious isometry σ : U_para → V defined by λ + u ↦ λe + u. A simple computation shows that e · ξ(y′) = σ(y) for y ∈ U_para.

Proof. This follows from a computation using (24) and (25) above.

It follows from Lemma 7.7 that ξ(Γ(U_para)) ⊆ Γ_0(V). If we let Σ : SO(U_para) ≅ SO(V) be the isomorphism defined by g ↦ σ · g · σ^{-1}, then it also follows from Lemma 7.7 that we have a commutative diagram whose rows are exact and whose vertical maps are all isomorphisms of groups.

Vahlen groups for paravectors

Let U be a non-singular orthogonal geometry and let C(U) be a universal Clifford algebra for U. As before, we let W = U ⊥ P, where P is a hyperbolic plane. We have the isomorphism of F-algebras φ : C(W) ≅ M_2(C(U)), which leads us to the corresponding definition. Then V(U_para) acts on H_2(U_para) via A · X = AXA^♯. We get a representation which fits into a commutative diagram whose vertical arrows are isomorphisms of groups. For the exact same reason as in the non-paravector situation, we have the following more concrete characterization of Vahlen groups: A ∈ V(U_para) if and only if the corresponding conditions are satisfied.

Combining (26) and (27), we obtain a commutative diagram whose vertical arrows are all isomorphisms of groups, and where V = U ⊥ L and L = Fe is a line whose quadratic form is given by q(e) = 1. The isometry σ : U_para → V induces another obvious isometry H_2(U_para) → H_2(V), which in turn induces the group isomorphism Σ of this last commutative diagram. The group isomorphism Ξ comes from the group isomorphism ξ of (23) and is given by an explicit formula involving the elements g_1 = (1 + e)/2 and g_2 = (1 − e)/2. Note that g_i ∈ C(V), g_1′ = g_2 and g_2′ = g_1. One difference between our approach and the one adopted in [12] is that we work in a non-paravector situation, whereas they work in a paravector situation. We shall explain this further for A_1^{++} and A_2^{++} below.

In this section, we let F = Q and we consider V = Q as an orthogonal geometry, as we did in §6 for the Cartan matrix of finite type C = (2) corresponding to A_1. Note that V^{1/2} is isometric to the line L = Qe, whose quadratic form is given by q(e) = 1. Letting U = {0} with the trivial quadratic form, we have V^{1/2} = U ⊥ L. Note that C(U) is isomorphic to Q as a Q-algebra and U_para = C(U) = Q. It is simple to check that

1. V(U_para) = GL(2, Q),
2. SV^+(U_para) = SL(2, Q).
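The identity (ev)^2 = −q(v) behind the embedding ξ : C(U) → C_0(V) can be checked numerically in a toy model of the Clifford algebra. The Python sketch below represents basis blades as bitmasks and multiplies them with the sign convention x^2 = −q(x) for vectors, which is the convention consistent with the formulas of §7.1; the diagonal form and the test vector are hypothetical values of our own choosing.

def blade_mul(A, B, q):
    """Multiply basis blades e_A * e_B (bitmasks) in the Clifford algebra
    of the diagonal form q, with the convention e_i^2 = -q[i].
    Returns (coefficient, resulting blade)."""
    coeff = 1
    for i in range(len(q)):
        if not (B >> i) & 1:
            continue
        # move e_i left past the generators of A with index > i
        if bin(A >> (i + 1)).count("1") % 2:
            coeff = -coeff
        if (A >> i) & 1:
            coeff *= -q[i]            # e_i^2 = -q(e_i)
            A &= ~(1 << i)
        else:
            A |= 1 << i
    return coeff, A

def mv_mul(x, y, q):
    """Multiply multivectors given as {blade_bitmask: coefficient}."""
    out = {}
    for A, a in x.items():
        for B, b in y.items():
            c, M = blade_mul(A, B, q)
            out[M] = out.get(M, 0) + a * b * c
    return {k: v for k, v in out.items() if v}

# V = L + U with q(e) = 1 on the line L (generator 0); q on U is made up
q = [1, 3, 5]
v = {0b010: 2, 0b100: 1}            # v = 2*u1 + u2 in U, q(v) = 12 + 5 = 17
ev = mv_mul({0b001: 1}, v, q)       # the element e*v of C_0(V)
print(mv_mul(ev, ev, q))            # {0: -17}, i.e. (ev)^2 = -q(v)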
The Q-algebra C(V^{1/2}) is isomorphic to Q(i), and the embedding V^{1/2} = L ↪ C(V^{1/2}) = Q(i) is given by xe ↦ xi. The group isomorphism Ξ : GL(2, Q) → V_0(V^{1/2}) of the last section is then given explicitly. Note also that the order O in C(V^{1/2}) = Q(i) is nothing else than Z[i]. Therefore, we clearly have Ξ(SL(2, Z)) = SV^+(O). In summary, there are two possibilities in order to study the even part of the Weyl group of the hyperbolic Kac-Moody algebra A_1^{++}: one can work with the short exact sequence as we did, or one can work with the other short exact sequence. In [12], they use the second approach. Moreover, they are not working with H_2(Q), but rather with the two-by-two symmetric matrices with rational coefficients. We will explain this other option in §7.3.

For A_2^{++}, the even and odd parts of the Clifford algebra, a quaternion algebra (−1, −3 / Q), are given by

(−1, −3 / Q)_0 = { w + zk : w, z ∈ Q }  and  (−1, −3 / Q)_1 = { xi + yj : x, y ∈ Q }.

The Clifford algebra C(U) embeds in the Clifford algebra C(V^{1/2}) via √−3 ↦ j and, therefore, we will identify √−3 with j. Moreover, the group isomorphism Ξ : V(U_para) → V_0(V^{1/2}) is explicitly given by

Ξ( [ x_1 + y_1 j , x_2 + y_2 j ; x_3 + y_3 j , x_4 + y_4 j ] ) = [ x_1 + y_1 k , x_2 i + y_2 j ; −x_3 i + y_3 j , x_4 − y_4 k ].

One can also check that the order O = Z[α_1, α_2] ⊆ C(V^{1/2}) admits an analogous description. So again, there are two possibilities in order to study the even part of the Weyl group of the hyperbolic Kac-Moody algebra A_2^{++}: one can work with the short exact sequence as we did, or one can work with the other short exact sequence; in the latter, j is the unique non-trivial Galois automorphism of Q(√−3)/Q. In [12], they use the second approach. Moreover, they are not working with H_2(Q(√−3)), but rather with the two-by-two Hermitian matrices with coefficients in Q(√−3). We explain this below in §7.3.

Hermitian matrices

Let U be a non-singular orthogonal geometry. It has been more customary to work with Hermitian matrices rather than H_2(U_para). We now explain the connection between the two. We do this in the context of paravectors, but a similar phenomenon happens in the non-paravector situation as well. We have an obvious isomorphism of F-vector spaces ψ : H_2(U_para) → H̃_2(U_para). Hence, the F-vector space H̃_2(U_para) inherits an orthogonal geometry from H_2(U_para), whose corresponding quadratic form is given by Q̃(X) = x x̄ − λ_1 λ_2. The corresponding symmetric F-bilinear form will be denoted by S̃. From now on, we fix the matrix E_2; note that if X ∈ H_2(U_para), then ψ(X) = XE_2. It is simple to check that E_2 ∈ V(U_para). Given A ∈ V(U_para), one defines A^† accordingly, where λ = ad* − bc*. Note that if A ∈ V(U_para), then A^† = E_2 A^♯ E_2. Since V(U_para) acts on H_2(U_para), it also acts on H̃_2(U_para) via ψ. It is a simple matter to check that the action is given by A · X = AXA^† whenever A ∈ V(U_para) and X ∈ H̃_2(U_para). Hence, we get a representation η̃ : V(U_para) → SO(H̃_2(U_para)). Moreover, the isometry ψ induces an isomorphism of groups Ψ : SO(H_2(U_para)) → SO(H̃_2(U_para)) given by σ ↦ Ψ(σ) = ψ ∘ σ ∘ ψ^{-1}. We then obtain a commutative diagram whose rows are exact. The reader will have no difficulty in checking that Proposition 7.11 is valid here as well. We also have the group morphism N : V(U_para) → F^× defined as before by A ↦ A · γ(A). Then, we obtain the corresponding important exact sequence. Note that ϑ satisfies ϑ(r_{X_1} ∘ r_{X_2}) = Q̃(X_1)Q̃(X_2) · F^{×2} whenever X_1, X_2 ∈ H̃_2(U_para) are non-isotropic vectors.
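For A_1^{++}, the Hermitian-matrix picture described above is particularly concrete: the space in question consists of 2 × 2 rational symmetric matrices, the quadratic form is (up to sign) the determinant, and SL(2, Z) acts by X ↦ AXA^T. The following Python sketch illustrates the invariance numerically; the sample matrices are arbitrary choices of ours.

import numpy as np

def act(A, X):
    """Action of A in SL(2, Z) on a symmetric 2x2 matrix X: X -> A X A^T.
    Since det(A) = 1, the quadratic form det(X) = l1*l2 - x^2 (a form of
    Lorentzian signature on the 3-dimensional space of such X) is preserved."""
    return A @ X @ A.T

A = np.array([[1, 1], [0, 1]])      # a standard generator of SL(2, Z)
X = np.array([[3, 1], [1, 2]])      # a sample point, det(X) = 5
Y = act(A, X)
print(np.linalg.det(X), np.linalg.det(Y))  # both 5.0: det is invariant
print((Y == Y.T).all())                    # the image is again symmetric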
One should compare the above with §3.1 and §3.2 of [12]. There, a different formula from the one of Proposition 7.13 is presented. One can check that the matrices coming from Corollary 7.14 are the same, up to a sign, as the ones of Theorem 2 of [12]. Our treatment here is simplified a little, since we are using the anti-involution γ instead of X ↦ X̄, which is neither an involution nor an anti-involution in general, unless we are in a commutative setting. In summary, if V is one of the orthogonal geometries of §6, and one writes V^{1/2} = L ⊥ U for some orthogonal geometry U, where L = Qe is a line with quadratic form q(e) = 1, then the authors of [12] are working with the short exact sequence 1 → Q^× → V(U_para) →(η̃) SO(H̃_2(U_para)) → 1, whereas we are rather working with the other short exact sequence. Our approach has the advantage that we can study directly the Weyl group instead of only the even part of the Weyl group. Moreover, the formulas giving the reflections are simpler in our situation. To be precise, the authors of [12] also work over R instead of Q, but it seems to clarify the picture quite a bit if one works over Q instead of over R. As a last remark, we also point out that we are not dealing with an explicit set of known generators for the groups V(V^{1/2}), as was done in [12] for SL(2, Z) and SL(2, O_{-3}) for instance. We are replacing this argument with the use of a corollary of Kac, as we explained in the proof of Theorem 6.5.
2016-04-07T22:52:55.727Z
2016-01-16T00:00:00.000
{ "year": 2017, "sha1": "dbfb5ccaa36eded80c95cb0e19fa6599ec11037d", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.jalgebra.2017.05.003", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "d197f548f9b14c5c95899e61d7bc7435be775e04", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
96085395
pes2o/s2orc
v3-fos-license
Identification of molecular flipping of an asymmetric tris(phthalocyaninato) lutetium triple-decker complex by scanning tunneling microscopy/spectroscopy

The assembling behavior and electronic properties of asymmetric tris(phthalocyaninato) lutetium triple-decker sandwich complex molecules (Lu2Pc3) on highly oriented pyrolytic graphite (HOPG) surfaces have been studied by scanning tunneling microscopy/spectroscopy (STM/STS) methods. Phase transitions were observed at different bias polarities, involving an ordered packing arrangement with fourfold symmetry at negative bias and an amorphous arrangement at positive bias. Molecular switching behaviour for individual Lu2Pc3 molecules is reported here according to the bias-polarity-induced flipping phenomena and the peak shift in dI/dV versus V curves at different voltage scanning directions. The sensitive response of the strong intrinsic molecular dipole to an external electric field is proposed to be responsible for molecular switching of Lu2Pc3 at the solid/liquid interface.

Introduction

Controlled construction and tuning of molecular nanostructures is a prerequisite for the fabrication of molecular devices, and precise control of the molecular orientation and ordering is necessary in order to tailor the physical and chemical properties of molecular architectures. External stimuli such as electric fields and thermal annealing can be employed to tune molecular assembly [1-7]. Electric polarity-driven molecular switching behavior has been the focus of considerable attention [8-10], and in this case the intrinsic dipole of the molecules is suggested to be a crucial issue for molecular switching. It was reported that non-planar tin naphthalocyanine (SnNc) molecules have strong intrinsic molecular dipoles (1.48 D) which enable SnNc to exhibit voltage-induced flipping on a surface [9]. On the other hand, we have previously presented a strategy of attaching functional groups with dissimilar adsorption and assembling characteristics to the top and bottom phthalocyaninato moieties of a tris(phthalocyaninato) lutetium triple-decker complex (Lu2Pc3), and orientation-dependent ordering of such molecules at different bias polarities has been identified [10]. In general, molecular switching behaviors can be explained in terms of the arrangements before and after changing bias polarities. The process of molecular switching itself has rarely been examined explicitly up to now, and deserves further investigation. Scanning tunneling microscopy (STM) and spectroscopy (STS) methods enable complementary investigations of the molecular structures and electronic properties at the single molecule level by virtue of their unique ultrahigh spatial resolution and versatility. Many efforts have been made to study the electronic structure and properties of adsorbed organic molecules using STM/STS techniques [11-19]. Bias-dependent visualization of molecules containing both electron-donor and -acceptor moieties has been demonstrated [17-19], which reveals the sensitivity of STM to the electronic properties of the adsorbed molecules. In this work, the molecular switching behavior of the asymmetric triple-decker sandwich complex Lu2Pc3 at the liquid/solid interface has been studied by STM and STS methods, and changes of tunneling current during the switching process have been tracked by using dI/dV versus V curves.
STM observations revealed that the orientation and ordering of the adsorbed Lu2Pc3 molecules could be readily tuned by changing bias polarities. STS results showed a current jump which was furthermore confirmed to be related to an appreciable bias-induced adsorption reversal process for individual Lu2Pc3 molecules.

STM characterization

The sandwich complex Lu2Pc3 was synthesized by the reported method and characterized by 1H NMR spectroscopy and mass spectroscopy [20-23]. For surface assembly studies, the complex was dissolved in 1-phenyloctane with a concentration of about 1 mg/mL and the solution (1 μL) was drop-cast on a freshly cleaved highly oriented pyrolytic graphite (HOPG) surface. STM characterization of the assembled film was carried out with a Nanoscope IIIa instrument (Veeco Metrology, USA) at the 1-phenyloctane/graphite interface. The STM tips were mechanically formed Pt/Ir wires (80/20). The bias was applied to the substrate with the tip grounded during STM measurements.

STS characterization

During STS investigation, the tip was located on top of the center of a Lu2Pc3 molecule while feedback was turned off. Spectroscopy was performed by adding a dithering modulation (peak-to-peak 20-30 mV) to the bias voltage, and the bias was scanned through the designated voltage range. A lock-in amplifier was used to collect the dI/dV signal. To ensure reliability and reproducibility, the spectra were averaged over a large number of characteristic curves on different Lu2Pc3 molecules in different regions.

Computational details

We performed theoretical calculations using density functional theory (DFT) as provided by the DMol3 code [24]. The Perdew and Wang parameterization [25] of the local exchange-correlation energy was applied in the local spin density approximation (LSDA) to describe exchange and correlation. We expanded the all-electron spin-unrestricted Kohn-Sham wave functions in a local atomic orbital basis. A double-numerical basis set with polarization was employed. All calculations were of the all-electron type and were performed with the extra-fine mesh.

Results and discussion

The family of phthalocyanines (Pcs) represents one of the promising candidates for forming ordered thin films in organic electronics due to their significant chemical stability and electronic properties [26-28]. Because of the intriguing inter-ring interactions and the intrinsic nature of the metal centers, sandwich-type double- and triple-decker complexes of Pcs display characteristic features which are different from those of their non-sandwich counterparts, enabling their applications in different areas such as electrochromic displays, field-effect transistors and gas sensors [29-31]. The chemical structure of the triple-decker complex Lu2Pc3 is shown in Scheme 1(a). Lu2Pc3 is cylindrical in apparent shape, with its height reaching nearly 1 nm, comparable to the lateral size of the phthalocyanine ring (1.3 nm). In comparison with their mono-phthalocyanine counterparts, the assembly of sandwich complexes is more challenging due to their non-planar characteristics [32-36]. The sensitive response of the dipole to an electric field is expected to influence the adsorption of the triple-decker complexes. As described previously, we use a strategy of attaching substituents of different polarity to the top and bottom phthalocyaninato moieties to generate an intrinsic molecular dipole [10].
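The DFT calculation described above yields, among other outputs, the molecular dipole moment generated by this substitution strategy. As a rough numerical illustration of the quantity involved (and not of the DMol3 procedure itself), the Python sketch below evaluates p = Σ_i q_i r_i for an assumed point-charge model and converts the result to debye; the charges, separation and function names are hypothetical.

import numpy as np

E_ANGSTROM_TO_DEBYE = 4.8032   # 1 e*angstrom is about 4.8032 D

def dipole_moment(charges, positions):
    """Dipole vector p = sum_i q_i r_i (charges in e, positions in angstrom).
    For a neutral charge distribution, p is independent of the origin."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(positions, dtype=float)
    return E_ANGSTROM_TO_DEBYE * (q[:, None] * r).sum(axis=0)

# hypothetical charge separation along the molecular axis (z): a negatively
# charged upper deck and a positively charged lower deck, 3.6 angstrom apart
charges = [-1.0, +1.0]
positions = [[0.0, 0.0, 3.6], [0.0, 0.0, 0.0]]
p = dipole_moment(charges, positions)
print(p, np.linalg.norm(p))    # about 17.3 D along z

With one elementary charge separated by 3.6 Å along the axis, this toy model already gives a dipole of roughly 17 D, the order of magnitude discussed below.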
Due to the high electronegativity of the oxygen atoms, the attachment of 15-crown-5 moieties to the Pc ring is expected to cause charge separation in the molecule, where the upper 15-crown-5-substituted Pc (Pc[15C5]4) moiety (Scheme 1(b)) is negatively charged and the lower 2,3,9,10,16,17,23,24-octakis(octyloxy)phthalocyanine (PcOC8) one (Scheme 1(c)) is positively charged. Theoretical calculations using DFT as provided by the DMol3 code revealed a large intrinsic molecular dipole reaching 17.5 D along the axial direction of Lu2Pc3 [24,25]. It is expected that the positively polarized PcOC8 moiety will face toward the substrate when the sample bias is negative, and the negatively polarized Pc[15C5]4 will face the surface when positive bias is applied. Our STM observations, similar to those reported earlier [10], confirmed the above conjecture. They revealed that Lu2Pc3 molecules packed with different symmetries according to the applied electric field between the STM tip and the substrate. Large areas of ordered assembled structures with fourfold symmetry were observed for Lu2Pc3 molecules at the 1-phenyloctane/graphite interface when the applied sample bias was varied from -300 to -1500 mV (Fig. 1(a)). The intermolecular distance estimated from the STM image is about 2.6 nm, in good agreement with the earlier value for a mono-phthalocyanine CuPcOC8 assembly [37,38], indicating that Lu2Pc3 has adopted a face-on adsorption configuration with the PcOC8 moiety facing towards the graphite surface and the octyloxy groups fully interdigitated with those of the neighboring molecules. In contrast, a disordered assembly was observed at the interface when a positive bias was applied, as shown in Fig. 1(b). Due to the lack of long-range ordering, precise determination of the intermolecular distance was difficult in this assembly. In the disordered assembled structure, the molecule can be considered to be adsorbed with the Pc[15C5]4 moiety oriented towards the surface, with PcOC8 pointing away from the substrate. By changing bias polarity, the assembled structures of Lu2Pc3 could be reversibly switched between ordered and disordered. A typical switching process is shown in Fig. 2. Initially, an ordered monolayer of Lu2Pc3 molecules was revealed under negative bias (Fig. 2(a)). As the bias polarity was suddenly changed to positive, deconstruction of the ordered domain was immediately induced (Fig. 2(b)), and the assembled layer gradually became disordered (after around 1 min) (Fig. 2(c)). The locations of the Lu2Pc3 molecules in Fig. 2(c) are similar, but not identical, to those in Fig. 2(b). This suggests that although the molecules could not return to their original locations after switching, they tended to stay as near as possible to their previous locations. When the bias was changed from positive back to negative, an immediate phase transition occurred and the disordered assembly reverted to an ordered one (Fig. 2(d)). The most intriguing result was that some abrupt peaks appeared in the dI/dV spectrum when the STS method was employed to estimate the electronic properties of the adsorbed Lu2Pc3 molecules. Figure 3 shows typical dI/dV versus V curves obtained by locating the STM tip on top of individual Lu2Pc3 molecules. The spectrum of Lu2Pc3 shows a characteristic energy gap, revealing probable semiconductor behaviour.
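The apparent gap just mentioned, and the abrupt peaks analysed below, can in principle be extracted from measured dI/dV(V) curves by simple numerical post-processing. The following Python sketch is our own illustration on synthetic data; the threshold values, peak shapes and function names are assumptions, not the analysis procedure actually used in this work.

import numpy as np

def apparent_gap(V, dIdV, threshold=0.5):
    """Apparent energy gap: separation between the first bias values on
    either side of 0 V where dI/dV rises above a conductance threshold."""
    neg = V[(V < 0) & (dIdV > threshold)]
    pos = V[(V > 0) & (dIdV > threshold)]
    if neg.size == 0 or pos.size == 0:
        return None
    return pos.min() - neg.max()          # in V, i.e. the gap in eV

def switching_peaks(V, dIdV, jump=1.0):
    """Flag abrupt jumps: points standing more than `jump` above the mean
    of their +-5-point neighbours (a crude spike detector)."""
    hits = []
    for i in range(5, len(V) - 5):
        if dIdV[i] - 0.5 * (dIdV[i - 5] + dIdV[i + 5]) > jump:
            hits.append(V[i])
    return np.array(hits)

# synthetic curve: band edges at +-1.2 V (a 2.4 eV apparent gap) plus one
# abrupt switching peak near -1.1 V, mimicking the curves described here
V = np.linspace(-2.5, 2.5, 1001)
edges = 1.0 / (1.0 + np.exp(-(np.abs(V) - 1.2) / 0.05))
peak = 3.0 * np.exp(-((V + 1.1) ** 2) / (2 * 0.02 ** 2))
print(apparent_gap(V, edges))             # about 2.4
print(switching_peaks(V, edges + peak))   # bias values near -1.1 V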
It should be noted that the experimentally determined apparent energy gap (about 2.4 eV) obtained from STS results is somewhat wider than those of single-decker metallophthalocyanines (MPc), such as copper phthalocyanine [39] and titanyl phthalocyanine [40], whose apparent gaps are about 2 eV. The value of the apparent gap for Lu2Pc3 is also different from the results obtained from electronic absorption spectroscopy and cyclic voltammetry, which give a smaller gap for triple-decker complexes, ascribed to intrinsic face-to-face stacking and strong interactions between the phthalocyanine rings [41-44].

Figure 2 Sequential STM images obtained for the same area illustrating the bias-polarity-induced phase transition. At the beginning (a), the bias is set to -970 mV (the tunneling current is maintained at 363 pA throughout the process). At the place marked by the blue arrows in image (b), the bias was suddenly changed to +970 mV, which immediately induced deconstruction of the ordered domain; the assembled layer finally becomes disordered, lacking long-range order, after 1 min (c). In image (d) the bias has been changed back to -970 mV at the place marked by the blue arrows. This change of sample bias immediately induced a phase transition from disordered to ordered in the assembled layer. During the scanning under this negative bias, a consecutive growth of the fourfold ordered domains could be observed ((d) to (a)).

Here, we would like to focus on the abrupt jumps in the dI/dV spectra during the bias scanning through the designated voltage range, which have never been reported in STS observations of single-decker MPc molecules [11,12,39,40]. Abrupt peaks appeared around -1.1 V when the bias was scanned from 2.5 V to -2.5 V, while the peak shifted to around 1.3 V when the bias was scanned from negative to positive; the results exhibited good reproducibility, which excludes random fluctuations or noise as possible explanations. The abrupt peaks in the dI/dV spectra of the Lu2Pc3 triple-decker complex could be associated with the flipping process of Lu2Pc3 on HOPG induced by the change of bias polarity. For molecular flipping, a desorption process from the surface is necessary, which would reduce the adsorbate-substrate interactions and might affect the electron tunneling behaviour; we suggest this as a possible reason for the abrupt jumps in tunneling current observed in the STS spectra. When the bias was changed from positive to negative, Pc[15C5]4 moieties could be destabilized from the surface, driven by the external electric field. The solid/liquid interface provides free space for Lu2Pc3 molecules to flip. Due to the high affinity of the octyloxy groups for the graphite surface and the strong interactions induced by octyloxy interdigitation, the other assembled structure is expected to be rather more stable and to require more energy (or a higher bias voltage) to destabilize the molecules when the bias is scanned from negative to positive. The above STS results, combined with STM observations, further confirm that the asymmetric triple-decker complex Lu2Pc3 can act as a molecular switch when stimulated by an electric field.

Conclusions

In summary, attaching substituents of different polarities can introduce the capability of controlling the adsorption configuration of organic molecules at interfaces by an external electric field.
The as-prepared asymmetric triple-decker complex Lu2Pc3 had a strong intrinsic molecular dipole and packed in different ways according to the applied electric field between the STM tip and substrate. The STM observations, combined with STS results, revealed that Lu2Pc3 molecules can act as a molecular switch by exhibiting a voltage-induced flip at the solid/liquid interface. Our results provide a possible way to precisely control the ordering and orientation of organic species by means of external stimuli.
2019-04-05T03:30:55.904Z
2009-03-08T00:00:00.000
{ "year": 2009, "sha1": "914578eb88b0551431fdb526f5cb676b9cd1a417", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12274-009-9021-z.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "1c03560a63010f38a79b7b797f121dc58ea68799", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
16696661
pes2o/s2orc
v3-fos-license
Continental estimates of forest cover and forest cover changes in the dry ecosystems of Africa between 1990 and 2000

Aim: This study provides regional estimates of forest cover in dry African ecoregions and the changes in forest cover that occurred there between 1990 and 2000, using a systematic sample of medium-resolution satellite imagery which was processed consistently across the continent.

Location: The study area corresponds to the dry forests and woodlands of Africa between the humid forests and the semi-arid regions. This area covers the Sudanian and Zambezian ecoregions.

Methods: A systematic sample of 1600 Landsat satellite imagery subsets, each 20 km × 20 km in size, was analysed for two reference years: 1990 and 2000. At each sample site and for both years, dense tree cover, open tree cover, other wooded land and other vegetation cover were identified from the analysis of satellite imagery, which comprised multidate segmentation and automatic classification steps followed by visual control by national forestry experts.

Results: Land cover and land-cover changes were estimated at continental and ecoregion scales and compared with existing pan-continental, regional and local studies. The overall accuracy of our land-cover maps was estimated at 87%. Between 1990 and 2000, 3.3 million hectares (Mha) of dense tree cover, 5.8 Mha of open tree cover and 8.9 Mha of other wooded land were lost, with a further 3.9 Mha degraded from dense to open tree cover. These results are substantially lower than the 34 Mha of forest loss reported in the FAO's 2010 Global Forest Resources Assessment for the same period and area.

Main conclusions: Our method generates the first consistent and robust estimates of forest cover and change in dry Africa with known statistical precision at continental and ecoregion scales. These results reduce the uncertainty regarding vegetation cover and its dynamics in these previously poorly studied ecosystems and provide crucial information for both science and environmental policies.

INTRODUCTION

The 2010 Global Forest Resources Assessment (FRA) report published by the Food and Agriculture Organization of the United Nations (FAO, 2010a) estimates the global extent of forests and other wooded land to be 31% of the total land area. At tropical and subtropical latitudes, the percentage of open and closed forest is around 40%, of which 42% is covered by dry forest, 33% by moist forest and 25% by wet and rain forest (Murphy & Lugo, 1986). The world's largest proportion of dry forest ecosystems is in Africa, where they account for 70-80% of forested areas (Murphy & Lugo, 1986). In Africa, this ecosystem is home to more than half of the continent's population, predominantly rural people whose livelihoods depend largely on natural resources (Chidumayo & Gumbo, 2010). The main uses of African dry forests and woodlands are subsistence farming, livestock grazing, timber production and the extraction of fuel wood. The majority of these uses, if not managed sustainably or controlled through appropriate forestry policies and/or management practices, lead to forest degradation and/or destruction. These changes have impacts on the global environment, particularly carbon emissions to the atmosphere and biodiversity loss. These impacts result inevitably in the loss of environmental goods and services, reduced land productivity and threats to human livelihoods.
Brink & Eva (2009) noted that the greatest amount of deforestation in Africa is taking place in dry forests, accounting for about 70% of the forest loss between 1975 and 2000 (the humid tropical forests accounted for 16% of the total forest loss in sub-Saharan Africa). Mertz et al. (2007) identified African dry forests and woodlands as the most threatened and least protected ecosystem on the continent, largely as a result of population increase, climate change and poor environmental governance and policy frameworks (FAO, 2010b). Moreover, human responses to changing economic opportunities and/or policies (at local, national and global scales) have been highlighted as one of the most important determinants of forest cover change (Geist & Lambin, 2002). Despite their extensive coverage and importance, Africa's dry tropical forests have received little attention compared to its tropical rain forests, and changes in this ecosystem are still poorly documented at the global scale (Lambin et al., 2003). When studying tropical forests, dry forests are frequently either excluded from the study area (Achard et al., 2002; Hansen et al., 2008) or excluded from the reporting due to high uncertainty (DeFries et al., 2002; Ramankutty et al., 2007). Recently, Hansen et al. (2010) quantified the global gross loss of forest cover between 2000 and 2005 with a combination of coarse-resolution imagery and a stratified sampling of Landsat imagery. The authors estimated that, at the global scale, the dry tropical biome represented 20% of the total forest cover in 2000, with a gross loss of forest cover of 2.9% between 2000 and 2005. However, the status of Africa's dry forests has not been reported. In Africa, FAO national forestry statistics remain the only available source of information (Houghton, 2005). These statistics are collected from national authorities, where available, and inevitably suffer from a lack of consistency and completeness both in time and coverage. They are highly aggregated and of dubious accuracy for some regions (Matthews, 2001; Mayaux et al., 2005; Grainger, 2008). Biome-scale estimates are important to accurately characterize and monitor relative variation within, and potential displacement between, countries in a similar ecological zone (Hansen et al., 2008). This information responds to a pressing need for scientific research and support for policy formulation and implementation at national and international levels, in particular the United Nations Framework Convention on Climate Change (UNFCCC) process for the Reduction of Emissions from Deforestation and Forest Degradation (REDD+) and the protection of habitat for biodiversity conservation, as defined in the Convention on Biodiversity (CBD) Aichi Biodiversity Targets (especially Targets 5, 15 and 19). This study provides estimates of land cover and land-cover change that occurred at the landscape level from 1990 to 2000 in African dry forests and woodlands, with better global consistency and higher accuracy than previously available. Data and processing are based on a systematic grid of Landsat data as part of the global TREES-3 project implemented by the Joint Research Centre of the European Commission to monitor tree cover and its changes across the tropics. The results will be used for the forthcoming FRA 2010 Remote Sensing Survey. The intended outcome of this new survey is to improve the consistency and comparability of forest area and change statistics at regional, ecozone and global levels.
Once finalized, results including imagery and land-use classifications at each site will be made available on the FAO FRA web-based portal.

Study area

Our study focuses on the dry forests and woodlands of Africa, defined by Chidumayo & Gumbo (2010) as 'vegetation dominated by woody plants, primarily trees, the canopy of which covers more than 10 per cent of the ground surface, occurring in climates with a dry season of three months or more'. The bioclimatic regions analysed in this paper correspond to the warm subhumid dry forests of the Guinea-Congolia/Sudanian and Guinea-Congolia/Zambezian transition zones and the warm dry woodlands of the Sudanian and Zambezian ecoregions (White, 1983) (Fig. 1a). In order to consider similar climatic regions, we excluded the semi-arid dry woodlands and grasslands occurring in the Somali-Masai region and the Sahelian and Kalahari-Highveld transition zones. Also excluded are the moist evergreen forests of the Congo basin, the rain forests of West Africa, mangroves, montane and submontane forests and the coastal forest mosaic regions. The term 'dry forest' covers an extensive vegetation type; it includes all the deciduous or seasonal forests between the tropical forests and woodlands to the north and south of the equator. Woodlands range from open woodland to wooded savannas. In West Africa, the subhumid dry forests and dry woodlands correspond to the Guinean and Sudanian savannas. The Guinean subhumid dry forest is characterized by open cover of deciduous trees, but can sometimes form a dense forest. To the north, the Sudanian region is dominated by deciduous shrubland, grass and woody trees interspersed with cropland. In southern Africa, the Guinea-Congolia/Zambezian zone comprises forests or dense woodlands, most of which are highly fragmented by humans or fire. The deciduous miombo woodland (Brachystegia-Julbernardia) is the most extensive type of vegetation present in the Zambezian region (Chidumayo & Gumbo, 2010). It stretches from Angola almost to the eastern African coast in Mozambique and Tanzania and includes Zambia, southern Democratic Republic of Congo and part of Zimbabwe (White, 1983). In this study, we group the Guinea-Congolia/Sudanian and Sudanian ecoregions under the term Sudanian, and the Guinea-Congolia/Zambezian and Zambezian ecoregions under the term Zambezian.

Sampling strategy and satellite imagery

Remote-sensing imagery offers the advantages of repetitive data acquisition, a synoptic view of inaccessible areas and consistent image quality over time. Statistical sampling of medium-resolution imagery provides a cost-effective and accurate approach to derive area estimates of land cover and land-cover changes at pan-tropical (Richards et al., 2000; FAO, 2001; Achard et al., 2002; Hansen et al., 2008; Gibbs et al., 2010) and continental scales (Brink & Eva, 2009). The sampling design selected for our study consists of a rectilinear grid based on integer degrees of geographical latitude and longitude (Mayaux et al., 2005). Subsets of 20 km × 20 km Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) images were used to cover the sample sites for the reference years 1990 and 2000. A total of 796 sample sites were processed to estimate the areal extent of dry forests and woodlands and the change in each over time. This corresponds to a sampling rate of approximately 3%. Fig. 1b shows the spatial distribution of the selected sample sites. The Sudanian and Zambezian ecoregions are fully covered.
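The rectilinear sampling frame described above can be generated with a few lines of code. The Python sketch below is our own illustration (the bounding box is a hypothetical example); it lists candidate sites at integer degree intersections, each of which is then covered by a 20 km × 20 km Landsat subset.

def systematic_sample_sites(lat_min, lat_max, lon_min, lon_max):
    """Systematic sampling frame: one candidate site at every integer
    degree of latitude and longitude inside the bounding box; each site
    is later observed through a 20 km x 20 km Landsat subset."""
    return [(lat, lon)
            for lat in range(lat_min, lat_max + 1)
            for lon in range(lon_min, lon_max + 1)]

# hypothetical bounding box roughly enclosing the Sudanian ecoregion
sites = systematic_sample_sites(5, 15, -17, 40)
print(len(sites))   # number of candidate 1-degree grid intersections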
For the transition zones within the central Guineo-Congolian ecoregion, the global land-cover map of the African continent (Mayaux et al., 2004) for the year 2000 (GLC 2000) was used to mask out the sample sites covered by evergreen forests. The following land-cover classes of the GLC 2000 map were aggregated to represent humid forests: closed evergreen lowland forests; degraded evergreen forests; mangroves; and mosaic forests/croplands. This definition of the African humid forest domain corresponds to the study area of Duveiller et al. (2008) over the Congo River basin and the region analysed by Achard et al. (2002) of the world's humid tropical forests.

Landsat data were downloaded from the United States Geological Survey's National Centre for Earth Resources Observation and Science (http://glovis.usgs.gov/) at full spatial and spectral resolution (30 m). To find satisfactory imagery in terms of cloud cover and seasonal/radiometric characteristics, Landsat scenes were screened visually and the best available images were selected for each sample site. From the total of 796 sites, only 12 sites in Central Africa were excluded from the data set because of poor quality imagery either in 1990 or in 2000. Images taken at the end of the rainy season were given priority because then the discrimination between cropland and natural vegetation is greatest, the cloud coverage is reduced and the impact of fire is limited. The dates of image acquisition were chosen to be as close as possible to the reference years 1990 and 2000. The selected images passed through a preprocessing chain, including the assessment and correction of spatial registration, cloud masking, the conversion to top-of-atmosphere reflectance, haze correction and normalization. More details about these preprocessing steps and their impacts on the results can be found in Bodart et al. (2011).

Figure 1 White's (1983) ecoregions, the distribution of sample sites and the excluded African humid forest domain. (a) Grey-shaded regions correspond to our study area (Sudanian, Guinea-Congolia/Sudanian, Guinea-Congolia/Zambezian and Zambezian ecoregions). (b) A total of 796 sample sites cover the study area. The GLC 2000 evergreen forest classes in the background were used to mask out the sample sites covered by humid forests. Subsets of 20 km × 20 km Landsat images were extracted at each sample site for the reference years 1990 and 2000.

Data processing

The processing approach combines multidate segmentation, supervised classification, and visual checks and refinement of class labels by forestry experts. The full classification and change-detection approach is described in Raši et al. (2011). Object-based classification was performed to produce land-cover maps at each sample site. Labels of final objects (minimum mapping unit of 3 ha) were based on the proportional area of the different land-cover categories contained in these objects. Classes were assigned according to the following aggregation rules:

• Dense tree cover (≥ 70% tree cover portion in segment)
• Open tree cover (30-70% tree cover portion)
• Other wooded land (≥ 70% shrubs, forest regrowth)
• Other land cover (including croplands, herbaceous cover and bare land)

The 'tree cover' portion is based on the FAO forest definition criteria for canopy density (≥ 10%) and tree height (≥ 5 m). Natural forests and forest plantations are included in the 'tree cover' class, as is tree cover outside forests, such as in parks or on agricultural land. The 'open tree cover' class includes both a mosaic of dense tree cover (patches of dense forest with cleared areas) and open tree cover fragmented by small crop fields (< 3 ha) or highly degraded by wood harvesting or burning. The term 'other wooded land' is used for any woody vegetation layer less than 5 m high, which includes mainly shrubland, but also shrub-like agricultural crops (such as coffee and tea), vegetation regrowth or plantations with small trees. 'Other land cover' groups together all non-woody vegetation land covers (e.g. herbaceous cover, pastures, non-woody crops, bare soils and settlements), with the exception of 'inland water'. Clouds and their shadows were masked during the preprocessing steps.

Statistics on forest cover and changes in forest cover between the two dates were extracted from the multidate objects. In order to generate the same sampling probability for each site and account for the variation in acquisition date, three successive correction steps were applied to account for: (1) linear adjustment of change matrices to reference dates; (2) replacement of missing data; and (3) weighting of sample sites, based on the curvature of the Earth, for handling unequal sampling intensity. Details and formulae can be found in Eva et al. (2012) and Appendix S1 of the Supporting Information. For the study area, the total land-cover area can be extrapolated from the average proportion using the Horvitz-Thompson direct expansion estimator (see Särndal et al., 1992, for a general discussion of Horvitz-Thompson estimators). The application and utility of the direct expansion method have been demonstrated in various studies (Brink & Eva, 2009; Eva et al., 2010). The total class area Z_c is obtained from Z_c = D · ȳ_c, where D is the total area of the study region and ȳ_c is the average proportion of land cover for a particular class c. The estimation of the variance of the mean was based on a local estimation of the variance (Eva et al., 2012). The annual change rates were calculated by dividing the total area of change by the time period and by the average total area of cover over the two dates. The sampling approach (distribution, sample and unit size) has been designed to provide statistically valid (i.e. with a relatively small standard error) estimates at continental to regional scales. Statistics were therefore reported only at the continental and ecoregion levels.

Accuracy assessment

A thorough assessment of accuracy would require independent reference data which could be considered more reliable and accurate than the data set to be assessed. In the current case, very high-resolution data or field observations would provide appropriate information to assess the accuracy of maps derived from Landsat imagery (Strahler et al., 2006). However, no such reference data exist for Africa, especially for the years 1990 and 2000. Therefore, an accuracy analysis with a more limited scope was conducted through an independent analysis of a subsample of 338 randomly selected Landsat subsets (out of the total of 2043 sample units for Africa). For each of the 338 primary sample units, five points were systematically selected within the 20 km × 20 km subset. The objects falling on these 338 × 5 points were reinterpreted carefully by an independent expert at two dates (1990 and 2000) to create an independent data set for assessing the 'consistency' of the land-cover class labels. The systematic selection resulted in 1552 labelled objects (a few sample units are missing).
As a large majority of these systematically selected objects did not show any change in land cover between 1990 and 2000, we also selected all the changed objects (i.e. objects showing changes according to our interpretation) falling on a denser systematic grid of 9 × 9 points with 1 km spacing for assessing the accuracy of changed objects. A total of 1194 changed objects were selected in this manner (from a population of 25,688 systematic points) and interpreted by the independent expert together and non-differentially with the 1552 objects. When analysing the 1552 systematic polygons used for land-cover accuracy, there is an overall agreement of 87% for the six land-cover classes and 94% for the tree-cover classes. If we were to use these points to estimate areas, the relative difference between areas from our interpretation and from the independent assessment would be 7.9%. When comparing our change results to the independent assessment of 1552 systematic points complemented by a second set of 1194 'change' points, the agreements for the changes between the tree cover classes and other classes are 97.5% and 90.2% for the two sets of points, respectively. In addition, 7.2% of the objects mapped as unchanged were considered to have changed by the independent expert, and conversely, 17.3% of the changed objects were unchanged according to the independent assessment. IDL code was used for the random selection of the Landsat subsets and the systematic selection of objects inside each subset. The tool used for the interpretation of the selected objects was the same as that used for the visual checking of classification results. Statistics of change were extracted from the objects and confusion matrices were produced in Excel.

Status of dry forests and woodlands

Our results show that in the year 2000, the study area was covered by 263.3 million hectares (Mha) of tree cover, of which 118.2 Mha were dense tree cover and 145.1 Mha were open tree cover (Fig. 2). Other wooded land covered 31% of the Zambezian ecoregion and 50% of the Sudanian ecoregion.

Changes in forested area

We estimate that between 1990 and 2000, 3.6 Mha of dense tree cover was converted to other wooded land or other land cover, and 8.7 Mha of open tree cover and 18.7 Mha of other wooded land were converted to other land cover (Table 2). During the same period, the region gained 0.3 Mha of dense tree cover, 2.9 Mha of open tree cover and 9.8 Mha of other wooded land, leading to a total net loss of 3.3 Mha of dense tree cover, 5.8 Mha of open tree cover and 8.9 Mha of other wooded land. In addition, net changes from dense to open tree cover totalled 3.9 Mha. The annual net rates of loss of dense and open tree cover were estimated at 0.28% and 0.39%, respectively, while on average 0.32% of the dense tree cover was converted annually to open tree cover. The annual net rate of other wooded land loss was estimated at 0.20%. The net loss of total tree cover (dense and open) was estimated to be 9.1 Mha. This total tree cover was converted to other wooded land and other land cover in almost the same proportions as the net changes (4.6 Mha and 4.5 Mha, respectively). The gross gain in dense and open tree cover, however, occurred mainly from other wooded land, especially for open tree cover, where conversion from other wooded land accounted for 83% of the total increase from 1990 to 2000 (2.4 Mha vs. 0.5 Mha from other land cover) (Table 2).
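The totals and annual rates reported above follow the estimators described in the Data processing section. The Python sketch below is a simplified illustration of our own: it implements the direct expansion Z_c = D · ȳ_c with an ordinary sample standard error (the study itself uses a local variance estimator) and the annual rate formula; the per-site proportions are simulated, while the final line uses the reported areas to recover the 0.28% annual rate for dense tree cover.

import numpy as np

def direct_expansion(proportions, D):
    """Direct expansion estimator: Z_c = D * mean(y), where y are per-site
    class proportions and D is the total area of the study region.  The
    standard error here uses the ordinary sample variance of the mean."""
    y = np.asarray(proportions, dtype=float)
    return D * y.mean(), D * y.std(ddof=1) / np.sqrt(y.size)

def annual_rate(change_area, area_t1, area_t2, years=10):
    """Annual change rate: total change divided by the time period and by
    the average total cover over the two dates."""
    return change_area / (years * 0.5 * (area_t1 + area_t2))

# simulated per-site proportions of a class at the 796 processed sites
rng = np.random.default_rng(42)
y = rng.beta(2.0, 10.0, size=796)
Z, se = direct_expansion(y, D=700.0)     # D in Mha, so Z and se in Mha
print(round(Z, 1), "+/-", round(se, 1))

# illustration with the reported figures: a 3.3 Mha net loss of dense tree
# cover against roughly 120 Mha of average dense cover gives ~0.0028,
# i.e. the 0.28% annual rate quoted above
print(annual_rate(3.3, 121.5, 118.2))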
Analysis of the distribution of changes in forest cover by ecoregion shows that the Zambezian region accounts for more than 80% of the total change in tree cover (Table 3, Fig. 3). In this ecoregion, 7.6 Mha of total tree cover was lost between 1990 and 2000 (an annual net rate of 0.37%), predominantly from open tree cover, while the Sudanian ecoregion lost 1.5 Mha of total tree cover (an annual net rate of 0.25%). Net changes from dense to open tree cover follow the same pattern, with 3.6 Mha in the Southern Hemisphere and 0.3 Mha in the Northern Hemisphere. In terms of other wooded land, the Sudanian ecoregion represents almost 70% of the total area lost (net and gross). In this ecoregion, 5.9 Mha was converted from other wooded land to other land cover, while 3 Mha was lost in the Zambezian ecoregion. High dynamism in both gains and losses between other wooded land and other land cover has been observed in the same proportion in both ecoregions. Over this period of 10 years, about half of the total area of other wooded land converted to other vegetation cover has been offset by gains from other vegetation cover to other wooded land. Moreover, 75% of the sample sites where changes in other wooded land cover were identified exhibited land conversion in both directions (loss and gain), from and to other land cover. In some sample sites, this intra-site variability results in a very small or zero net change (Fig. 4 gives examples for the Sudanian ecoregion).

Comparison at continental level

For the first time, changes in land cover in the African dry ecosystems were estimated at the continental level using a consistent processing of medium spatial resolution data. To the best of our knowledge, no previous study has provided comparable results at this level of accuracy and consistency. According to our sampling of Landsat observations, the Sudanian and Zambezian ecoregions of Africa are covered by about 700 Mha of dry forests and woodlands (263 and 445 Mha, respectively); i.e. approximately 20% of the continental land is covered by this biome. Compared to the GLC 2000 map for the same study area, this result is slightly superior to the total area covered by the deciduous forests, dry woodlands and shrublands land-cover categories (659 Mha).

Table 1 Estimates of tree cover and other wooded land cover for the year 2000 for the whole dry African region and the two sub-ecoregions (in 10^6 ha and percentage of the total area, both given ± standard error). The distribution of land cover (%) per ecoregion is given in italics. Inland water and other land cover have not been reported.

The total area of land-cover change has been estimated from and to dense tree cover, open tree cover and other wooded land. Between 1990 and 2000, the annual net losses of tree cover and other wooded land were estimated at 0.91 Mha and 0.89 Mha, respectively, with an additional 0.39 Mha converted from dense to open tree cover. Based on the most recent national forestry reports (FAO, 2010a) for the same set of 30 countries and excluding the humid domain, the annual net area of deforestation between 1990 and 2000 is estimated at 3.4 Mha (see Appendix S2 for more detail). This total loss of forest area is significantly higher than our estimates for tree cover and other wooded land losses. The difference is even bigger when compared to the country data reported in the FRA 2000 report (FAO, 2001) for the same time period (see Appendix S2 for a comparison of the two reports).
Based on the most recent national forestry reports (FAO, 2010a) for the same set of 30 countries and excluding the humid domain, the annual net area of deforestation between 1990 and 2000 is estimated at 3.4 Mha (see Appendix S2 for more detail). This total loss of forest area is significantly higher than our estimates for tree cover and other wooded land losses. The difference is even bigger when compared to the country data reported in the FRA 2000 report (FAO, 2001) for the same time period (see Appendix S2 for a comparison of the two reports). In this older report, the annual net forest loss between 1990 and 2000 was estimated at 4.2 Mha. Sudan and Zambia alone accounted for 0.9 and 0.8 Mha, respectively. In the FRA 2010, these two countries revised their estimates downward to 0.6 and 0.3 Mha annual net forest loss, respectively, while Tanzania and Mozambique increased them to 0.4 and 0.2 Mha, respectively. Côte d'Ivoire changed from a net loss of 1.2 Mha (FAO, 2001) to a net gain of 50 Mha between 1990 and 2000 in the last FRA report (FAO, 2010a). In terms of total forest cover in 1990, the estimates have been revised upward from 498 Mha in the FRA 2000 to 542 Mha in the FRA 2010. These discrepancies highlight, once again, the pressing need for better estimates in these regions and emphasize the advantage of our harmonized methodology for forest monitoring in the dry Africa biome. Compared to the humid biome, our estimates of forest loss (0.34% annual net loss of total tree cover and 0.32% annual net loss from dense to open tree cover) are significantly higher than the rates estimated by Duveiller et al. (2008) for Central Africa (0.16% and 0.09%, respectively). The method used by Duveiller et al. (2008) is comparable to the one used in this study in terms of time period (1990-2000), image processing (object-based classification) and sampling approach (systematic sampling every 0.5 degrees). However, their study area did not cover the humid forests of West Africa. For the humid tropical forests of Africa, covering the whole Guineo-Congolian zone and Madagascar, Achard et al. (2002) estimated the annual net deforested area between 1990 and 1997 at 0.71 Mha, plus a further 0.39 Mha of degradation, leading to annual average net rates of 0.36% and 0.21%, respectively. The authors also emphasized the very high local rates found in Madagascar and Côte d'Ivoire. This study was based on visual interpretation of 19 subsets of Landsat scenes located over 'hotspots' of deforestation in the humid tropical forest.

Comparison at regional and local level

According to our study, 84% of the total deforested area occurred in the Zambezian ecoregion. This amount rises to 92% when considering the net loss from dense to open tree cover. Large spatially concentrated areas of loss can be found in Mozambique, Tanzania, Zambia and Angola, while isolated hotspots are present in Nigeria and at the border of the humid forest in Ghana. Several local studies based on Landsat data over the same time period have identified decreases in forests and woodlands belonging to the miombo ecosystem in southern Africa, for example in Mozambique (Jansen et al., 2008), Angola (Cabral et al., 2011) and Malawi (Palamuleni et al., 2011), where results indicate annual local deforestation rates generally higher than 2%. Agricultural expansion was highlighted as the main driver of deforestation, although fuel wood extraction (firewood and charcoal for urban areas) is also contributing considerably to forest degradation and deforestation, especially when the wood is cut by a 'clear-felling system' (Chidumayo & Gumbo, 2010) and the sites are consequently converted to croplands. These local studies usually concentrate on landscapes with fast changes or those under threat, which can explain the higher rates of change measured in these areas.

[Table 3 caption: Estimates of gross loss and gain of forest cover and net change (in 10^6 ha, ± standard error) from 1990 to 2000 for the whole dry African region and the two sub-ecoregions. The annual rates of change are given in italics (total area of change divided by the time period and by the average total area of cover over the two dates).]
In terms of deforestation in wooded land, the Sudanian ecoregion accounted for 67% of the total gross loss. These changes correspond mainly to changes in land use from and to agriculture. Four intensive zones of change can be distinguished: (1) in north-eastern Nigeria and Cameroon; (2) in northern Benin, Togo and Ghana; (3) in southern Mali and Burkina Faso; and (4) in southern Senegal. While no change could be detected in some areas, we noted that some sites were already entirely covered by agriculture in 1990. This is particularly the case in southern Niger, north-western Nigeria and central and northern Burkina Faso. Considerable research has been conducted over the Sudano-Sahelian region since the 1960s (Anyamba & Tucker, 2005; Olsson et al., 2005). These studies used data at much lower resolution and have shown that the region has experienced an increase in vegetation greenness over recent decades. Fensholt et al. (2012) demonstrate that this greening trend of the Sudano-Sahelian region is primarily driven by precipitation and that higher precipitation values occurred mainly during the summer months of the observed period 2002-2007, compared to the 1980s. However, even if highlighting a coupled increase in precipitation and normalized difference vegetation index (NDVI), these low-resolution-based studies fail to provide more detailed information on the qualitative changes in vegetation structure and cover. Very few local studies exist where detailed vegetation dynamics are monitored based on medium-resolution imagery in these regions. Some examples can be found in Ghana (Pabi, 2007) and Mali (Tappan & McGahuey, 2007). Our results show an annual net loss of other wooded land of 0.6 Mha (or 0.21% annual loss) in the Sudanian ecoregion, but also reveal a very dynamic landscape with changes, both positive and negative, occurring in most cases within the same sample site. This high variability reflects the agricultural system of shifting cultivation employed by the majority of farmers in West Africa. In West Africa in particular, a major phase of conversion of natural vegetation such as forests and other wooded land to agriculture occurred before the 1990s. Brink & Eva (2009) and Gibbs et al. (2010), both Landsat-based studies, document increases in agricultural areas of nearly 60%, largely at the expense of forests, over sub-Saharan Africa, beginning in the mid 1970s, with the greatest agricultural expansion occurring in West Africa. However, with current global policies and investments such as the growing bioenergy market (Amigun et al., 2011), the increased incidence of 'land-grabbing' (Cotula et al., 2009) and the 'leakage' or displacement of deforestation to other countries (Meyfroidt et al., 2010), this 'slowing' deforestation trend could again accelerate due to new land-resource needs. Continued observations of forests are required to monitor and assess the status of, changes to and pressures on this valuable resource.

Limits of digital classification and data

If the dry forest areas of Africa are poorly studied, it is not only due to a lower level of interest in these regions but also because of the considerable difficulties in mapping seasonal forests and detecting changes. Mapping vegetation changes in these ecosystems is a challenging task and the risk of misestimation is high.
The same type of deciduous vegetation shows a very different spectral response during the dry leafless season and the (green) wet season. Therefore, images of the same land cover acquired during different seasons can lead to two different automatic land-cover classifications although there is no change in land cover. During the data selection, pairs of images from the same month were given preference in order to reduce overestimation of change due to different vegetation phenology. Nevertheless, imagery at a similar seasonal stage could not be found for over 30% of the data set. The difference in acquisition date and interannual variability resulted in relatively poor-quality outputs coming from the automatic post-classification change detection. Visual interpretation by regional experts was therefore a mandatory and relatively prolonged process in accurately mapping forest cover and changes in forest cover in these ecosystems. Efforts have been made to visually check every polygon of vegetation change using a consistent and harmonized interpretation across countries and over time. In order to reduce the time needed to achieve a similar quality, a more advanced methodology could be investigated, including textural analysis or automatic multitemporal analysis of biannual imagery complemented by the use of annual MODIS (Moderate Resolution Imaging Spectroradiometer) data time series to identify appropriate acquisition dates within the year. Landsat data allow for the detection of most changes occurring at the landscape level, but the spatial resolution does not permit the assessment of differences in species composition or fine-scale degradation of canopy cover. Moreover, for a cost-effective approach, we decided not to map changes smaller than 3 ha. Our study considered two classes of tree cover (dense and open), based on the proportion of tree cover inside an object. The 'open tree cover' class could be composed of patches of dense tree cover with other land cover in between, or it could correspond to tree cover fragmented by small crop fields or highly degraded by wood harvesting or burning. The transition between dense and open tree cover can therefore be interpreted as forest degradation or deforestation, depending on the definition used for these processes.

CONCLUSIONS

African dry forests and woodlands are characterized by relatively poor forest protection and unsustainable management practices caused largely by insufficient knowledge of resources. In addition, the increasing human population with a strong dependence on the environmental services provided by forests represents one of the major challenges to forests and the forestry sector, particularly in the drylands of sub-Saharan Africa (FAO, 2010b). For the first time, a detailed and consistent analysis of changes in land cover has been conducted over the whole African dry region using a global sampling of Landsat imagery and a region-specific harmonized approach. The robust methodology adopted reduced considerably the uncertainties in vegetation cover and its dynamics and enables consistent monitoring over time and space. Between 1990 and 2000, the study indicates that 3.3 Mha of dense tree cover, 5.8 Mha of open tree cover and 8.9 Mha of other wooded land were lost, with a further 3.9 Mha degraded from dense to open tree cover. These results are substantially lower than the 34 Mha previously reported by the national authorities in the FRA 2010 over the same study area (FAO, 2010a).
However, the large surface area of African dry forests and woodlands (three times the area covered by the African humid dense forest), the high vulnerability of millions of people living in and around the forests and the high annual rate of deforestation (0.34%) lend a global significance to the loss of vegetation cover in these ecosystems. This study provides accurate estimates of deforestation and degradation rates at continental and ecoregion levels, and a global picture of hotspots of deforestation. These results will enable improved formulation and effective monitoring of international environmental policies. In particular, such data provide crucial information to accurately estimate forest carbon fluxes and assess the potential opportunity/impact of REDD+ mitigation activities and other sustainable forest management policies in African dry forests and woodlands. The results open the way to more research quantifying the impact of biodiversity conservation law and the achievements set in the Aichi Targets as defined in the CBD at the global scale, or locally by intensifying the number of samples inside and outside protected areas for detailed assessments. New threatened areas can be identified by overlaying our results on the loss of natural vegetation and hence habitat loss with maps of species occurrence and abundance or biodiversity value. Better input on land-cover change will improve land-cover modelling and enhance our understanding of the causes and impact of land-cover change processes at continental scales. By highlighting the magnitude and intensity of changes that occurred in dry African forests and in view of their potential socio-economic and environmental impacts, this study reinforces the need to direct more attention and resources to this threatened but previously poorly studied ecosystem.
2018-04-03T02:55:56.655Z
2013-03-04T00:00:00.000
{ "year": 2013, "sha1": "686cbb990f1da16eec3ea430b14264b37c167e3d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jbi.12084", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bafd3d08bb90e31966621a45fd73d792cbc6ba33", "s2fieldsofstudy": [ "Environmental Science", "Mathematics" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14438617
pes2o/s2orc
v3-fos-license
Spiro annulation of cage polycycles via Grignard reaction and ring-closing metathesis as key steps

Summary

A simple synthetic strategy to the C2-symmetric bis-spiro-pyrano cage compound 7 involving ring-closing metathesis is reported. The hexacyclic dione 10 was prepared from simple and readily available starting materials such as 1,4-naphthoquinone and cyclopentadiene. The synthesis of an unprecedented octacyclic cage compound through an intramolecular Diels-Alder (DA) reaction as a key step is described. The structures of three new cage compounds 7, 12 and 18 were confirmed by single crystal X-ray diffraction studies.

Introduction

Design and synthesis of architecturally intricate cage molecules is a worthwhile challenge. The unique properties associated with carbocyclic cage frameworks are the main reasons for pursuing their synthesis [1,2]. They are valuable synthons to assemble natural as well as non-natural products [3,4]. In addition, cage molecules are interesting targets because of their unusual structural features, such as the deformation of the ideal C-C bond angles, a high degree of symmetry and enhanced ring strain [5-18]. Although several methods are available for the construction of cage compounds [7,23-33], the synthesis of the symmetrical spiro-cage molecule 7 seems to be a synthetic challenge due to the proximity of the two carbonyl groups in dione 10, which provides a hemiketal with various nucleophiles [34-39]. In view of the various applications of cage molecules and the documented difficulties in their synthesis, we conceived a short synthetic route to the C2-symmetric bis-spiro-pyrano cage compound 7. To this end, the Grignard addition and ring-closing metathesis (RCM) are considered as viable options. The retrosynthetic analysis of the target bis-spiro-cage compound 7 is shown in Figure 2. The target compound 7 could be obtained from O-allylation of the Grignard addition product 11 followed by a two-fold RCM sequence. The required cage dione 10 could be constructed in two steps from readily available starting materials such as 1,4-naphthoquinone (9) and cyclopentadiene (8) [40,41].

Results and Discussion

In connection with the synthesis of new cage molecules, we reported a new approach to the hexacyclic dione 10 and related systems via Claisen rearrangement and RCM as key steps [21,30]. Here, we have prepared the cage dione 10 by the known route involving two atom-economic protocols, the Diels-Alder reaction and [2 + 2] photocycloaddition [42-45] (Scheme 1). Later, the hexacyclic cage dione 10 was subjected to a Grignard reaction with commercially available allylmagnesium bromide in diethyl ether. Under these conditions, we realized the formation of hemiketal 12 in 84.7% yield instead of the expected diallylated product 11 (Scheme 2). In a similar fashion, the cage dione 10 was treated with commercially available vinylmagnesium bromide and the hemiketal 13 [46,47] was obtained in 89.2% yield instead of the desired divinylated compound 14 (Scheme 2). The proximity of the carbonyl groups may be responsible for the formation of the hemiketals. The structures of both these heptacyclic hemiketals 12 and 13 have been confirmed by 1H and 13C NMR spectral data and further supported by HRMS data. Finally, their structures have been unambiguously established by single crystal X-ray diffraction studies [48] (Figure 3).
Since our goal was to synthesize the diallylated compound 11, we screened various reaction conditions and finally found that the addition of an ethereal solution of the hexacyclic dione 10 to a freshly prepared allyl Grignard reagent at 0 °C gave the expected diallylated compound 11 in 88% yield (Scheme 3). The Grignard reagent at higher concentration (1.0 M solution) exists as a mixture of dimeric, trimeric and polymeric components. However, the home-made Grignard reagent at low concentration (0.1 M solution) exists mostly in the monomeric form. So, we speculate that the difference in concentration may be responsible for the formation of diol 11 [49-51]. Alternatively, when the diketone was reacted with an excess amount of Grignard reagent, the carbonyl groups were attacked simultaneously by the Grignard reagent, resulting in the formation of diol 11. When an excess amount of the carbonyl-containing substrate was reacted with a limited amount of Grignard reagent, the oxyanion formed by the Grignard reagent attacked the other carbonyl group in a transannular fashion to generate the hemiketal derivatives 12 and 13. Later, the diallyl diol 11 was subjected to an O-allylation sequence under NaH/allyl bromide conditions in DMF to deliver the desired tetraallyl compound 15 (53%) along with the triallyl compound 16 (34.3%) (Scheme 4). Subsequently, the tetraallyl compound 15 was subjected to an RCM sequence with the aid of Grubbs' first generation catalyst (G-I) in dry CH2Cl2. Surprisingly, under these conditions the reaction was found to be sluggish. Therefore, various other reaction conditions were screened to optimize the yields. Finally, we found that Grubbs' first generation catalyst (G-I) in refluxing toluene gave the desired RCM product 7 in 85% yield. Along similar lines, the triallyl compound 16 gave the RCM product 17 in 66% yield (Scheme 4). The structures of the annulated cage compounds 7 and 17 have been confirmed by 1H and 13C NMR spectral data and also supported by HRMS data, with molecular weights of 355.16 for 7 and 343.16 for compound 17, respectively. Furthermore, the structure of the bis-spiro-pyrano cage compound 7 was confirmed by single crystal X-ray diffraction studies [52] (Figure 4). Fortunately, we observed that the liquid compound 16, kept at room temperature for a long time, converted into a solid material. Therefore, we were keen to investigate the reason for this observation. In this context, the 1H and 13C NMR spectra of this compound were recorded again, indicating the occurrence of an intramolecular DA reaction. Later, it was confirmed by single crystal X-ray diffraction studies [53] (Figure 4). Next, the formation of compound 18 was confirmed by an independent synthesis. To this end, triallyl compound 16 was subjected to the intramolecular DA reaction in refluxing toluene to deliver the DA adduct 18 in 80% yield (Scheme 5). Surprisingly, the related system 19, prepared from 12, did not undergo the DA reaction to produce the intramolecular DA adduct 20. Even under prolonged toluene reflux conditions, we did not realize the formation of the required DA product 20 (Scheme 6).

Conclusion

In summary, we have demonstrated a new approach to the intricate C2-symmetric cage bis-spirocyclic pyran derivative 7 through an allyl Grignard reaction and an RCM sequence. The strategy demonstrated here involves an atom-economic process. The synthetic sequence demonstrated here opens up a new route to complex cage targets.
Additionally, the intramolecular DA reaction opens up a new strategy for the synthesis of highly complex cage compounds that are inaccessible by other routes. Studies to extend the scope of the intramolecular as well as intermolecular DA reaction for the synthesis of interesting cage molecules are in progress.

Supporting Information

Supporting Information File 1: Detailed experimental procedures, characterization data and copies of 1H and 13C NMR spectra for all new compounds.
2017-04-02T15:44:00.881Z
2015-08-05T00:00:00.000
{ "year": 2015, "sha1": "7ad730c67282e0f52154b60c65b147aaa3592ac4", "oa_license": "CCBY", "oa_url": "https://www.beilstein-journals.org/bjoc/content/pdf/1860-5397-11-147.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "525d39121218505cb8587badceeabf6d4b4e420c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
3618241
pes2o/s2orc
v3-fos-license
Melatonin inhibits the proliferation of breast cancer cells induced by bisphenol A via targeting estrogen receptor-related pathways

Background: Bisphenol A (BPA) is an estrogen-like chemical widely contained in daily supplies. There is evidence that environmental exposure to BPA could contribute to the development of hormone-related cancers. As is reported in numerous studies, melatonin, an endogenous hormone secreted by the pineal gland, could markedly inhibit estrogen-induced proliferation of breast cancer (BC) cells. In this study, we intended to reveal the effects of melatonin on BPA-induced proliferation of estrogen receptor-positive BC cells.

Methods: We used methyl thiazolyl tetrazolium (MTT), luciferase reporter gene and western blotting assays to test the effect of melatonin on BPA-mediated proliferation of MCF-7 and T47D cells.

Results: MTT and colony formation assays showed that melatonin could significantly abolish BPA-elevated cell proliferation. Meanwhile, BPA-upregulated phosphorylation of ERK and AKT was decreased by melatonin treatment. Mechanistically, we found that BPA was capable of upregulating the protein levels of steroid receptor coactivators (SRC-1, SRC-3), as well as promoting estrogen response element activity. However, the addition of melatonin could remarkably block the elevation of steroid receptor coactivator expression and estrogen response element activity triggered by BPA.

Conclusions: These results demonstrated that melatonin could abrogate BPA-induced proliferation of BC cells. Therapeutically, melatonin could be regarded as a potential medication for BPA-associated BC.

Introduction

Bisphenol A (BPA) is a carbon-based synthetic compound that has been widely used in many daily supplies, including dental sealants, food packaging, and plastics such as polycarbonate and polyvinyl chloride. 1,2 Under heat, acidic and basic conditions, or constant use, these products can release BPA into the environment. 3,4 Exposure to BPA can jeopardize the human immune system and the female reproductive system. 1,5 Breast cancer (BC) is a malignant carcinoma that severely threatens women's health. Many studies have reported that BPA shows estrogen-like properties and is linked with BC development. 6,7 Estrogen and estrogen-like compounds normally interact with the estrogen receptor (ER), and then modulate BC progression through two mechanisms: (i) directly binding to the estrogen response element (ERE) to regulate gene expression; and (ii) working through a rapid non-genomic mechanism including activation of the MAPK and PI3K/AKT pathways. 8,9 As an estrogen-like chemical, BPA can imitate estrogen to interact with ER, resulting in abnormal cell proliferation, migration, and apoptosis. 6,10 Melatonin, a secretion of the pineal gland, is an important endogenous hormone in regulating circadian rhythm. 11 The synthesis and secretion of melatonin can be disrupted by exposure to light at night. 12 However, with the compact modern lifestyle, more and more people are becoming sleep deprived, which disrupts the serum level of melatonin. Several studies have substantiated that disruption of the normal circadian rhythm could lead to a higher risk of hormone-dependent cancer occurrence, including BC. 11,13,14 Multiple in vivo and in vitro studies have demonstrated the effects of melatonin on reducing the incidence and growth rate of BC. 15,16
Further studies show that melatonin could effectively inhibit cell viability and proliferation, and induce apoptosis, in ER-positive (ER+) breast tumors. 17 However, whether melatonin could affect BPA-induced BC cell proliferation remains unknown. To explore the effect of melatonin on BPA-mediated survival and proliferation of BC cells, we used MTT, colony formation, and western blotting assays in this study. We found for the first time that melatonin could inhibit BPA-elevated proliferation of MCF-7 and T47D cells through suppressing ERE activity via declined expression of steroid receptor coactivators (SRCs), and downregulating AKT and ERK phosphorylation.

MTT assay

Cell proliferation ability was determined by MTT assay. MCF-7 and T47D cells were seeded at 3000 cells per well in 96-well plates with five replicates. The cells were incubated for 24 hours to form a monolayer. Then, DMEM containing no FBS and no phenol red was used to substitute for the culture media to starve the cells for 24 hours. The indicated concentrations of BPA (100 nM), E2 (10 nM) or melatonin (1 nM) were added for another four days after starvation. The media were changed every 48 hours during the treatment. Then 10 μL MTT was added to each well for four hours' incubation. Later, the media were discarded and 150 μL dimethyl sulfoxide was added to each well to dissolve the formazan. The absorption values were determined at OD 490 nm by use of an absorbance reader (Enspire 2300 multimode reader; Perkin Elmer, Hopkinton, MA, USA).

Colony formation assay

A total of 500 cells for each well were seeded in 12-well plates containing DMEM with 10% FBS. The media were replaced by FBS- and phenol red-free DMEM with the indicated dose of BPA, E2, and/or melatonin 48 hours later. Cells were maintained in the incubator for 15 days. Distinguishable colonies were stained with crystal violet and counted.

Protein preparation and western blotting assay

Expression of signaling pathway proteins was quantified by western blotting. We plated the indicated cells at a density of 3 × 10^5 cells per well. The next day, the media were replaced by DMEM with no phenol red and no FBS to starve the cells for 24 hours. Then, the indicated BPA and melatonin were added for another 48 hours. The protein extraction and sodium dodecyl sulfate-polyacrylamide gel electrophoresis were carried out as described previously. 18

Transient transfection assay

Cells were seeded in 12-well plates and allowed to grow for 24 hours at 37 °C. Later, culture media were substituted by phenol red-free DMEM containing no FBS. After 12 hours of starving, we transfected the cells with 1 μg of pERE-E1b-luc plasmid and pCMV-β-gal packaged with Lipofectamine 2000 Reagent (Invitrogen, Carlsbad, CA, USA) as the manufacturer's protocol instructed. Twelve hours later, media were renewed with media containing BPA or melatonin as indicated in the previous assays.

Luciferase reporter gene assay

To perform the luciferase assay, we plated MCF-7 and T47D cells in 12-well plates, and then transfected them with the pERE-E1b-luc reporter plasmid and pCMV-β-gal (as control). After transfection for 12 hours, the media were replaced by DMEM with no FBS and no phenol red to starve the cells for 12 hours. Then cells were treated with dimethyl sulfoxide, BPA, E2, and/or melatonin for 24 hours. The reporter gene activity was detected by a Luciferase Reporter Assay Kit (K801-200; BioVision, Mountain View, CA, USA) according to the manufacturer's specification. The fluorescence signal was measured using an Enspire 2300 multimode reader (Perkin Elmer). The assay was processed in triplicate and at least three independent assays were carried out.
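The pCMV-β-gal cotransfection described above implies a per-well normalization of the firefly luciferase signal before treatment groups are compared. The following sketch is our illustration of that normalization, not the authors' analysis script; all readings are hypothetical and in arbitrary units.

```python
def normalized_ere_activity(luc_counts, bgal_activity):
    """Normalize firefly luciferase counts to the beta-gal
    transfection control, well by well."""
    return [luc / bgal for luc, bgal in zip(luc_counts, bgal_activity)]

def fold_induction(treated, vehicle):
    """Fold change of mean normalized activity over the vehicle control."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) / mean(vehicle)

# Hypothetical triplicate readings (arbitrary units):
vehicle = normalized_ere_activity([1200, 1100, 1250], [0.9, 1.0, 1.1])
bpa     = normalized_ere_activity([3400, 3600, 3100], [1.0, 0.9, 1.1])
print(fold_induction(bpa, vehicle))   # ERE activity relative to vehicle
```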
Statistical analysis

Student's t-test or one-way ANOVA was used to process the results in this study with SPSS 13.0 software (SPSS, Chicago, IL, USA); a P-value < 0.05 was regarded as significant.

Melatonin could block the survival and proliferation of ER+ BC cells induced by BPA

To investigate whether melatonin could abolish the survival and proliferation of ER+ cell lines induced by BPA, we used phenol red- and FBS-free DMEM to perform the MTT assay. As shown in Figure 1a, the administration of 100 nM BPA could enhance the survival ratio of MCF-7 and T47D cells, which was similar to the effect reached by 10 nM E2. Another finding worth noting was that this rise could be significantly inhibited by the addition of 1 nM melatonin (Fig 1a). We also performed a colony formation assay to verify the effect of melatonin on MCF-7 and T47D cell survival under BPA treatment. Cells exposed to BPA were able to form more and larger colonies, and melatonin could reverse this change (Fig 1b,c). Thus, we conclude that melatonin could block the survival and proliferation of ER+ BC cell lines induced by BPA.

Melatonin is able to modulate the levels of ER-related key proteins under treatment with BPA

To further investigate the mechanism by which melatonin inhibits BPA-induced BC cell proliferation, we detected the levels of several proteins reported to be involved in ER-related cell proliferation. Phosphorylation of ERK and AKT was obviously elevated in BPA-treated MCF-7 and T47D cells (Fig 2a,b). However, melatonin could significantly abolish the upregulation of phosphorylated ERK and AKT mediated by BPA. Meanwhile, the effect of BPA on p21 could also be abrogated under melatonin treatment (Fig 2a,b). To explore whether the impacts of melatonin on ER+ cells are related to melatonin receptor 1 (MT1), we detected the protein level of MT1. The result showed an upregulation of MT1 under treatment with melatonin, indicating that MT1 might be connected with the melatonin-mediated abrogation of BC cell proliferation.

Melatonin is capable of inhibiting BPA-elevated SRC-1 and SRC-3

It was reported that BPA induced cell proliferation via its estrogen-like property. 6 Based on this property, we next investigated whether melatonin treatment affected BPA-induced proliferation by changing the expression of the ER coactivators SRC-1 and SRC-3. As shown in Figure 3, the upregulation of SRCs induced by BPA could be counteracted by the addition of melatonin.

Melatonin decreases the activity of ERE stimulated by BPA

The change in ER coactivators indicates an alteration of ERE activity. 19 Thus, we considered whether ERE activity was also involved in the modulation by melatonin of BPA-induced proliferation. To verify this hypothesis, we performed a luciferase reporter gene assay in MCF-7 and T47D cells (Fig 4). The cells were transiently transfected with the pERE-E1b-luc plasmid. Luciferase reporter activity demonstrated that melatonin could effectively inhibit the ERE activity promoted by BPA.
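As a rough illustration of the viability quantification and significance testing described in the Methods, the sketch below normalizes hypothetical OD490 readings to the untreated control and applies the one-way ANOVA mentioned in the Statistical analysis subsection (via SciPy here, whereas the study used SPSS); the group values are invented for illustration only.

```python
import numpy as np
from scipy import stats

def relative_viability(od_treated, od_control):
    """MTT viability: OD490 of treated wells relative to the mean
    OD490 of untreated control wells (five replicates per group)."""
    return np.asarray(od_treated) / np.mean(od_control)

# Hypothetical OD490 readings for one 96-well experiment:
control = [0.52, 0.49, 0.55, 0.51, 0.53]
bpa     = [0.78, 0.81, 0.76, 0.80, 0.79]   # 100 nM BPA
bpa_mel = [0.58, 0.55, 0.60, 0.57, 0.56]   # BPA + 1 nM melatonin

# One-way ANOVA across the three groups, P < 0.05 taken as significant:
f_stat, p_value = stats.f_oneway(control, bpa, bpa_mel)
print(relative_viability(bpa, control).mean(), p_value < 0.05)
```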
Discussion

BPA is a food contact material that is used as a component of plastics for the manufacture of food packaging, beverage bottles, kitchenware, the walls of cans, and so on. Exposure to BPA can be constant in daily life, and severely threatens female and male health. 1,6,10 In vitro and in vivo investigations have revealed the connection between BPA and BC, which was found to be caused by the estrogen-like properties of BPA, including interacting with ER and activating ERE. [20-22] Notably, BPA is competent to raise the levels of ERα and the progesterone receptor, recruiting ERα to the promoter of estrogen-responsive genes in a dose-dependent pattern, which was similar to the effect attained by E2 treatment. 23

[Figure 1 caption: Melatonin could block the survival and proliferation of estrogen receptor-positive breast cancer cells induced by bisphenol A (BPA). (a) MTT assay was performed to evaluate cell proliferation. Cells were incubated for 96 hours and the value of optical density at 490 nm was measured as described in the MTT method. The bars represent the average value and the standard deviation of at least three independent experiments; *P < 0.05. The survival and proliferation of MCF-7 (b) and T47D (c) cells were detected by colony formation assay as described above; the proliferative ability was enhanced by BPA (100 nM) and E2 (10 nM), while this enhancement could be blocked by melatonin (1 nM).]

It is reported that oral exposure to BPA could induce mammary carcinoma in rats through activation of the AKT pathway. 24 However, whether there exists a potential agent that suppresses BPA-induced cell proliferation in BC remains unknown. As an endogenous hormone, melatonin not only works as a regulatory factor for circadian rhythm, but is also involved in angiogenesis and tumor growth. [25-27] Among the characteristics of melatonin, we focus on its function as an anti-neoplastic substance in hormone-associated tumors. Our data demonstrated that melatonin was capable of suppressing BPA-induced proliferation in ER+ BC cells. We have known that BPA could mimic estrogen to form a complex with ER and then activate ERE, as well as the MAPK and PI3K/AKT signaling pathways. [28-32] Here, we found that melatonin could abolish BPA-elevated phosphorylation of ERK and AKT. Numerous studies have shown the crucial role of the melatonin receptor MT1 in melatonin-induced anticancer events. [33-36] In the present study, we observed an obvious elevation of MT1 after treatment with melatonin, which may be involved in the abolishment of BPA-associated cell proliferation. Given the estrogen-like properties of BPA, we were concerned as to whether BPA promoted BC cell proliferation through the ER coactivators, SRCs. Strikingly, we found for the first time that BPA was capable of elevating the expression of SRC-1 and SRC-3. Furthermore, the activity of ERE could be efficiently increased by BPA treatment. However, melatonin could significantly disrupt the BPA-elevated SRC expression and ERE activity, which could be regarded as the mechanism by which melatonin suppresses BPA-induced BC cell proliferation. In the present study, we find that melatonin could reverse BPA-induced proliferation of BC cells via reducing the phosphorylation of ERK and AKT, as well as upregulating the level of the cell cycle progression blocker p21. Most importantly, melatonin blocks the activation of ERE triggered by BPA, possibly through downregulating the ER coactivators SRC-1 and SRC-3. Thus, we propose that melatonin could be used as a promising medication for BPA-associated BC progression.
2018-04-03T01:36:40.946Z
2018-01-13T00:00:00.000
{ "year": 2018, "sha1": "98013ceb53000e22ce5f9774201996174eace72f", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.12587", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98013ceb53000e22ce5f9774201996174eace72f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252681208
pes2o/s2orc
v3-fos-license
Underlying Causes and Treatment Modalities for Neurological Deficits in COVID-19 and Long-COVID

With reports of diverse neurological deficits in the acute phase of COVID-19, there is a surge in neurological findings in Long-COVID—a protracted phase of SARS-CoV-2 infection. Very little is known regarding the pathogenic mechanisms of Neuro-COVID in the above two settings in the current pandemic. Herein, we hint toward the possible molecular mechanisms that can contribute to the signs and symptoms of patients with neurological deficits, and possible treatment and prevention modalities, in the acute and chronic phases of COVID-19.

A. INTRODUCTION

Back during the early days of the pandemic, patients admitted to hospitals with COVID-19 exhibited a syndromic picture of neurological deficits caused by SARS-CoV-2. Later, a large group of patients with acute COVID-19 started emerging who continued to experience the effects of neurological damage caused by SARS-CoV-2 to the brain and spinal cord for periods ranging from months to over a year after the acute phase. 1,2 Astonishingly, newer symptoms have been reported to appear, which, with or without the symptoms suffered in the acute phase of COVID-19, have become the basis of the diagnosis of Long-COVID. 2 Very little is known about the molecular mechanism(s) that could explain the neurological signs and symptoms of Long-COVID. For the acute phase of COVID-19, contributing mechanisms have been proposed, which include a combination of direct neuronal injury coupled with the damaging effects of the neuroinflammatory cytokines. 1 The exploration into the pathological basis of the neurological signs and symptoms of Long-COVID remains a pivotal area of current research, as delays in knowing and targeting them could result in permanent disabilities in patients suffering from Long-COVID. 1 Because the brain and spinal cord are rarely biopsied, and such an attempt can itself damage the neurological tissues, delays are expected in unravelling the exact molecular and biochemical basis of the ongoing cellular injury causing the neurological signs and symptoms exhibited in Long-COVID. It is important to remark here that, though microscopic findings of the autopsied brains in Long-COVID are expected to clue toward the end lesions that caused fatality, like zones of gliosis and microinfarcts, they can contribute very little toward the mechanisms that had led to these changes. Likewise, imaging of the central nervous system (CNS) is important but could reveal only gross morphological changes and not mechanisms at the molecular level that contribute to the worsening neurological features seen in Long-COVID. The factors contributing to the chronic phase neurological features in Long-COVID appear most likely to be cascades that had their origins in the acute phase and have continued into chronicity due to inadequate removal of SARS-CoV-2 and/or its proteins, which continue neuronal injury in Long-COVID (Figure 1A,B). Ongoing inflammation and slowly evolving neurodegenerative changes due to SARS-CoV-2-mediated (Figure 1B3) direct neuronal and glial damage appear to be the underlying cause, 1 as it has been seen that neurological signs and symptoms (Figure 1A1) in Long-COVID continue to worsen with time (Figure 1A2) with findings of hypometabolism in neurons. 3 No treatment is available at present for neurological damage reported in Long-COVID.
A delay in the recognition of this neurological damage is considered by many to be because of the myth that it was a psychosomatic disorder without any organic basis for the syndromic presentation.

B. HINTS FROM PATIENTS WITH ACUTE COVID-19

Changes like the deposition of amyloid and tau protein within the neurons and features consistent with demyelination are significant clues in patients who have died of COVID-19 (Figure 1B3). It has been postulated that viral components, like the E protein of SARS-CoV-2, in the brain can induce the activation of microglial TLR2, thereby increasing the susceptibility to Aβ and α-syn deposition in patients. Preliminary postmortem analysis in studies has revealed that the accumulation of phosphorylated α-syn, one of the pathogenic forms of α-syn, was increased in the brains of SARS-CoV-2-infected patients. 4 In addition, the levels of total tau, phosphorylated tau, and glial fibrillary acidic protein, all biomarkers for AD, were elevated in SARS-CoV-2-infected patients with severe symptoms, suggesting a potential correlation between AD and SARS-CoV-2 infection severity. Dysregulation of other proteins like cereblon and autophagy-regulating proteins, their role in the deposition of Aβ, and the deposition of Aβ and α-syn in Long-COVID need to be investigated in order to target them to slow down the neurodegeneration in these patients.

C. CIRCULATORY BASIS OF NEUROLOGICAL DAMAGE IN COVID-19 AND LONG-COVID

Factors that can cause generalized hypoxia and ones that can cause hypoxia at the cellular level leading to progressive neuronal injury (Figure 1C−C1) in the brain and spinal cord appear to be at play in COVID-19 and Long-COVID patients. 1 Progressive pulmonary fibrosis coupled with thrombotic microangiopathy is a known feature in Long-COVID patients. The former causes generalized hypoxia resulting from an inadequate gaseous exchange, and the latter nonocclusive microthrombi causing ischemic hypoxic injury. This can best explain the decline in cognitive functions and the focal progressive neurological deficits that have been reported in Long-COVID patients. An excessive prothrombotic tendency and defective thrombolysis are now being reported in Long-COVID.

D. CSF AND SERUM BIOMARKERS AS CLUES TOWARD NEURONAL DAMAGE IN COVID-19 AND LONG-COVID

The evidence of ongoing neuronal injury hinted at by the chemical and structural nature of the leaking biomarkers in CSF and serum can provide valuable clues toward the nature of the neuronal and glial injury and help target the underlying pathways. Neurofilament light chain, protein S-100B, neuron-specific enolase, lactate dehydrogenase, creatine kinase, and glial fibrillary acidic protein, which is of astrocytic origin, are a few examples that can hint toward an ongoing cellular injury in the neuronal milieu. Research on the discovery of newer biomarkers is needed that can provide more specific information on the nature of the injury to the neurons and glia in the CNS.

[Figure 1 caption fragment: Circulatory causes (C), like chronic brain hypoxia after COVID resulting from thrombosis in the microcirculation, can also cause neuronal atrophy (C1−C2). With neuroinflammation and direct neuronal injury accelerating neurodegeneration, diverse treatment modalities (D) should be tested in clinical trials to prevent neurological disabilities in Long-COVID.]

Inadequate drainage of the CSF in Long-COVID, with disturbances in sleeping hours, can also lead to
the accumulation of excessive metabolites in the CSF that can serve as biomarkers in Long-COVID.

E. ELECTROPHYSIOLOGICAL STUDIES IN COVID-19 AND LONG-COVID

Electroencephalography (EEG), electrocorticography (ECoG), electromyography, and electro-olfactography, if possible, can provide information on discharge patterns in Neuro-COVID-affected patients. As reversible cell injury causes changes in neuronal and muscle cell membrane permeability to ions, abnormal discharge patterns compared to controls (healthy subjects) can hint at an ongoing damaging influence in effect in COVID-19 and Long-COVID patients. Neuropsychological tests and 18F-FDG PET scans should be performed in larger sample sizes, and longitudinal studies in the near future are expected to provide clues toward the ongoing neuronal and glial damage in Long-COVID, as it has been shown that Long-COVID patients exhibited brain hypometabolism in the right parahippocampal gyrus and thalamus. 3

F. ROLE OF THE BREACHED BLOOD−BRAIN BARRIER IN COVID-19 AND LONG-COVID

Damage caused by inflammatory cytokines or resulting from SARS-CoV-2, 1 the Spike protein in particular inducing endothelial injury, can breach the integrity of the blood−brain barrier (BBB) in the acute phase of COVID-19. A continued infection and reinfection cycle, or persistence of the virus causing episodes of viremia, can be a source of entry across the BBB to the CNS of serum factors that are normally not allowed in, inciting glial and neuronal damage. Also, the leak of growth factors that maintain the healthy state of the nervous system into the circulation across a breached BBB can evoke neuronal damage by impairing the ability of the glia, neurons, and their sheath to repair, leading to neuronal damage in the acute and chronic phases of Long-COVID.

G. TREATMENT PROPOSALS FOR NEUROLOGICAL DAMAGE IN COVID-19 AND LONG-COVID

Based on the possible underlying causes that incite neurological damage in COVID-19 and conceivably Long-COVID, treatment plans can be crafted and tested in human clinical trials to make them available as early as possible. Existing anti-neuroinflammatory drugs and chemical compounds that have been used in clinical practice can be repurposed for use in COVID-19 in patients suspected of having neuronal injury caused by ongoing inflammatory processes within the CNS. The viral persistence due to inadequate clearance by the immune system, the ability of SARS-CoV-2 to conceal itself in body spaces to evade the immune response, and reinfections with SARS-CoV-2 are emerging as the possible central mechanisms behind Long-COVID. The latter can be best managed by antiviral agents that can either exert virucidal effects or paralyze the ability of SARS-CoV-2 to infect the host cells. It is worth mentioning that some natural compounds, herbs, and nutraceuticals are known for their anti-inflammatory and virucidal effects and therefore can be candidate drugs for slowing down the progression of neuronal damage in COVID-19 and Long-COVID. Of the many examples of these drugs, agents like hesperidin, cinnamon, baicalin, curcumin, rutin, glycyrrhizin, selenium, epigallocatechin gallate, and quercetin have been reported in published studies to be tested in COVID-19 and have recently been considered for treatment in Long-COVID.
These agents exert the aforementioned anti-inflammatory, antioxidant, cytoprotective, and antiviral effects, with some of them having the added benefit of antithrombotic actions that can limit the thromboembolic complications in both acute and Long-COVID. To avoid the adverse effects of these drugs when given individually, the rationale of drug synergism can be implemented, whereby a single preparation of a mixed formulation of these compounds in reduced doses can be tested in human clinical trials in Long-COVID patients to determine its efficacy. Preclinical studies on immunomodulatory imide drugs (IMiDs), particularly the adamantyl derivatives shown to inhibit TNF-α in animal models of diseases such as ALS, Parkinson's disease (PD), and Alzheimer's disease (AD), have shown promise, which indicates their potential for advancement from the bench to the bedside of patients with neurological diseases in need of treatment, after clinical trials. 5 In Long-COVID, selective IMiDs that cross the BBB and achieve an effective concentration in the CNS, with fewer or tolerable side effects, can be tested for their neuroprotection, antioxidative action, and inhibition of cognitive decline. Drug synergism with agents that exert direct virucidal effects, like nirmatrelvir, which inhibits a SARS-CoV-2 protein to stop the virus from replicating, and ritonavir, which slows down nirmatrelvir's breakdown to help it remain in the body for a longer period at higher concentrations, is now proposed for treatment in Long-COVID to address the removal of the viral persistence mentioned above. Drugs with avid binding to the receptor-binding motif (RBM) of the Spike protein of SARS-CoV-2 are also promising agents, as by preventing the entry of the virus into host cells, 6 these drugs can minimize the syndromic manifestations of Long-COVID in general and, if they reach the CNS, the neurological damage in particular. 1 Treatment or cures by stem cell replacement therapy are limited, as transplantation of embryonic stem cells into the core areas of the CNS would prove challenging. Clinical trials would show the advantage of stem cell therapy if done in the future on patients suffering from Neuro-COVID with focal neuronal loss.

H. DISCUSSION

The neuron is one of the few cells in the human body that is very sensitive to injurious stimuli and has a limited ability to regenerate by self-renewal. A consensus has developed on the fact that neurological deficits do occur in COVID-19 and that they could be due to either direct neurotrophic effects of the virus, a consequence of the accompanying inflammation, or both. 1 The evidence that a very substantial number of people worldwide with lingering symptoms of the initial infection of COVID-19 continue to suffer the symptoms for months 2 to years after the year 2020 is alarming. Long-COVID patients experience bothersome symptoms mostly related to neurological issues. A list of over 20 complaints in Long-COVID can be linked to the nervous system due to possible ongoing injurious processes involving neurons and glia. Factors that could be contributing to this frightening syndromic picture are detailed above, but questions remain: (a) Can the neurological damage be reversed? (b) Can it be slowed down? (c) Would stem cell therapy be any good in neurological deficits for the possible recovery of the lost neurons and cognitive function?
More troubling is the finding of neurodegenerative changes in COVID-19, and therefore expectedly in Long-COVID, which are now reported in an age group that should not have the type of neuronal degeneration that is seen in Alzheimer's 4 and other related neurodegenerative conditions. This, if not addressed and treated in time, is expected to result in a surge of large numbers of humans with permanent cognitive disabilities, the count of which is feared to cross into the millions. Is it time to prescribe neuroprotective drugs at this point? Are drugs that target neuroinflammation of any use in Long-COVID patients? Is the disease feared to progress to conditions like myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS)? These are related questions that need quick and effective answers if we want to expeditiously avoid a surge of disabled individuals. The problem can be addressed by making specialized clinics operational where neurocognitive assessments of Long-COVID patients are done, with the recognition of the fact that this is a disease entity with an organic cause and not a psychological disorder. The more we delay in planning effective measures to combat Long-COVID, the more chances there are that the neurological damage could become irreversible. Research into the dilution or removal of the underlying causes is needed, and needed urgently, as are treatment approaches (Figure 1D) that can slow down neuroinflammation and combat the residual virus in the body in Long-COVID.

I. CONCLUSION AND FUTURE DIRECTIONS

As the two factors that initiate Neuro-COVID in the acute phase of the infection, (a) neuroinflammation and (b) direct neuronal damage by SARS-CoV-2, are known, there should be attempts to target them without any delay. Drugs that have proven to be efficacious anti-neuroinflammatory agents and antivirals against SARS-CoV-2, as mentioned above in our discussion, should immediately undergo human clinical trials for their suitability for Neuro-COVID. Selective neuroprotective IMiDs and known herbs and plant products with previously proven neuroprotective and anti-neuroinflammatory effects, like baicalin, quercetin, diosmin-hesperidin, curcumin, and piperine, should be given a chance in human clinical trials. If efficacious and safe as per their clinical trials, an emergency use authorization (EUA) for their use in Long-COVID and COVID-19 may provide increased chances to slow or stop progression into neurological deficits. ME/CFS, like Long-COVID, is a complex disease with a spectrum of symptoms including neurological ones like impaired memory, unexplained fatigue, postexertional malaise, and sleep disturbance. Understanding Long-COVID pathogenesis and treatment outcomes can pave the way toward the understanding of, and possibly treatment clues for, ME/CFS as well.
2022-10-04T06:17:52.423Z
2022-10-03T00:00:00.000
{ "year": 2022, "sha1": "c7bc3328002f368c03b537747733fd104096b400", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7db8d141d393f984f1fb53d4a3362efc5b357c4a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119193300
pes2o/s2orc
v3-fos-license
Combining symmetry breaking and restoration with configuration interaction: a highly accurate many-body scheme applied to the pairing Hamiltonian

Background: Ab initio many-body methods have been developed over the past ten years to address mid-mass nuclei... As progress in the design of inter-nucleon interactions is made, further efforts must be made to tailor many-body methods.

Methods: We formulate a truncated configuration interaction method that consists of diagonalizing the Hamiltonian in a highly truncated subspace of the total N-body Hilbert space. The reduced Hilbert space is generated via the particle-number projected BCS state along with projected seniority-zero two and four quasi-particle excitations. Furthermore, the extent by which the underlying BCS state breaks U(1) symmetry is optimized in the presence of the projected two and four quasi-particle excitations... The quality of the newly designed method is tested against exact solutions of the so-called attractive pairing Hamiltonian problem.

Results: By construction, the method reproduces exact results for N=2 and N=4. For N=(8,16,20) the error on the ground-state correlation energy is less than (0.006, 0.1, 0.15)% across the entire range of inter-nucleon coupling defining the pairing Hamiltonian and driving the normal-to-superfluid quantum phase transition. The presently proposed method offers the advantage of automatically accessing the low-lying spectroscopy, which it does with high accuracy.

Conclusions: The numerical cost of the newly designed variational method is polynomial (N$^6$) in system size. It achieves an unprecedented accuracy on the ground-state correlation energy, effective pairing gap and one-body entropy, as well as on the excitation energy of low-lying states, of the attractive pairing Hamiltonian. This constitutes a strong enough motivation to envision its application to realistic nuclear Hamiltonians in view of providing a complementary, accurate and versatile ab initio description of mid-mass open-shell nuclei in the future.

I. INTRODUCTION

Methods to solve the N-body Schroedinger equation must cope with two specific attributes of inter-nucleon interactions that are responsible for the non-perturbative character of the nuclear many-body problem [1,2]. The first trait relates to the strong inter-nucleon repulsion at short distances that translates into large off-diagonal coupling between states characterized by low and high (relative) momenta, i.e. the first element of non-perturbative physics is of ultra-violet nature and manifests itself in all nuclei independently of the details of their structure. The second trait relates to the unnaturally large nucleon-nucleon scattering length in the S-wave/spin-singlet channel and to the tendency of inter-nucleon interactions to induce strong angular correlations between nucleons in the internal frame of the nucleus. This second element of non-perturbative physics is of infra-red character and only manifests itself in sub-categories of nuclei, i.e. in singly open-shell and doubly open-shell nuclei. Off-diagonal coupling between low and high (relative) momenta can be tamed down, at the price of inducing (hopefully weak) higher-body forces, by pre-processing the nuclear Hamiltonian via, e.g., a unitary similarity transformation [3]. Based on the transformed Hamiltonian, dynamical correlations 1 can be dealt with at a polynomial cost via standard many-body techniques typically based on systematic particle-hole-type expansions.
Corresponding ab initio methods, i.e. many-body perturbation theory (MBPT) [4], coupled cluster (CC) theory [5], self-consistent Green's function (SCGF) theory [6,7] and in-medium similarity renormalization group (IMSRG) theory [8], have been developed and implemented with great success in the last ten years to deal with doubly-(sub)closed-shell nuclei and their immediate neighbors. Strong, i.e. non-dynamical, correlations induced in singly and doubly open-shell nuclei are of a different nature and require specific attention. Several routes are possible, including full configuration interaction (CI) techniques [9,10]. To proceed on the basis of methods whose cost scales polynomially with the number of interacting nucleons, one option consists in exploiting the spontaneous breaking of symmetries induced by non-dynamical correlations at the mean-field level. This rationale allows one to incorporate a large part of the non-perturbative physics into a single product state that can serve as a reference for many-body expansions dealing efficiently with dynamical correlations. While traditionally developed within the frame of effective nuclear mean-field (i.e. energy density functional) approaches [11-13], this idea has recently been embraced to develop and implement ab initio Gorkov SCGF [14-16], multi-reference IMSRG [17,18] and Bogoliubov CC [19] many-body techniques to tackle pairing correlations 2. This is achieved by allowing the reference state to break the U(1) global gauge symmetry associated with particle-number conservation. While the restoration of the broken symmetry, eventually necessary in any finite quantum system, has been formulated for MBPT [24,25] and CC techniques [25], it has only been implemented so far in the context of nuclear ab initio calculations via MR-IMSRG techniques [17,18]. Methods based on a symmetry-breaking reference state are currently allowing a breakthrough in the ab initio description of (singly) open-shell nuclei and are putting state-of-the-art inter-nucleon interactions to the test [16,26,27]. In the most advanced truncation schemes implemented so far, this is achieved by allowing a few percent error on the ground-state correlation energy 3. As progress on inter-nucleon interactions is made, and as the questions one wishes to answer are refined in connection with increasingly available experimental data, further efforts must be made to tailor many-body methods (with minimized numerical costs) that can reach higher precision along with more observables/quantum states/nuclei. It is the objective of the present work to design and test a new many-body scheme that has the potential to do so.

[Footnote 1: The denomination of dynamical and non-dynamical correlations is presently used in the quantum chemistry sense.]
[Footnote 2: While the formation of Cooper pairs is primarily driven by the unnaturally large nucleon-nucleon scattering length in the spin-singlet isospin-triplet channel, it is also partly due to indirect processes associated with the exchange of collective vibrations [20-23].]
[Footnote 3: We are only quoting here the systematic uncertainty associated with the truncation of the many-body expansion scheme.]

In order to characterize the potential of new many-body schemes, one can test them against solutions of exactly solvable many-body Hamiltonians. To be in a position to draw meaningful conclusions, the schematic Hamiltonian must be significantly non-trivial and capture enough key physics of the real system of interest.
In view of the above discussion, we presently focus on the so-called attractive pairing Hamiltonian [28-31], whose main merit is to effectively model the superfluid character of finite nuclear systems or of any other mesoscopic fermionic superfluid system. More specifically, the dynamics of N interacting fermions is governed by the Hamiltonian

H(g) = \sum_{k=1}^{\Omega} e_k ( a^\dagger_k a_k + a^\dagger_{\bar{k}} a_{\bar{k}} ) - g \sum_{k,k'=1}^{\Omega} a^\dagger_{k'} a^\dagger_{\bar{k}'} a_{\bar{k}} a_k ,

where Ω denotes the number of doubly degenerate (e_k = e_k̄) pairs of time-reversed single-particle states (k, k̄) characterized by the creation operators (a†_k, a†_k̄). The double degeneracy of single-particle states is meant to mimic (even-even) doubly open-shell nuclei treated via the spontaneous breaking of SO(3) rotational symmetry, i.e. exploiting explicitly the concept of deformation. In the present study, the distance between successive pairs of degenerate levels is constant, i.e. e_{k+1} − e_k ≡ Δe, and the system is systematically studied at "half-filling", i.e. N = Ω. Modeling, e.g., rare-earth nuclei, one typically has Δe ∼ 500 keV. The coupling strength g ∈ [0, +∞[ characterizes the attractive pairing interaction that scatters pairs of nucleons from any given set of degenerate single-particle states to any other set with a constant probability amplitude. As g increases, the system is known to undergo a phase transition from a normal to a superfluid system at a critical value g = g_c that depends on the number of particles N. Eventually, the relevant parameter of the model is the ratio g/Δe, which measures the pairing strength relative to the spacing between successive pairs of single-particle states. For rare-earth nuclei, one typically has g/Δe ∼ 0.5.

Particular attention must be paid to the highly accurate method recently proposed in Refs. [58,59]. Reconciling the performance of CC doubles in the normal phase with the merit of VAP-BCS in the strongly interacting regime, this method, coined polynomial similarity transformation (PoST), reaches less than 1 % error on the ground-state correlation energy E_c, defined as the total energy minus the Hartree-Fock (HF) energy obtained by filling the N lowest levels, for all interaction strengths and moderate particle numbers [59]. In view of this recent development, we presently wish to design a many-body scheme that scales polynomially with the number of interacting fermions and whose results display an error on the ground-state correlation energy that is better than 1 % for any interaction strength. To reach this ambitious objective, the variational approach introduced below builds on Ref. [24] and combines two key characteristics:

1. U(1) symmetry breaking and restoration, with the breaking being either (a) spontaneous or (b) optimized;
2. truncated CI diagonalization.

While Ref. [24] displayed encouraging results by exploiting spontaneous U(1) symmetry breaking and restoration within a perturbative approach, the present work strongly improves on them by exploiting truncated CI techniques and by optimizing the extent by which the symmetry is broken prior to being restored. In addition, a strong asset of the presently proposed method is to provide a highly accurate account of low-lying excited states. The approach being based on a wave-function ansatz, observables besides the energy can easily be accessed, as exemplified by the computation of the effective pairing gap and the one-body entropy.

The paper is organized as follows. Section II displays the formalism in such a way that several standard methods can be easily recovered as particular cases. Sections III-VI provide extensive numerical results and compare them to exact solutions as well as to those obtained from existing approximate methods. Eventually, results for low-lying excited states are discussed. Section VII concludes the present work and elaborates on some of its perspectives.
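Before turning to the formalism, it is useful to note that the seniority-zero sector of H(g) is small enough at moderate Ω to be diagonalized exactly by brute force. The sketch below is ours, not the authors' code (the paper's exact benchmarks instead rely on Richardson's equations); the function name and the picket-fence choice e_k = k Δe are illustrative. It builds the Hamiltonian matrix in the basis of pair configurations and extracts the ground-state correlation energy E_c:

    import itertools
    import numpy as np

    def pairing_hamiltonian(Omega, Npairs, de=1.0, g=0.5):
        # Seniority-zero basis: Npairs pairs distributed over Omega doubly
        # degenerate levels e_k = k*de. H = sum_k e_k N_k - g sum_{kk'} P+_k' P_k.
        levels = de * np.arange(Omega)
        basis = list(itertools.combinations(range(Omega), Npairs))
        index = {c: i for i, c in enumerate(basis)}
        H = np.zeros((len(basis), len(basis)))
        for i, conf in enumerate(basis):
            H[i, i] = 2.0 * levels[list(conf)].sum() - g * Npairs  # k' = k terms
            occ = set(conf)
            for k in conf:                                # scatter the pair at k ...
                for kp in set(range(Omega)) - occ:        # ... to any empty level k'
                    new = tuple(sorted((occ - {k}) | {kp}))
                    H[index[new], i] -= g
        return H

    H = pairing_hamiltonian(Omega=12, Npairs=6, de=1.0, g=0.5)
    E = np.linalg.eigvalsh(H)
    print("E_0 =", E[0], " E_c =", E[0] - H[0, 0])  # H[0,0] is the HF energy

The basis dimension grows combinatorially, as C(Ω, N/2), which is precisely what motivates the truncated schemes developed next.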
A. Basis construction

We first consider the BCS solution for H(g) carrying even number-parity as a quantum number (it is implicitly assumed here that the Hamiltonian is replaced by the grand potential H(g) − λA, with λ the chemical potential used to impose that the BCS solution has the right number of particles on average; the particle-number operator reads A = \sum_{k=1}^{\Omega} ( a^\dagger_k a_k + a^\dagger_{\bar{k}} a_{\bar{k}} )). It can be written as

|\Phi(g)\rangle = \prod_{k=1}^{\Omega} [ u_k(g) + v_k(g) a^\dagger_k a^\dagger_{\bar{k}} ] |0\rangle ,

where the coefficients (u_k(g), v_k(g)), satisfying u_k^2(g) + v_k^2(g) = 1 for all k, are obtained by solving standard BCS equations [60]. Quasi-particle creation operators, whose hermitian conjugates annihilate |Φ(g)⟩, are obtained via the BCS transformation

\beta^\dagger_k = u_k(g) a^\dagger_k - v_k(g) a_{\bar{k}} ,
\beta^\dagger_{\bar{k}} = u_k(g) a^\dagger_{\bar{k}} + v_k(g) a_k .

Normal-ordering H(g) with respect to |Φ(g)⟩ allows one to rewrite it under the form

H(g) = E_0(g) + H_0(g) + H_1(g) ,

where the unperturbed part reads as

H_0(g) = \sum_{k=1}^{\Omega} E_k(g) ( \beta^\dagger_k \beta_k + \beta^\dagger_{\bar{k}} \beta_{\bar{k}} ) .

The real number E_0(g) denotes the approximate BCS ground-state energy, whereas E_k(g) = \sqrt{ (e_k - \lambda)^2 + \Delta^2(g) } defines the BCS quasi-particle energies, with Δ(g) the BCS pairing gap [61]. The explicit expression of the residual interaction H_1(g) can be obtained accordingly [60].

The BCS vacuum and the set of quasi-particle (qp) excitations built on top of it form a complete eigenbasis B(g) of H_0(g) over Fock space F. Being interested in eigenstates of even-even systems with seniority zero, the only basis states of actual interest are those involving pairs of time-reversed quasi-particle excitations, for which the shorthand notation

|\Phi^{k\bar{k} l\bar{l} \cdots}(g)\rangle \equiv \beta^\dagger_k \beta^\dagger_{\bar{k}} \beta^\dagger_l \beta^\dagger_{\bar{l}} \cdots |\Phi(g)\rangle

is used. Eventually, all basis states can be written as BCS vacua of the form

|\Phi^{k\bar{k}\cdots}(g)\rangle = \prod_{m=1}^{\Omega} [ u^{k\bar{k}\cdots}_m(g) + v^{k\bar{k}\cdots}_m(g) a^\dagger_m a^\dagger_{\bar{m}} ] |0\rangle .

This notation implicitly includes the BCS vacuum as a particular case when using the BCS coefficients (u_m(g), v_m(g)). For the excited state |Φ^{kk̄⋯}(g)⟩, one has u^{kk̄⋯}_m(g) ≡ u_m(g) and v^{kk̄⋯}_m(g) ≡ v_m(g) for m ∉ {k, l, ⋯}, whereas u^{kk̄⋯}_m(g) ≡ −v_m(g) and v^{kk̄⋯}_m(g) ≡ u_m(g) for m ∈ {k, l, ⋯}.

While the eigenstates of H_0(g) form a complete orthonormal basis of Fock space, they break the U(1) symmetry associated with particle-number conservation, i.e. they are not eigenstates of the particle-number operator A. In order to recover states belonging to the Hilbert space H_N associated with the physical number N of nucleons in the system, the projection operator

P^N \equiv \frac{1}{2\pi} \int_0^{2\pi} d\varphi \, e^{i\varphi(A-N)}     (11)

can be applied to generate the set of projected qp excitations

|\Phi^{k\bar{k}\cdots}_N(g)\rangle \equiv P^N |\Phi^{k\bar{k}\cdots}(g)\rangle ,

forming a non-orthogonal, overcomplete basis B_N(g) of H_N. While |Φ^{kk̄⋯}_N(g)⟩ directly originates from |Φ^{kk̄⋯}(g)⟩, it is worth noting that the former is not an eigenstate of H_0(g). For g > g_c, each basis state defined above builds in the breaking of the U(1) symmetry prior to performing its exact restoration. As such, each state |Φ^{kk̄⋯}_N(g)⟩ is a complex entanglement of 0p-0h, 2p-2h, ⋯, Np-Nh excitations with respect to the HF reference state, as nicely illustrated by Eq. (5) of Ref. [58]. For g < g_c, however, the BCS vacuum actually reduces to the HF reference state, such that each state |Φ^{kk̄⋯}_N(g)⟩ identifies with one n-particle/n-hole (np-nh) excitation on top of it belonging to H_N (in this case, the action of P^N is superfluous, such that this is already true of the unprojected basis states |Φ^{kk̄⋯}(g)⟩). It is worth noting that certain combinations of qp excitations do not actually have any np-nh counterpart in H_N for g < g_c. For example, a 2qp excitation of time-reversed states tends towards a Slater determinant belonging to H_{N±2} below g_c.
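For concreteness, the BCS coefficients (u_k, v_k) entering the above construction follow from the gap and particle-number equations. The sketch below is ours; it solves them for the picket-fence spectrum, neglecting for simplicity the self-energy contribution to the quasi-particle energies, and above g_c a sensible initial guess is needed to land on the superfluid (Δ > 0) solution:

    import numpy as np
    from scipy.optimize import fsolve

    def bcs(levels, g, N):
        # Solve 1 = (g/2) sum_k 1/E_k and N = sum_k [1 - (e_k - lam)/E_k]
        # with E_k = sqrt((e_k - lam)^2 + Delta^2); returns (Delta, lam, u, v).
        def eqs(x):
            Delta, lam = x
            E = np.sqrt((levels - lam) ** 2 + Delta ** 2)
            return [1.0 - 0.5 * g * np.sum(1.0 / E),
                    N - np.sum(1.0 - (levels - lam) / E)]
        Delta, lam = fsolve(eqs, x0=[1.0, np.mean(levels)])
        E = np.sqrt((levels - lam) ** 2 + Delta ** 2)
        v2 = 0.5 * (1.0 - (levels - lam) / E)
        return Delta, lam, np.sqrt(1.0 - v2), np.sqrt(v2)

    levels = 1.0 * np.arange(16)            # Omega = 16 picket fence, de = 1
    Delta, lam, u, v = bcs(levels, g=0.6, N=16)
    print("pairing gap:", Delta)

Below g_c only the trivial Δ = 0 solution survives and the BCS vacuum collapses onto the HF determinant, in line with the discussion above.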
B. Truncated CI method

We wish to approximate eigenstates of H(g), starting with its ground state, via an exact diagonalization within the subspace of H_N spanned by a subset of states of B_N(g). A similar idea was used in a different context, on the basis of projected QRPA states [62], to estimate transfer probabilities between many-body states with different particle numbers. In the present case, eigenstates are approximated by the ansatz

|\Psi_N(g)\rangle \equiv c |\Phi_N(g)\rangle + \sum_{k} c_k |\Phi^{k\bar{k}}_N(g)\rangle + \sum_{l<m} c_{lm} |\Phi^{l\bar{l} m\bar{m}}_N(g)\rangle ,     (13)

i.e. it mixes the particle-number-projected BCS vacuum with projected 2qp and 4qp excitations (the coefficient of the projected BCS vacuum is not set a priori because we keep the possibility to remove it altogether from the variational ansatz, in which case c = 0). The number of states in the linear combination is

n_{st} = 1 + \Omega + \Omega(\Omega - 1)/2 ,     (14)

with N = Ω in the present application. The many-body state is determined variationally, i.e. through

\delta \left[ \frac{\langle \Psi_N(g) | H(g) | \Psi_N(g) \rangle}{\langle \Psi_N(g) | \Psi_N(g) \rangle} \right] = 0 ,

where the minimization is performed with respect to the set of coefficients {c*_α} ≡ {c*, c*_k, c*_{lm}}, where α scans all states in the linear combination defining |Ψ_N(g)⟩ in Eq. (13). This leads to the n_st coupled equations

\sum_{\beta} [ \langle \Phi_\alpha(g) | H(g) | \Phi_{\beta N}(g) \rangle - E \langle \Phi_\alpha(g) | \Phi_{\beta N}(g) \rangle ] c_\beta = 0     (15)

(given that P^N is a projector (P^N P^N = P^N) and that H(g) commutes with it ([P^N, H(g)] = 0), it is sufficient to apply the projector to only one of the two states involved in any matrix element of the overlap or Hamiltonian matrices; this is the reason why the subscript N is omitted in the bra ⟨Φ_α(g)| entering Eq. (15)). Matrix elements of H(g) between the basis states, as well as the overlaps between the latter, can be evaluated using standard projection techniques. Explicit forms are given in the appendix of Ref. [24]. Equation (15) is nothing but the Schroedinger equation represented in a finite-size non-orthogonal basis. It is solved by first diagonalizing the overlap matrix through a unitary transformation U_N(g), leading to a new set of orthonormal states that is eventually used to diagonalize H(g). The number of new orthonormal states is of course equal to n_st. However, the size of the basis must actually be reduced prior to diagonalizing H(g) by removing states with eigenvalues below a chosen threshold ε, i.e. states that encode the redundancy of the initial non-orthogonal overcomplete basis. We will illustrate this point in Sec. III C.
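The reduction of the redundant basis followed by the diagonalization of H(g) described above amounts to the canonical orthogonalization of a generalized eigenvalue problem. A minimal sketch (ours), where eps plays the role of the threshold ε:

    import numpy as np

    def solve_nonorthogonal_ci(H, S, eps=1e-8):
        # Generalized eigenproblem H c = E S c in a redundant, non-orthogonal
        # basis: (1) diagonalize the overlap matrix S; (2) drop directions with
        # eigenvalue below eps (basis redundancy); (3) diagonalize H in the
        # surviving orthonormalized subspace.
        zeta, U = np.linalg.eigh(S)
        keep = zeta > eps
        X = U[:, keep] / np.sqrt(zeta[keep])      # canonical orthogonalization
        E, C = np.linalg.eigh(X.conj().T @ H @ X)
        return E, X @ C                           # energies and coefficients in the original basis

Discarding overlap eigenvalues below eps is what removes the redundancy of the overcomplete basis; the surviving columns of X define the n_sub orthonormal states in which H(g) is finally diagonalized.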
C. Particular cases

One must note that the above scheme incorporates several existing approaches as particular cases:

1. When limiting ansatz (13) to the sole first term, one recovers the particle-number projection-after-variation BCS (PAV-BCS) method. In this case, there is obviously no diagonalization to perform.

2. For g < g_c, i.e. in the normal phase, the scheme reduces to a standard truncated CI method [50-52], limited to 2p-2h configurations in the present case. In this case, the number of states does not comply with Eq. (14), i.e. it is replaced by

n_{st} = 1 + (N/2)(\Omega - N/2) ,

which, for large Ω, corresponds to essentially half of the cardinal defined in Eq. (14). The space spanned by the truncated basis is thus not continuous through g_c. Consequences will be discussed in Sec. III.

3. When computing the mixing coefficients {c_α} from second-order (particle-number unprojected) MBPT, the diagonalization step is avoided. The residual interaction H_1(g) contains terms with four quasi-particle operators [60] (terms with two creation or two annihilation operators are zero when |Φ(g)⟩ satisfies the BCS equations [19], i.e. in Moller-Plesset MBPT [63]). Consequently, H_1(g) only couples the BCS vacuum |Φ(g)⟩ to 4qp excitations |Φ^{kk̄ll̄}(g)⟩ at second order. As a result, the coefficients c_k associated with 2qp excitations are identically zero at that order. Ansatz (13) can be implemented either in the absence of particle-number projection, in which case one works within a standard MBPT scheme, or in its presence, in which case one works within a particle-number-projected MBPT scheme that we coin MBPT^N (the mixing coefficients {c_α} are still computed from MBPT without particle-number projection; note that an alternative particle-number-restored MBPT based on a projective formula has recently been proposed [25] but not yet applied). Of course, standard second-order MBPT based on a HF reference state is recovered from MBPT^N at g < g_c. It happens that MBPT and MBPT^N have been applied to the pairing Hamiltonian in Ref. [24] and serve as an inspiration for the generalizations introduced in the present work. Corresponding results will be briefly recalled in Sec. III.

D. Optimized order parameter

Let us introduce one additional level of improvement. At a given value of the coupling strength g, the states forming the non-orthogonal overcomplete basis B_N(g) have so far been naturally built from the BCS solution |Φ(g)⟩ of H(g). Consequently, the extent by which |Φ(g)⟩ (possibly) breaks U(1) symmetry, as characterized by its pairing gap Δ(g), is in one-to-one correspondence with the coupling g defining the physical Hamiltonian. However, it is not at all obvious that the subpart of the resulting basis B_N(g) used in the truncated CI calculation optimally captures the physics of the Hamiltonian H(g). At each "physical" value g, it is thus possible to foresee the diagonalization of the Hamiltonian H(g) in the (0qp, 2qp, 4qp) subpart of B_N(g_aux) associated with an auxiliary value g_aux, i.e. with the basis built from the BCS solution |Φ(g_aux)⟩ of an auxiliary pairing Hamiltonian H(g_aux) (to some extent, performing standard truncated CI calculations based on a basis of np-nh Slater determinants already exploits this idea when dealing with H(g) at g > g_c, i.e. it is nothing but using the basis B_N(g_aux) built from the reference state corresponding to g_aux < g_c in connection with a Hamiltonian H(g) defined by g > g_c). Following this line of thinking, one can scan all values g_aux ∈ [0, +∞[ in order to find the optimal auxiliary coupling g_opt. This extra step consists of spanning a larger manifold of states than when working at g_aux = g. The method is thus of variational character, i.e. the optimal auxiliary coupling g_opt is obtained at the minimum of the curve E_{g_aux}(g), produced by repeatedly applying the truncated CI calculation, i.e. by solving Eq. (15) for the Hamiltonian H(g) while varying the auxiliary coupling g_aux defining the basis states. This scheme extends the so-called restricted variation-after-projection (RVAP) method designed within the frame of symmetry-restored nuclear energy density functional calculations [64]. Generically speaking, the idea is to scan the symmetry-restored energy as a function of a collective variable that monitors the extent by which the unprojected reference state breaks the symmetry. In the present case of U(1) symmetry, this order parameter is nothing but the pairing gap Δ(g_aux) associated with the BCS reference state |Φ(g_aux)⟩. Typically, tuning the value of the gap can be done by solving the BCS equations while adding a Lagrange constraint term. In the present case, however, Δ(g_aux) is a monotonic function of g_aux (see Fig. 9), such that one can directly use g_aux as a collective variable and solve for H(g_aux). The novelty of the presently proposed scheme is that the optimal order parameter g_opt of the reference state is determined not only in the presence of the symmetry restoration but also in the presence of the mixing with projected 2qp and 4qp states, i.e. at the level of the truncated CI calculation itself. As discussed below, this significantly impacts the value of g_opt and the associated quality of the variational ansatz.
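Operationally, the optimization described above is an outer loop over the auxiliary coupling. In the sketch below (ours), build_matrices is a hypothetical helper returning the Hamiltonian and overlap matrices of H(g) in the basis built from |Φ(g_aux)⟩, and solve_nonorthogonal_ci is the routine sketched earlier:

    import numpy as np

    def optimize_order_parameter(g, g_aux_grid, build_matrices, solve_nonorthogonal_ci):
        # RVAP-like scan: for a fixed physical coupling g, vary the auxiliary
        # coupling g_aux defining the projected qp basis and retain the value
        # minimizing the lowest CI eigenenergy E_{g_aux}(g).
        g_opt, E_opt = None, np.inf
        for g_aux in g_aux_grid:
            H, S = build_matrices(g, g_aux)
            E0 = solve_nonorthogonal_ci(H, S)[0][0]
            if E0 < E_opt:
                g_opt, E_opt = g_aux, E0
        return g_opt, E_opt

    # e.g., fewer than ~10 points around g are needed in practice (cf. Sec. V):
    # g_opt, E_opt = optimize_order_parameter(0.4, np.linspace(0.2, 0.8, 7), ...)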
III. RESULTS

A. Perturbation theory

For reference, we first illustrate the MBPT and MBPT^N methods employed in Ref. [24] and briefly introduced in Sec. II C above. Second-order results are displayed in Fig. 1 for N = 20 and compared with VAP-BCS results [58]. Three main lessons can be learnt from these calculations:

1. Second-order corrections systematically avoid the collapse of the correlation energy that occurs as g decreases through g_c in BCS or PAV-BCS calculations [48]. Of course, the VAP-BCS method also avoids the collapse, but at the price of a significantly more sophisticated calculation.

2. Particle-number projection drastically improves over unprojected results. Given that standard Rayleigh-Schroedinger MBPT is based on a projective energy formula while MBPT^N of Ref. [24] relies on a hermitian expectation value, we also display the latter in the absence of the projection in order to disentangle its effect. For g < g_c, the improvement solely comes from using the expectation-value formula, given that the reference state does not break U(1) symmetry in the first place, such that the symmetry restoration cannot have any effect. For g > g_c, one sees that using the expectation-value formula does not improve the results by itself and even deteriorates MBPT results obtained from a projective formula. Thus, the very significant improvement seen in the solid line does originate from the particle-number projection.

3. Except in the vicinity of g_c, MBPT^N results are better than VAP-BCS ones, both in weak and strong coupling regimes. In particular, the results display the correct limit as g → 0, contrary to VAP-BCS calculations [58]. This is not surprising, given that standard second-order MBPT is known to converge to the right limit as g tends to zero. More surprisingly, MBPT^N converges very rapidly towards exact results as g increases beyond g_c. As a matter of fact, for g > 0.6, results are even better than those of the very accurate PoST approach of Ref. [59] that, by construction, matches VAP-BCS in the strong pairing regime.

The quality of these results, obtained at a low computational cost over both weakly- and strongly-coupled regimes, teaches us that the space spanned by the states involved, i.e. the particle-number-projected BCS vacuum and particle-number-projected 4qp excitations, contains key information to treat the physics of superfluid systems. Indeed, the spontaneous breaking of the symmetry, followed by its further restoration, allows one to resum non-dynamical correlations efficiently, whereas corrections associated with 4qp excitations capture a large part of the dynamical correlations. Still, the results are significantly above the 1 % error on the correlation energy that constitutes our present objective. One natural generalization of the approach would be to include higher-order perturbative corrections.
However, the rapid increase of the dimensionality of the probed Hilbert space would translate into a severe augmentation of the computational cost. Alternatively, we move from a perturbative to a non-perturbative approach via a diagonalization method, while keeping the dimension of the probed Hilbert space essentially the same.

B. Diagonalization

At each g, H(g) is diagonalized within the space spanned by the non-orthogonal set of projected 0qp, 2qp and 4qp states built out of the BCS state |Φ(g)⟩, as explained in Sec. II B. As discussed in Sec. II C, the calculation reduces, for g < g_c, to a diagonalization in a truncated basis made of 0p-0h and 2p-2h configurations built out of the HF reference state. The error on the correlation energy is displayed in Fig. 2 for N = Ω = 16. The diagonalization greatly improves the accuracy for g > g_c compared to the perturbative calculation discussed above. The error is below the targeted 1 % for all couplings beyond g_c and quickly drops far below it as g moves away from the BCS threshold. By contrast, results from the truncated CI calculation are similar to the perturbative calculation below the threshold. Eventually, a discontinuity of the result occurs at g = g_c.

The last feature can be qualitatively understood from the discontinuity of the basis dimension as g_c is approached from below or from above, as already alluded to in Sec. II C. By construction, the basis contains 0p-0h and 2p-2h Slater determinants of H_N below g_c. While a subset of 4qp states converges towards the 2p-2h Slater determinants when approaching g_c from above, others become more and more dominated by Slater determinants belonging to Hilbert spaces associated with neighboring (even) numbers of particles. Still, residual components corresponding to np-nh Slater determinants of H_N are extracted from them by projection. Consequently, the limit of the truncated CI calculation as g_c is approached from above corresponds to a standard truncated CI calculation associated with a basis containing higher-order np-nh Slater determinants beyond 2p-2h configurations, the basis size being approximately twice the one below threshold. This feature greatly improves the quality of the method and illustrates the benefit of starting from symmetry-broken (and restored) basis states above g_c. Eventually, the proposed method is competitive with the PoST method of Ref. [59] and even quickly becomes superior as one enters the strongly coupled regime. Still, the strict reduction of the method to a truncated CI based on sole 0p-0h and 2p-2h configurations below g_c is not sufficient to reach the desired accuracy across both normal and superfluid phases and to obtain a smooth description throughout the transition. In Sec. IV below, this intrinsic limitation is overcome while further improving the accuracy for all g. Before discussing this additional level of improvement, let us first focus on the redundant character of the basis and on the optimal set of qp configurations one should start from.

C. Basis redundancy and qp configurations

Based on unprojected MBPT, it is natural to first add 4qp excitations to the BCS reference state in the variational ansatz |Ψ_N(g)⟩. The argument that the BCS reference state is not coupled to 2qp excitations via the residual interaction H_1(g) does not however stand once the particle-number projector is inserted; hence our additional inclusion of 2qp excitations in the variational state |Ψ_N(g)⟩.
The final set of projected 0qp, 2qp and 4qp states is not orthonormal and thus contains a certain degree of redundancy. As explained in Sec. II B, this requires the diagonalization of the overlap matrix to extract a subset n_sub ≤ n_st of relevant orthonormal states characterized by large enough eigenvalues ζ_n^N(g) ≥ ε. Let us now typify the relevant states depending on the original set of configurations included in the variational ansatz, and characterize at the same time the quality of the associated results. We still focus on the N = 16 case.

The upper panel of Fig. 3 displays the n_st = 1 + 16 + 120 = 137 eigenvalues ζ_n^N(g) of the overlap matrix obtained from the full set of projected 0qp, 2qp and 4qp configurations. Employing a logarithmic scale and ordering the eigenvalues increasingly, one observes that they gather in two distinct groups, i.e. one finds 17 = n_0qp + n_2qp very small eigenvalues consistent with numerical noise, and 120 = n_4qp values of order unity. Very naturally, the threshold ε is set such that only the latter eigenstates are kept to eventually diagonalize the Hamiltonian. Naively, the observation that the number of useful orthonormal states is strictly equal to the cardinal of projected 4qp states may suggest that the latter capture from the outset the information contained in the set of 0qp and 2qp configurations. Let us now investigate this hypothesis.

Removing all 2qp configurations from the calculation, the middle panel of Fig. 3 shows that only one small eigenvalue remains, while 120 = n_4qp of them are still of order unity. Additionally, the upper panel of Fig. 4 testifies that the error on the correlation energy is the same as in the presence of projected 2qp configurations, which indeed appear to be redundant and can be entirely omitted from the outset. For large bases/particle numbers, the numerical scaling is governed by the number of 4qp configurations, such that omitting projected 2qp excitations does not lead to a significant gain. Having one zero eigenvalue left, one may be tempted to conclude that the projected BCS reference state can be further removed from the linear combination. However, as shown in the lower panel of Fig. 4, the error on the correlation energy is huge for g > g_c in this case. Thus, projected 4qp configurations do not fully contain the information built into the projected BCS state, such that the useful set of n_st − 1 = n_4qp orthonormal states does mix in a significant fraction of the projected 0qp state, which cannot be plainly omitted. Interestingly, bringing back projected 2qp configurations while keeping the projected BCS state aside is sufficient to regain the accuracy of the calculation based on projected 0qp and 4qp configurations, i.e. the set of projected 2qp configurations does bring in the mandatory information otherwise contained in the projected 0qp state. Of course, it is more efficient to do so by including one 0qp state rather than sixteen 2qp configurations. To finally confirm that all combinations of projected states are not equivalent, let us keep projected 0qp and 2qp configurations while omitting projected 4qp ones. In this case, one is left with 16 = n_2qp eigenvalues of order unity and a null one, as shown in Fig. 3. As for the error on the correlation energy, the results are however much inferior to the full calculation, as seen in the upper panel of Fig. 4.
In conclusion, the information carried by projected 4qp states cannot be brought in by lower-order projected qp configurations, while the opposite is true to some extent.

IV. OPTIMIZED ORDER PARAMETER

As described in Sec. II D, the order parameter of the BCS reference state associated with the underlying breaking of U(1) symmetry can be optimized, for each "physical" g of interest, when applying the truncated CI method. To do so, the diagonalization of H(g) is repeated while scanning the auxiliary coupling g_aux (i.e. Δ(g_aux)) that parametrizes the truncated basis, until the minimum E_{g_opt}(g) of the lowest eigenenergy is found.

A. PAV-BCS ansatz

As a jumpstart, the rationale is first applied while restricting the trial state to the first term in Eq. (13), i.e. to the PAV-BCS wave function. This strictly corresponds to the RVAP method designed within the frame of multi-reference nuclear energy density functional calculations [64]. Results as a function of g are compared in Fig. 5 to actual PAV-BCS and VAP-BCS results. By definition, PAV-BCS results are generated by setting g_aux = g for each given g, i.e. by picking the order parameter obtained at the level of the BCS wave function rather than at the level of the actual PAV-BCS wave function. While the results are not at the desired level because of the lack of projected qp excitations, they perfectly illustrate the gain induced by optimizing the order parameter at the level of the full calculation, i.e., in the present example, after the symmetry restoration is performed rather than prior to it. This is particularly striking below threshold, where the error on the correlation energy decreases from 100 % to about 20-40 %. In the normal phase, not too far from the BCS threshold, it is indeed highly beneficial to allow the reference state to break U(1) symmetry while restoring it. As discussed above, this corresponds to including a specific set of np-nh configurations at a low computational cost. This reduced set provides an efficient way to partly capture correlations associated with pairing fluctuations that arise as a precursor of the phase transition. Above threshold, results are also significantly improved over the range g ∈ [g_c, 0.4] by finding the optimal order parameter. For g > 0.6, no significant gain is obtained, given that PAV-BCS itself eventually becomes exact. One interest of this optimization is that the associated numerical effort simply corresponds to repeating the full calculation a small number of times. At the PAV level, this makes the RVAP calculation inexpensive compared to the VAP-BCS calculation it approximates. Of course, the results are significantly less accurate than the actual VAP-BCS calculation, given that the optimization of the order parameter is not equivalent to exploring the complete manifold of BCS states as in the VAP-BCS calculation. This is particularly true in the very weak coupling regime, where the system does not experience pairing fluctuations.

B. Full ansatz

The rationale is now implemented on the basis of the full ansatz of Eq. (13). Figure 6 displays the so-called potential energy surface (PES) representing the total energy E_{g_aux}(g) as a function of g_aux. Results are given for three representative values of g, i.e. (a) g = 0.15 < g_c, (b) g = 0.4 > g_c and (c) g = 0.8 ≫ g_c. In each case, the minimum of the PES indicates the position of g_opt. One first notices that the minimum of the curve is typically not obtained for g_aux = g. The optimal basis in the presence of the configuration mixing is characterized by a symmetry breaking, i.e.
a reference pairing gap Δ(g_opt), that differs from the one obtained at the (projected) BCS minimum. This is particularly striking for g < g_c (upper panel of Fig. 6), where it is advantageous to employ a basis that explicitly captures features of pairing fluctuations, i.e. that benefits from the additional np-nh configurations brought about by projected 0qp, 2qp and 4qp states. Beyond the phase transition, one has g_opt > g (g_opt < g) for intermediate (large) coupling, as exemplified in the middle (lower) panel of Fig. 6. All in all, the successive inclusion of the particle-number restoration and of the qp excitations significantly influences the value of g_opt and the associated quality of the variational ansatz (see below), especially at weak and intermediate coupling. This is summarized in Tab. I.

Figure 7 provides the same comparison as Fig. 2, but with the optimal order parameter g_opt defining the basis at each value of the coupling g. The optimization generates an impressive systematic improvement for g < 0.6 and completely solves the discontinuity problem observed in Fig. 2 at g = g_c. The error on the correlation energy is now lower than 0.1 % for all g, which is almost one order of magnitude lower than our original goal. Our results compare very favorably with PoST methods [58,59]. Once again, projected 2qp configurations are redundant and can actually be omitted.

V. PERFORMANCE AND SCALING

The main feature of the presently proposed method resides in the optimization of the basis used to diagonalize the Hamiltonian. This results in a dimensionality that is drastically reduced compared to the total Hilbert space and, for a given accuracy, compared to truncated CI calculations based on traditional np-nh configurations. The rationale of the latter method is to describe the system via a basis of product states that respect U(1) symmetry even in the superfluid phase. The rationale of our method is exactly opposite, i.e. it uses a basis that exploits the breaking of U(1) symmetry (while restoring it) to describe the system even in its normal phase. The dimensionality of the bases employed in standard truncated CI calculations based on np-nh configurations [52], as well as in the presently designed approach, can be compared directly; the corresponding error on the correlation energy is provided for g = 0.18, g = 0.54 and g = 0.66. We recall in passing that the dimension of the basis used in truncated CI calculations based on 0qp and 4qp configurations makes the method exact for N = 2 and N = 4.

In the weak coupling regime (g = 0.18), truncated CI calculations based on 0p-0h and 2p-2h configurations already achieve an error below 1 % based on a small basis size, which eventually scales as N^2 with the system size. Calculations based on optimized projected 0qp and 4qp configurations perform one order of magnitude better, based on a basis that is only twice as large and that scales similarly with the system size. If the calculation is degraded to optimized projected 0qp and 2qp configurations, a scheme that scales as N with the system size, the results are however one order of magnitude worse (10 % error on E_c) than the CI calculation based on 0p-0h and 2p-2h configurations. This demonstrates the need to include 4qp configurations to reach (much) better than 1 % accuracy at weak coupling. In the superfluid regime, truncated CI calculations based on projected 0qp and 4qp configurations again reach an accuracy well below 1 %, which is comparable to the results obtained from truncated CI calculations including up to 8p-8h configurations for g = 0.54, and is even one order of magnitude better for g = 0.66.
While the dimension of the latter basis scales as N^8 with the system size, the set of projected 0qp and 4qp configurations scales as N^2, which is obviously much gentler. For rather strongly paired systems, i.e. for g = 0.66, degrading the calculation to optimized 0qp and 2qp configurations, which scales as N with the system size, already reaches 1 % accuracy on the correlation energy. Of course, part of the cost of the calculation is transferred into the particle-number projection, but the end scaling is still very favorable.

Eventually, the numerical cost Num(N) of the scheme is polynomial and scales according to

Num(N) = n_{g_aux} [ BCS(N) + ME(n_φ, N) + DIAG(n_st) ] ,

where the first term relates to solving the BCS equations, the second term to calculating the elements of the overlap and Hamiltonian matrices, while the third term designates the cost of the diagonalization of these two matrices. The cost scales linearly with the number of times n_{g_aux} the calculation must be performed to find the optimal g_opt. In practical calculations, it is possible to keep n_{g_aux} < 10 once the calculation at g_aux = g has been performed. Of course, n_{g_aux} = 1 when the optimization of the order parameter characterizing the basis is omitted. The cost associated with the BCS variation is negligible, as it scales essentially linearly with Ω = N. Employing projected 0qp and 4qp (2qp) configurations, the number of matrix elements n_st^2 to calculate scales as N^4 (N^2), while the cost of computing each of them is ME(n_φ, N) = α n_φ N^2, which makes the overall scaling go as n_φ N^6 (n_φ N^4). The cost of computing the matrix elements is linear in the number of gauge angles n_φ employed in the particle-number projector (see Eq. (11)). This number can be kept essentially constant, i.e. n_φ ∼ 10, when increasing N. Finally, the cost of the diagonalization is DIAG(n_st) = β n_st^3 = β N^6 (β N^3). All in all, the building of the matrices and their diagonalization scale similarly, as N^6 with the system size (the building of the matrices goes as N^4 and dominates when using projected 0qp and 2qp configurations).

There are ways to further improve on this situation. First, full diagonalization is not mandatory, as one can envision the use of alternative methods such as Lanczos to extract a few low-lying states at a much reduced numerical cost. This might be particularly useful when addressing large model spaces and/or particle numbers associated with realistic cases of interest. Second, the pre-factor α n_φ associated with the direct integration over the gauge angle to perform the particle-number projection can be scaled down by performing the latter on the basis of recurrence relations [65]. Last but not least, there probably is a systematic convergence of the result, as in standard truncated CI calculations [52], as a function of the maximum unperturbed energy of the 2qp and 4qp excitations included in the ansatz for a given single-particle basis size (Ω here). This means that, given a targeted accuracy, the dimensionality and the numerical cost might be significantly scaled down by exploiting this additional convergence parameter and complementing the calculation with an appropriately designed formula to extrapolate the results to the un-truncated limit. Such a systematic study has not been performed within the scope of the present paper but could be envisioned in the future.
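The gauge-angle integration that sets the pre-factor α n_φ can be made concrete. The sketch below (ours) implements a discretized particle-number projection for overlaps between seniority-zero BCS-type product states, using the standard product formula for the gauge-rotated overlap:

    import numpy as np

    def rotated_overlap(u1, v1, u2, v2, phi):
        # <Phi_1| e^{i phi A} |Phi_2> for two seniority-zero BCS product states:
        # a product over pair levels, since e^{i phi A} multiplies each pair
        # creation operator by e^{2i phi}.
        return np.prod(u1 * u2 + np.exp(2j * phi) * v1 * v2)

    def projected_overlap(u1, v1, u2, v2, N, n_phi=10):
        # <Phi_1| P^N |Phi_2> via the discretized gauge integral of Eq. (11);
        # the cost is linear in n_phi, and n_phi ~ 10 points suffice in practice.
        phis = 2.0 * np.pi * np.arange(n_phi) / n_phi
        return np.mean([np.exp(-1j * phi * N) * rotated_overlap(u1, v1, u2, v2, phi)
                        for phi in phis])

Each projected overlap thus costs O(n_φ Ω); the Hamiltonian kernels are costlier, scaling as N^2 per angle, in line with the ME estimate above.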
VI. ADDITIONAL OBSERVABLES

To complete our study, the discussion is extended to other observables.

A. Effective pairing gap

We start with the computation of the effective pairing gap [66,67], which generalizes the BCS gap Δ(g) to number-conserving states; the expectation values entering its definition are to be computed with any ground-state wave function of interest. In Fig. 9, the effective gap obtained in the exact case is compared to the one obtained from the various approximate many-body methods of present interest. We observe that truncated CI calculations based on (non-)optimized projected 0qp and 4qp configurations provide results that are below 0.05 % (1.5 %) error for all coupling strengths g (for g > g_c) and much superior to the other methods shown.

B. One-body entropy

States obtained via the presently proposed method are strongly entangled, in the sense that they correspond to a complex mixing of independent-particle states. As a matter of fact, exact solutions are known to be highly correlated states, resulting in an extended diffusion of single-particle occupation numbers across the Fermi energy. To quantify the deviation of these many-body states from any independent-particle state, the single-particle entropy

S(g) \equiv - \sum_{k} [ n_k \ln n_k + (1 - n_k) \ln(1 - n_k) ] ,

with n_k the single-particle occupation numbers, is computed; it vanishes identically for any Slater determinant. Exact results are compared in Fig. 10 to those obtained from the various approximate many-body methods of present interest. Again, truncated CI calculations based on (non-)optimized projected 0qp and 4qp configurations provide results that are below 0.1 % (2 %) error for all coupling strengths g (for g > g_c) and much superior to the other methods shown. This demonstrates that single-particle occupation numbers across the Fermi energy are accurately described, which eventually propagates to any one-body observable.

[Fig. 10: same as Fig. 9 for the one-body entropy.]
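A sketch (ours) of the entropy evaluation from a vector of occupation numbers, assuming the standard fermionic one-body form quoted above:

    import numpy as np

    def one_body_entropy(n):
        # S = -sum_k [n_k ln n_k + (1 - n_k) ln(1 - n_k)] from occupation
        # numbers n_k in [0, 1]; values are clipped to avoid log(0), so a pure
        # Slater determinant (n_k = 0 or 1) returns S = 0 up to round-off.
        n = np.clip(np.asarray(n, dtype=float), 1e-12, 1.0 - 1e-12)
        return float(-np.sum(n * np.log(n) + (1.0 - n) * np.log(1.0 - n)))

    # e.g., BCS occupations n_k = v_k^2 from the solver sketched in Sec. II A:
    # print(one_body_entropy(v ** 2))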
C. Low-lying excitations

Only ground-state properties have been discussed so far. Being based on a direct diagonalization of the Hamiltonian in a restricted space, it is a tremendous advantage of the presently designed method to also access excited states. Given that the size of the sub-space covered is drastically smaller than that of H_N, one can only expect to provide a fair account of a few low-lying states. Energies of the 10 lowest excited seniority-zero states are compared to exact results for N = Ω = 16 in Fig. 11. Truncated CI calculations based on projected 0qp, 2qp and 4qp configurations provide an accurate reproduction of the low-lying spectroscopy. More specifically, the error made on the correlation energy of the five lowest excited states is lower than 8.5 % for g ∈ [0, 1]. This error drops to less than 3.5 % for N = 8. It is to be remarked that the order parameter has not been optimized, i.e. g_aux = g is presently used, because such an optimization does not necessarily decrease the error. Indeed, the improvement of the ground-state energy obtained on the basis of Ritz' variational principle does not carry over to excited states. While no particular pattern can be anticipated for individual excited states, it happens that the reproduction of the low-lying spectroscopy is of the same overall quality for g_aux = g and g_aux = g_opt.

VII. CONCLUSIONS

A novel approximate many-body scheme is presently tested on the so-called attractive pairing Hamiltonian as a way to gauge its capacity to account for the physics of N-body systems transitioning from the weak to the strong coupling regime via a normal-to-superfluid phase transition. This work takes place in the context of designing polynomially scaling methods that are possibly more (i) accurate and (ii) easily applicable to more quantum states than those, i.e. Gorkov self-consistent Green's function (GSCGF), multi-reference in-medium similarity renormalization group (MR-IMSRG) and Bogoliubov coupled cluster (BCC) methods, that are currently operating a breakthrough in the ab initio calculations of medium-mass open-shell nuclei. The presently proposed method is variational and happens to be an interesting candidate to achieve the above-mentioned goal. It does so by combining three features that have so far been employed separately in various existing many-body methods:

• It is a truncated configuration interaction method, i.e. it amounts to diagonalizing the Hamiltonian in a highly truncated subspace of the total N-body Hilbert space.

• The reduced Hilbert space is generated via a set of states that exploit the spontaneous U(1) symmetry breaking and restoration associated with the (normal-to-superfluid) quantum phase transition of the N-body system. Specifically, the set of states considered is given by the particle-number-projected BCS state along with projected seniority-zero two and four quasi-particle excitations built on top of the BCS state. Because each basis state is symmetry-projected, the method consists of representing the Schroedinger equation on a non-orthonormal basis. The corresponding diagonalization can be performed using standard techniques.

• The extent by which the BCS reference state breaks U(1) symmetry is optimized in the presence of projected two and four quasi-particle excitations. This constitutes an extension of the so-called restricted variation-after-projection method in use within the frame of multi-reference nuclear energy density functional calculations [64].

The many-body scheme has been compared to exact solutions of the attractive pairing Hamiltonian based on the Richardson equations [28,31,37]. By construction, the method is exact for N = 2 and N = 4. For N = (8, 16, 20), the error on the ground-state correlation energy is less than (0.006, 0.1, 0.15) % across the entire range of coupling g defining the pairing Hamiltonian and driving the normal-to-superfluid quantum phase transition. To the best of our knowledge, this is better than any many-body method scaling polynomially (N^6 here) with the system size tested so far on the pairing Hamiltonian. In particular, it is superior to the highly accurate PoST_α and PoST_x methods recently proposed in Ref. [59] with the same motivations as here. The presently proposed method offers the great additional advantage of automatically accessing low-lying excited states. The error on the correlation energy of the five lowest excited states is smaller than 8.5 % (3.5 %) for g ∈ [0, 1] for N = 16 (N = 8).

The schematic pairing Hamiltonian employed here corresponds to modeling sub-closed-shell systems, i.e. the naive filling of the doubly degenerate picket-fence single-particle scheme with an even number of particles always leads to a sub-closed-shell system. Correspondingly, the Hartree-Fock reference state can always be defined, which is mandatory for the application of many methods, including the recently proposed PoST methods [59]. However, this HF reference cannot even be defined in the genuinely open-shell systems we are actually interested in, i.e. in the vast majority of singly or doubly open-shell nuclei. The presently proposed method, however, is based on a reference state that spontaneously breaks U(1) symmetry whenever necessary and can be applied equally well independently of the closed-shell, sub-closed-shell or genuinely open-shell character of the system under study.
This makes the method extremely versatile. Although IMSRG and SCGF techniques have not been applied to the pairing Hamiltonian problem throughout the superfluid phase transition (while CC has), their accuracy at the best current level of implementation is of the order of a few percent error on the ground-state correlation energy of singly open-shell nuclei. In view of that, the results obtained in the present work indicate that the truncated CI method based on low-order projected qp excitations constitutes an interesting method to pursue. In order to go beyond the present proof-of-principle calculation, our objective is to implement the method for ab initio calculations of mid-mass open-shell nuclei.

Last but not least, one should note that the highly accurate character of the method is achieved at the price of giving up on size-extensivity. This is a common feature of all truncated CI methods that is also shared by the PoST method of Ref. [59]. The increasing relative error, from 0.006 % to 0.1 % and to 0.15 % when increasing the particle number from N = 8 to N = 16 and to N = 20, might already be a trace of it. Restoring size extensivity demands the inclusion of very high excitation levels, and possibly of all excitations, which is prohibitive. Although giving up on size extensivity is somewhat unconventional from the perspective of modern many-body methods, and although it deserves attention as larger systems are studied, it is a price one is willing to pay to obtain a highly accurate description at a reasonable computational cost.
The expression of S100A8/S100A9 is inducible and regulated by the Hippo/YAP pathway in squamous cell carcinomas

Background: S100A8 and S100A9, two heterodimer-forming members of the S100 family, are aberrantly expressed in a variety of cancer types. However, little is known about the mechanism that regulates S100A8/S100A9 co-expression in cancer cells.

Methods: The expression level of S100A8/S100A9 was measured in three squamous cell carcinoma (SCC) cell lines and their corresponding xenografts, as well as in 257 SCC tissues. The correlations between S100A8/S100A9, the Hippo pathway and the F-actin cytoskeleton were evaluated using western blot, qPCR, ChIP and immunofluorescence staining. The IncuCyte ZOOM long-term live-cell imaging system, qPCR and flow cytometry were used to measure the effects of S100A8/S100A9 and YAP on cell proliferation, cell differentiation and apoptosis.

Results: Here, we report that, through activation of the Hippo pathway, suspension and dense culture significantly induce S100A8/S100A9 co-expression and co-localization in SCC cells. Furthermore, these expression characteristics of S100A8/S100A9 were also observed in the xenografts derived from the corresponding SCC cells. Importantly, co-expression of S100A8/S100A9 was detected in 257 SCC specimens derived from five types of SCC tissue. Activation of the Hippo pathway by overexpression of LATS1 or knockdown of YAP, as well as disruption of F-actin, likewise induced S100A8/S100A9 co-expression in attached SCC cells. Conversely, inhibition of the Hippo pathway suppressed the S100A8/S100A9 co-expression induced by cell suspension and dense culture. In addition, we found that TEAD1 is required for YAP-regulated S100A8/S100A9 expression. Functional studies provide evidence that knockdown of S100A8/S100A9 together significantly inhibits cell proliferation but promotes squamous differentiation and apoptosis.

Conclusions: Our findings demonstrate for the first time that the expression of S100A8/S100A9 is inducible by changes of cell shape and density through activation of the Hippo pathway in SCC cells. Induced S100A8/S100A9 promotes cell proliferation and inhibits cell differentiation and apoptosis.

Electronic supplementary material: The online version of this article (10.1186/s12885-019-5784-0) contains supplementary material, which is available to authorized users.

Background

Approximately 80% of human cancers are epithelial in origin, with SCC being one of the most common tumor types. SCCs are predominantly derived from the squamous epithelia of the skin, oral cavity, esophagus, and cervix and can be highly aggressive and metastatic [1]. Despite surgery, radiotherapy, and chemotherapy, SCC lesions often recur and spread to other body sites, such as the lung. Therefore, it is important to identify the molecules that inhibit the aberrant proliferation of SCC, as this may help improve the clinical treatment of SCCs.

S100A8 (calgranulin A) and S100A9 (calgranulin B), two heterodimer-forming members of the S100 family, were originally discovered as immunogenic proteins expressed and secreted by neutrophils [2]. Additionally, constitutive expression of S100A8/S100A9 is largely restricted to myeloid cells and tightly regulated during myeloid differentiation [3-5]. In phagocytic and non-phagocytic cells, S100A8 and S100A9 form heteromeric complexes that bind arachidonic acid (AA) and promote NADPH oxidase activation [6]. Therefore, S100A8/S100A9 have emerged as important pro-inflammatory mediators of acute and chronic inflammation [7].
Although current research has focused on the roles of S100A8/S100A9 in inflammatory cells, there is also growing evidence for important roles of both proteins in non-myeloid cells, such as skin cells and cancer cells [8,9]. In normal skin, S100A8/S100A9 are minimally expressed, but they are massively expressed in psoriatic keratinocytes, which demonstrate abnormal differentiation and hyperproliferation [10,11]. Aberrant expression of the S100A8/S100A9 complex has also been detected in a variety of cancer tissues, including squamous cell carcinomas of the esophagus, oral cavity, and cervix [2,12-15]. It has been reported that strong expression and secretion of S100A8/S100A9 may be associated with the loss of the estrogen receptor in breast cancer and may be involved in the poor prognosis of Her2+/basal-like subtypes of breast cancer [16]. S100A8/S100A9 expression in epithelial cancer cells causes enhanced infiltration of immune cells, especially neutrophils, and stimulates settlement of the cancer cells in the lung [17]. Other work has demonstrated that extracellular S100A8/S100A9 proteins contribute to colorectal carcinoma cell survival and migration via the Wnt/β-catenin pathway [18]. However, few studies have investigated whether S100A8/S100A9 display co-expression in SCC tissues and how these two proteins are regulated in SCC cells.

The Hippo/YAP-signaling pathway is a critical regulator of tissue homoeostasis [19]. The core of this pathway in mammals comprises MST1/2 and their substrates, the kinases LATS1/2 [20]. LATS1/2 are phosphorylated at their hydrophobic motif (LATS-HM) by MST1/2 and, in turn, directly phosphorylate YAP (Yes-associated protein) at serine 127 (YAP-S127) [20-24]. Phosphorylation results in the cytoplasmic localization of YAP, which then no longer acts as a transcriptional coactivator. In contrast, dephosphorylated YAP exerts its transcriptional activity mostly by interacting with TEADs in the nucleus [24]. TEADs are primary transcription factors and play an important role in development, tissue homeostasis, and tumorigenesis. Importantly, these functions of TEADs, including the regulation of cell proliferation, differentiation, and survival, are largely thought to depend on the binding of YAP/TAZ [25]. Interestingly, recent studies demonstrate that changes of cell shape and/or cell density also activate the Hippo-YAP pathway via actin cytoskeleton reorganization [26,27].

In this study, we found that both S100A8 and S100A9 are inducible under cell suspension and dense culture through activation of the Hippo pathway in SCC cells. Consistent with these findings, S100A8/S100A9 co-localization and/or co-expression was also found in SCC cells and their corresponding xenografts, as well as in clinical SCC tissues. We also show that S100A8/S100A9 promotes cell proliferation but inhibits the squamous differentiation and apoptosis induced by suspension and dense culture. Our findings provide new insights into the role of the Hippo/YAP pathway in the regulation of S100A8/S100A9 expression in SCCs.

Methods

Detailed methods, including plasmids and reagents, siRNA and transfection, RT-PCR, immunofluorescence staining, tumorigenicity in nude mice, tissue specimens, ChIP and flow cytometry, are provided in Additional file 1.

Cell culture

The human carcinoma cell lines A431, HCC94, and FaDu were purchased from the Chinese Academy of Sciences Committee Type Culture Collection Cell Bank, and the cell lines were authenticated by short tandem repeat analysis at HK Gene Science Technology Co (Beijing, China). All cells were cultured in RPMI 1640 medium with 10% fetal calf serum and maintained in a humidified incubator at 5% CO2 and 37°C. Suspension culture was achieved by plating the cells in poly-HEMA-coated (12 mg/ml, dissolved in 95% ethanol) 6-well plates in medium. Low cell density (hereafter called 'sparse') was achieved by plating 10,000 cells/cm2, and high cell density (hereafter called 'dense') by plating 100,000 cells/cm2.

Statistical analysis

Statistical analysis was performed using GraphPad Prism software. Statistical significance was evaluated using Student's t-test (two-tailed) to compare two groups of data. The asterisks indicate significant differences between the experimental groups and the corresponding control condition. Differences were considered statistically significant at a P-value of less than 0.05. P-values < 0.05 and < 0.01 are indicated with one and two asterisks, respectively.
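For reproducibility, the seeding numbers and the significance convention above translate directly into a short calculation. The sketch below is ours, in Python rather than GraphPad; the 9.6 cm2 growth area per well of a 6-well plate and all numeric values are illustrative assumptions, not data from the study:

    import numpy as np
    from scipy import stats

    WELL_AREA_CM2 = 9.6  # assumed growth area of one well of a 6-well plate

    def cells_to_seed(density_per_cm2, area_cm2=WELL_AREA_CM2):
        # Number of cells to plate per well to reach a target density.
        return int(density_per_cm2 * area_cm2)

    print("sparse:", cells_to_seed(10_000))    # ~9.6e4 cells/well
    print("dense: ", cells_to_seed(100_000))   # ~9.6e5 cells/well

    # Two-tailed Student's t-test with the paper's asterisk convention.
    treated = np.array([2.1, 2.4, 2.2])  # illustrative triplicate values
    control = np.array([1.0, 1.1, 0.9])
    t, p = stats.ttest_ind(treated, control)
    stars = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    print(f"t = {t:.2f}, p = {p:.4f} -> {stars}")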
Results

S100A8/S100A9 is inducible and both proteins display co-localization in vitro

Recent studies have shown that S100A8/S100A9 is associated with various neoplastic disorders [31-33]. However, the regulation of their expression in SCCs is not well defined. Therefore, we examined the expression of S100A8/S100A9 in the squamous cell carcinoma cell lines A431, HCC94 and FaDu. Interestingly, we found that less than 1% of cells were S100A8/S100A9-positive in all tested cell lines under normal culture conditions (Fig. 1a-f). It has been reported that S100A9 expression can be induced in normal primary keratinocytes (HEKn) under dense culture [34]. Therefore, to investigate whether S100A8/S100A9 could also be induced under dense conditions, HCC94 cells were cultured at high density ('dense', 100,000 cells/cm2). Immunohistochemical staining revealed that S100A8/S100A9-positive cells were induced by dense culture compared with low cell density ('sparse', 10,000 cells/cm2). The percentage of S100A8/S100A9-positive cells increased from less than 1% under normal culture conditions to above 40% after 2 days of dense culture (Fig. 1g); it then decreased significantly after the dense cells were reseeded at the pre-dense density and returned to the original ratio after three cell passages (Additional file 2: Figure S1). Suspension culture is another inducer of S100A9 expression in keratinocytes [34]. To explore whether S100A8/S100A9 may also be induced by suspension in SCC cells, HCC94 cells kept in suspension were allowed to reattach to a slide for 12 h and then examined by immunohistochemistry. The expression of S100A8/S100A9 was significantly upregulated in suspension culture (Fig. 1h). Subsequently, double immunofluorescence staining further confirmed that S100A8/S100A9-positive cells were co-localized under all culture conditions (Fig. 1i-k). Collectively, these results indicate that cell density and cell morphology can regulate S100A8/S100A9 co-expression and that both proteins display co-localization in SCC cells.

S100A8/S100A9-positive cells are co-inducible in vivo

Based on the above results, we hypothesized that S100A8/S100A9 may also be induced and co-expressed or co-localized in vivo. To test this hypothesis, we examined the S100A8/S100A9 expression pattern in xenografts derived from FaDu, A431, and HCC94 cells by immunohistochemistry in two consecutive sections.
As expected, the percentage of S100A8/S100A9-positive cells rose from less than 1% in FaDu, HCC94 and A431 cells in vitro to score 1, score 2 and score 3, respectively, in their corresponding xenografts (IHC staining score: 0, no positive cells; 1, < 10% positive cells; 2, > 10% and < 50% positive cells; 3, > 50% and < 75% positive cells; 4, > 75% positive cells) (Fig. 2a-f). Importantly, co-expression of S100A8/S100A9 was also detected in all tested xenografts by immunohistochemistry, and the co-localization was further confirmed in HCC94 xenografts by double immunofluorescence analysis (Fig. 2g). These results directly support the hypothesis that S100A8/S100A9 co-induction in SCC cells also occurs in vivo and imply that the functions of S100A8/S100A9 in SCC may be dependent on each other.

To investigate the expression pattern of S100A8/S100A9 in different types of SCC, five types of SCC tissue arrays comprising 257 SCC cases were examined by immunohistochemistry in two consecutive sections. Both S100A8 and S100A9 stained positive in 74.4% of lung SCCs, 100% of esophageal SCCs, 100% of cervical SCCs, 96% of oral SCCs, and 95% of skin SCCs (Additional file 1: Table S3). Although S100A8/S100A9 immunostaining was frequently focal and confined to the well or moderately differentiated areas, even within one specimen, both proteins showed similar staining distributions (Fig. 2h-q). These results indicate that the co-expression of S100A8/S100A9 may be a common feature of SCCs, at least in well and moderately differentiated tissues and/or regions.

S100A8/S100A9 co-induction is regulated by the Hippo-YAP pathway

Both suspension and dense culture can activate the Hippo pathway via actin cytoskeleton reorganization.

[Fig. 1. Immunohistochemical analysis of S100A8/S100A9 protein expression in SCC cells. Staining of S100A8/S100A9 in A431 (a and d), HCC94 (b and e) and FaDu (c and f) cells under normal culture conditions; staining of S100A8/S100A9 in HCC94 cells cultured under dense (D48h) or suspension (S48h) conditions for 2 days (g and h); double immunofluorescence staining of S100A8/S100A9, together with DAPI, in HCC94 cells under normal, dense and suspension culture conditions (i-k).]

Based on the above results, we hypothesized that the Hippo/YAP pathway may participate in S100A8/S100A9 induction. As expected, we first found a positive correlation between S100A8/S100A9 induction and Hippo pathway activation. In suspended and dense cells, the expression of S100A8/S100A9 at the mRNA and protein levels was significantly increased while the Hippo pathway was activated, as indicated by an increase in LATS-HM (LATS1-T1079) and YAP (YAP-S127) phosphorylation (Fig. 3a, b, e, f; Additional file 2: Figure S2a-c), as well as by transcriptional suppression of CTGF and CYR61, two direct endogenous markers of YAP activity (Fig. 3c, d, g, h). Next, the decrease of S100A8/S100A9 co-expression was also accompanied by inhibition of the Hippo pathway after recovery of cell attachment or relief from dense culture. These results suggest that the Hippo-YAP pathway may regulate S100A8/S100A9 expression in SCC cells.

To test this hypothesis, a YAP-S127A (a constitutively active form of YAP) or YAP-WT plasmid was transfected into HCC94 cells, and the cells were then cultured under suspension and dense conditions.
As expected, overexpression of YAP-S127A led to a more pronounced suppression of S100A8/S100A9 than YAP-WT overexpression (Fig. 4a). Similarly, inactivation of the Hippo pathway by knockdown of MST1 or LATS1 dramatically decreased S100A8/S100A9 expression as well as YAP phosphorylation in suspended or dense HCC94 cells (Fig. 4b, c; Additional file 2: Figure S2d). The knockdown efficiency of two different specific LATS1 siRNAs and MST1 siRNAs was verified by qPCR (Additional file 2: Figure S3a, b). Conversely, depletion of YAP induced S100A8/S100A9 co-expression in HCC94, FaDu and A431 cells (Fig. 4d-f; Additional file 2: Figure S2e, f); the knockdown efficiency of two different specific YAP siRNAs was verified by qPCR (Additional file 2: Figure S3c). This induction of S100A8/S100A9 was also detected upon overexpression of LATS1 in HCC94 and FaDu cells (Fig. 4g; Additional file 2: Figure S2g). These results confirm that the Hippo-YAP pathway can regulate S100A8/S100A9 expression.

Fig. 2. Co-induction and co-localization of S100A8/S100A9 expression in vivo. The staining of S100A8/S100A9 was examined in xenografts derived from FaDu (a and d), HCC94 (b and e), and A431 cells (c and f) by immunohistochemistry in two consecutive sections. Co-localization of S100A8/S100A9 expression was further confirmed by double immunofluorescence with DAPI staining in an HCC94 xenograft (g). The staining of S100A8/S100A9 in lung (h and m), esophageal (i and n), cervical (j and o), oral (k and p), and skin SCC specimens (l and q) was examined by immunohistochemistry in two consecutive sections.

Since TEAD is indispensable for YAP to regulate transcription as a co-activator or co-repressor in the nucleus [36-38], we first transiently knocked down TEADs in attached HCC94 and FaDu cells using two specific siRNAs per TEAD. Interestingly, only silencing of TEAD1 led to a substantial increase in S100A8 and S100A9 expression (Fig. 4h, i; Additional file 2: Figure S2h, i); the knockdown efficiency of the two specific TEAD1 siRNAs was verified by qPCR (Additional file 2: Figure S3d). In contrast, knockdown of the other three TEADs (TEAD2, TEAD3, TEAD4) had almost no effect (data not shown). Because YAP-S94 is essential for the association of YAP and TEAD [26,38], we also overexpressed YAP-S94A (a mutant at residue 94 of YAP); it only marginally affected S100A8/S100A9 co-expression compared with YAP-WT (Fig. 4j, k). However, a direct association of YAP with the S100A8/S100A9 promoter was not observed by chromatin immunoprecipitation (ChIP) (Fig. 4l), supporting the idea that the transcription of S100A8/S100A9 may be indirectly regulated by the YAP/TEAD1 complex. To test whether the Hippo pathway was activated in vivo, we also examined the YAP and pYAP-S127 expression patterns in xenografts derived from A431 cells by immunohistochemistry in two consecutive sections.

Fig. 3. S100A8/S100A9 induction is accompanied by inactivation of YAP and activation of the Hippo pathway. Western blot analyses of S100A8/S100A9 and the Hippo pathway in HCC94 (a and b) and FaDu (e and f) cells. Cells were cultured in suspension for 2 days (S48h) and then reattached for 1 day (S48h-reattached), or cultured densely for 2 days (D48h) and then relieved from dense culture (D48h-sparse). GAPDH was used as a loading control. The expression of S100A8, S100A9, CTGF and CYR61 was analyzed by qPCR in HCC94 cells (c and d) and FaDu cells (g and h). Error bar, SD of three independent experiments; *p < 0.05, **p < 0.01; t-test.
As expected, low expression of YAP and high expression of pYAP-S127 were detected in the same areas, indicating that the Hippo pathway was indeed activated in the xenografts. These results suggest that the induction of S100A8/A9 expression in vivo is also related to the Hippo pathway (Additional file 2: Figure S4). Collectively, these data demonstrate for the first time that activation of the Hippo pathway is a critical step in the induction of S100A8/S100A9 by cell shape and cell density in SCC cells.

F-actin disruption enhances S100A8/S100A9 co-expression via activation of the Hippo pathway

It has been reported that suspension and dense culture activate the Hippo pathway through F-actin cytoskeleton reorganization [26,35]. Thus, we examined the intracellular F-actin distribution in normal, suspension and dense cell cultures. At low cell density, F-actin bundles were present, supporting cell morphology (Fig. 5a-c, j-l). However, F-actin bundles were not observed in the suspended and densely cultured cells; F-actin tended to depolymerize, and the depolymerized F-actin aggregated around the cell membrane (Fig. 5d-i, m-r).

Fig. 4. The Hippo pathway is responsible for S100A8/S100A9 induction. HCC94 cells were transfected with His-YAP-WT or His-YAP-S127A under suspension and dense culture conditions (a). MST1 and LATS1 were knocked down by the corresponding siRNAs in suspension- and dense-cultured HCC94 cells (b and c). YAP was depleted in normally attached HCC94 and FaDu cells; the expression of S100A8/S100A9 was tested by western blot (d), and CTGF and CYR61 were detected by qPCR (e and f). LATS1 was overexpressed in normally attached HCC94 and FaDu cells (g); an anti-Flag tag antibody was used to judge transfection efficiency. TEAD1 was depleted by two specific siRNAs in HCC94 (h) and FaDu (i) cells; the expression of S100A8/S100A9 was detected by western blot, and the gray value of S100A8/S100A9 was analyzed with ImageJ Launcher. HCC94 cells (j) and FaDu cells (k) were transfected with Flag-YAP-WT or Flag-YAP-S94A plasmids; an anti-Flag tag antibody was used to judge transfection efficiency. Binding of YAP to the S100A8/S100A9 promoter sites was assessed by ChIP analysis using an anti-YAP versus IgG control antibody, with CYR61 as a positive control (l).

If F-actin is also involved in YAP-mediated S100A8/S100A9 induction, disruption of F-actin should affect the expression of S100A8/S100A9. To test this hypothesis, we treated HCC94 and FaDu cells with two drugs, LatB and CytoD, which disrupt the F-actin cytoskeleton by preventing actin polymerization and by capping filament plus ends, respectively [26]. As expected, treatment of cells with LatB for 24 h resulted in F-actin depolymerization (Additional file 2: Figure S5d-f), and western blots showed that S100A8/S100A9 was significantly induced, accompanied by Hippo pathway activation (Fig. 6a, b, g, h), as indicated by a significant decrease in CYR61 and CTGF mRNA levels (Fig. 6d, e, j, k). Similar results were also observed in the same cells after treatment with the Rho inhibitor C3, which regulates YAP phosphorylation status via the actin cytoskeleton [32] (Fig. 6c, f, i, l; Additional file 2: Figure S5g-i). Together, our findings indicate that actin cytoskeleton remodeling, whether by suspension and dense culture or by disruption of F-actin, induces S100A8/S100A9 co-expression through activation of the Hippo pathway.
S100A8/S100A9 promotes cell proliferation and inhibits squamous differentiation

It has been reported that S100A8/S100A9 forms heterodimers and plays an important role in regulating cell proliferation and apoptosis in some normal and cancer cells [32,39-43]. To explore the effect of S100A8/S100A9 on SCC cells, we depleted YAP alone or in combination with knockdown of S100A8/S100A9 in A431 and FaDu cells. The results revealed that depletion of YAP and S100A8/S100A9 together resulted in a more pronounced suppression of cell proliferation and promotion of squamous differentiation than silencing of YAP alone, as indicated by an increase in squamous differentiation markers including Keratin 1, Keratin 13 and TG1, as well as involucrin [34,37,44,45] (Fig. 7a-d). These results suggest that S100A8/S100A9 and YAP play similar roles in promoting cell proliferation and inhibiting squamous differentiation. In these assays, the function of YAP appears more potent than that of S100A8/S100A9, because the S100A8/S100A9 induced by YAP knockdown was not sufficient to counteract the effect of YAP silencing on cell proliferation and differentiation. Collectively, our findings indicate that S100A8/S100A9 and YAP function as positive regulators of cell proliferation and negative regulators of cell differentiation in SCC cells.

S100A8/S100A9 inhibits cell apoptosis induced by suspension and dense culture

Because suspension and dense culture lead to cell apoptosis, even in cancer cells, we next explored the function of S100A8/S100A9 in suspended and dense SCC cells. The results showed that silencing of S100A8/S100A9 significantly promoted cell apoptosis in suspended and dense cells compared with the control groups (Fig. 8 and Additional file 2: Figure S6).

Fig. 6. S100A8/S100A9 induction is mediated by the actin cytoskeleton via the Hippo pathway. HCC94 (a) and FaDu (g) cells were treated with LatB (20 μg/mL) for 6 h and 24 h. HCC94 (b) and FaDu (h) cells were cultured for 48 h in the presence of CytoD (0.05, 0.1 μM). C3 (1 μg/mL) was added to HCC94 cells (c) and FaDu cells (i) in serum-free growth medium for 4 h prior to harvesting for western blotting analyses. CYR61, CTGF and S100A8/S100A9 were detected by qPCR in HCC94 (d-f) and FaDu (j-l) cells.

It has been reported that YAP can bind to p73 when cells are cultured in an apoptosis-inducing environment, such as suspension, and that the YAP/p73 complex initiates the expression of apoptosis-related genes [46,47]. To investigate whether YAP and p73 participate in SCC cell apoptosis, we introduced YAP-S127A and p73 plasmids into the three SCC cell lines and then cultured these cells in suspension and at high density for 48 h. The results showed that overexpression of YAP-S127A and p73 significantly increased the proportion of apoptotic cells in all tested cells relative to the control groups (Fig. 9; Additional file 2: Figure S7). These results suggest that YAP may play dual functions in SCC cells depending on the culture microenvironment. Therefore, for suspended and high-density-cultured cells, inactivation of YAP and induced expression of S100A8/S100A9 are beneficial to cell survival.

Discussion

In the present study, we examined the expression of S100A8/S100A9 in three SCC cell lines; fewer than 1% S100A8/S100A9-positive cells were observed in all tested cells under normal culture conditions. However, we observed several interesting phenomena.
When cells were cultured densely, the percentage of S100A8/S100A9-positive cells increased markedly and the two proteins co-localized, but the percentage rapidly returned to the original ratio once the cells were returned to low-density culture. Similar phenomena were also detected in suspended cells. Importantly, S100A8/S100A9 mRNA and protein levels were always consistent with the percentage of positive cells. These results suggest that S100A8/S100A9 is inducible in vitro in a manner that depends on cell shape and cell density. Next, we examined the expression pattern of S100A8/S100A9 in vivo. We found that S100A8/S100A9 expression was markedly heterogeneous and displayed co-expression and co-localization in the xenografts corresponding to the three SCC cell lines. Interestingly, when we compared the percentage of S100A8/S100A9 immunostaining in cultured SCC cells and their corresponding xenografts, we unexpectedly found inconsistent proportions of S100A8/S100A9-positive cells in vitro and in vivo. Although less than 1% of S100A8/S100A9-positive cells were observed in cultured SCC cells, a greater number of double-positive cells were detected, displaying co-localization, in all tested xenografts. These results indicate that S100A8/S100A9 can also be co-induced in vivo.

Fig. 7. Loss of S100A8/S100A9 and YAP leads to cell differentiation and growth inhibition. Cell proliferation after single or combined depletion of YAP and S100A8/S100A9 in FaDu cells (a) and A431 cells (c) was determined with the IncuCyte ZOOM long-term live-cell imaging system. Differentiation genes were induced by depletion of YAP and S100A8/S100A9 in FaDu cells (b) and A431 cells (d). Error bar, SD of three independent experiments; *p < 0.05, **p < 0.01; t-test.

The co-expression of S100A8/S100A9 was also observed by immunohistochemistry in 257 clinical specimens derived from SCCs of the lung, esophagus, cervix, oral cavity, and skin. Taken together, our results indicate that S100A8/S100A9-negative cells can convert into positive cells and increase their expression levels either upon suspension (a change of cell morphology) or upon dense culture (a change of cell density), and vice versa. Based on the above results, we have reason to speculate that the inter-conversion of S100A8/S100A9-negative and -positive cells also happens in vivo. Subsequently, we found and confirmed that cell shape and cell density control S100A8/S100A9 co-expression through F-actin-mediated regulation of the Hippo pathway. The following results supported this conclusion. First, we overexpressed LATS1 or depleted endogenous YAP in order to activate the Hippo pathway in SCC cells; the results revealed that S100A8/S100A9 co-expression was significantly increased. Conversely, inactivation of the Hippo pathway by either depletion of LATS1 and MST1 or overexpression of YAP-WT and YAP-S127A markedly blocked suspension- and density-induced S100A8/S100A9 co-expression. Second, we demonstrated that the induction of S100A8/S100A9 and activation of the Hippo pathway were also detected in attached cells after disruption of F-actin by LatB, CytoD and C3. Finally, we showed that TEAD1 is a cofactor of YAP in repressing S100A8/S100A9 co-expression, which was further confirmed by transfection of YAP-S94A into the cells. It has been reported that YAP can combine with numerous transcription factors, including ErbB4, Runx2, and TEAD, of which TEAD is one of the most important [38,39,48-50].
Therefore, we do not rule out the possibility that other transcription factors regulate S100A8/S100A9 expression by interacting with YAP in SCC cells. Interestingly, it has been reported that the YAP-TEAD1 complex plays dual functions, activating and inhibiting gene transcription via recruitment of different complexes to target genes [50-52].

Figure caption (apoptosis assay, likely Fig. 8): Cells were transfected with S100A8/S100A9-specific siRNAs; 24 h later, the cells were cultured in suspension for 48 h. The proportion of apoptotic cells was measured by flow cytometry.

Our study provides the first biological evidence that S100A8/S100A9 expression is genuinely, albeit indirectly, regulated by the Hippo/YAP pathway in SCC cells. These results suggest that an intermediate protein mediates the regulation of S100A8/S100A9 expression by the YAP-TEAD1 complex. Apart from the actin cytoskeleton, the microtubule cytoskeleton is also reorganized during cell detachment, which in turn activates the Hippo/YAP pathway. Interestingly, detachment-induced YAP phosphorylation, but not attachment-induced YAP dephosphorylation, can be strongly blocked by nocodazole [38]. In addition, other signaling pathways, such as G-protein-coupled receptors and the E-cadherin-catenin complex, can also regulate activation of the Hippo-YAP pathway [53,54]. Therefore, pathways other than the actin cytoskeleton may also participate in regulating S100A8 and S100A9 expression. Although aberrant expression of S100A8/S100A9 has been reported in a variety of cancer tissues, functional studies deserve more attention. It has been reported that, in SCC12 cells (a cutaneous SCC cell line), overexpression of S100A8 and/or S100A9 increases cell proliferation and migration capacity in vitro and promotes tumor growth in vivo [31]. In the present study, we demonstrated that depletion of YAP, with or without knockdown of S100A8/S100A9, inhibited cell proliferation and promoted squamous differentiation, with the combined depletion having a stronger effect. These results indicate that although YAP activity and S100A8/S100A9 expression are negatively correlated in the tested SCC cells, both exert similar effects on cell proliferation and differentiation. Thus, we speculate that YAP and S100A8/S100A9 might act in a compensatory fashion depending on the cellular microenvironment. In normal adherently cultured cells, the Hippo pathway is in an off state; YAP, bound to TEAD in the nucleus, promotes cell proliferation and inhibits cell differentiation, and a YAP downstream protein 'X' binds to the S100A8/S100A9 promoter, inhibiting S100A8/S100A9 expression. When SCC cells are detached or cultured at high density, the Hippo pathway is activated and nuclear YAP decreases, so that the inhibitory effect of protein 'X' on S100A8/S100A9 is lost, leading to their induction (Additional file 2: Figure S8). The induced S100A8/S100A9 then takes over from YAP in promoting cell proliferation and inhibiting differentiation. Although we did not identify protein 'X' in the present study, this work is ongoing in our laboratory. More importantly, we found that under suspension and dense culture YAP and S100A8/S100A9 play opposite biological roles: YAP and p73 promoted cell apoptosis, whereas S100A8/S100A9 inhibited it.
Therefore, we hypothesize that the inactivation of YAP and the induction of S100A8/S100A9 under the above-mentioned conditions may be a mechanism by which cancer cells resist apoptosis and maintain survival. In the process of metastasis through blood and lymph, cancer cells are suspended owing to the lack of matrix adhesion. To overcome anoikis, cancer cells develop a series of coping strategies, such as delaying anoikis through autophagy and mutual phagocytosis, thereby improving the survival rate of metastatic cells [55]. Our results also revealed that the inactivation of YAP and the induction of S100A8/S100A9 could significantly increase the survival of cancer cells, which may likewise be a mechanism by which cancer cells overcome anoikis during metastasis.

Conclusion

This study uncovers for the first time that S100A8/S100A9 is inducible both in vitro and in vivo, and that both proteins display co-expression and/or co-localization. Actin cytoskeleton reorganization plays a critical role in the control of S100A8/S100A9 co-expression through the Hippo-YAP pathway. Induced S100A8/S100A9 promoted cell proliferation and inhibited cell differentiation and apoptosis.

Additional files

Additional file 1: Methods. Table S1. siRNA sequences. Table S2. Primers used for qPCR. Table S3. S100A8 and S100A9 expression in SCC tissues. (DOCX 25 kb)

Additional file 2: Figure S1. The expression of S100A8 and S100A9 in HCC94 cells. Figure S2. Induction of S100A8 and S100A9 expression in A431 cells. Figure S3. Silencing effect of siRNAs in A431 cells. Figure S4. The expression of YAP and pYAP-S127 in xenografts. Figure S5. S100A8/A9 inhibit cell apoptosis induced by dense culture. Figure S6. YAP and p73 promote cell apoptosis induced by dense culture. Figure S7. Diagram to summarize the S100A8 and S100A9 induction procedure. Figure S8. Diagram to summarize the S100A8 and S100A9 induction procedure: (a) In normally adherent cultured cells, the Hippo pathway is in an off state; YAP, bound to TEAD in the nucleus, promotes cell proliferation and inhibits cell differentiation, and the YAP downstream protein 'X' binds to the promoter of S100A8 and S100A9, inhibiting their expression. (b) When SCC cells are detached or cultured at high density, the Hippo pathway is activated and nuclear YAP decreases, so that the inhibitory effect of protein 'X' on S100A8 and S100A9 is lost, leading to their induction. (DOCX 3844 kb)

Abbreviations

AA: arachidonic acid; C3: botulinum toxin C3; ChIP: chromatin immunoprecipitation; CytoD: cytochalasin D; DAPI: 4′,6-diamidino-2-phenylindole; FITC: fluorescein isothiocyanate; LatB: latrunculin B; SCC: squamous cell carcinoma; TRITC: tetramethyl rhodamine isothiocyanate
Changes in Brain Activation through Cognitive-Behavioral Therapy with Exposure to Virtual Reality: A Neuroimaging Study of Specific Phobia

Background: Cognitive-behavioral therapy (CBT) with exposure is the treatment of choice for specific phobia. Virtual reality exposure therapy (VRET) has shown benefits for the treatment and prevention of the return of fear in specific phobias by addressing the therapeutic limitations of exposure to real images. Method: Thirty-one participants with specific phobias of small animals were included: 14 were treated with CBT + VRET (intervention group), and 17 were treated with CBT + exposure to real images (active control group). Participants' anxiety and phobia scores were measured at baseline, post-treatment, and 3-month follow-up, and brain activation was measured through functional magnetic resonance imaging (fMRI) at baseline and post-treatment. Results: Both groups showed a significant decrease in anxiety and phobia scores after therapy, and these gains were maintained at follow-up; there were no significant differences between the groups. Overall, the fMRI tests showed a significant decrease in brain activity after treatment in some structures (e.g., the prefrontal and frontal cortex), whereas other structures (e.g., the precuneus) showed increased activity after therapy. However, structures such as the amygdala remained active in both groups. Conclusions: The efficacy of CBT + VRET was observed in the significant decrease in anxiety responses. However, the observed brain activity suggests that a fear response persisted in the brain despite the significant decrease in subjective anxiety levels.

Introduction

Phobias are one of the most frequent types of anxiety disorders [1]. The clinical relevance of specific phobia is that it is a disorder with a chronic course that reduces patients' coping skills and quality of life, both in the presence of the phobic stimulus and in its absence, e.g., through avoidance [2,3]. The prevalence of specific phobia has been estimated at 7.4% worldwide and 4.8% in Spain [4]. It is one of the most common and prevalent anxiety disorders in middle- and high-income countries [5], being less frequent in adults (3-5%) than in children and adolescents (11%) [1]. As with most anxiety disorders, specific phobias are more common in women (9.8%) than in men (4.9%) [1,4,6,7]. Animal-specific phobia is the most prevalent phobia subtype [7]. Seventy-five percent of people with this disorder have a phobia of more than one object or situation [1,8]. As a result, it is conceivable that an underlying general vulnerability increases the risk of additional comorbid phobias and other anxiety disorders [9-11]. The treatment of choice for specific phobia is cognitive-behavioral therapy (CBT) with exposure [12,13]. The exposure can be in vivo, in imagination, or in images. Although in vivo exposure has been shown to be the most effective, exposure in images is preferable in cases where in vivo exposure is difficult for the therapist to control or where the patient shows serious reluctance to be exposed to the real phobic stimulus [14]. This represents a therapeutic limitation of in vivo exposure, since the patient must be progressively exposed to precisely those phobic stimuli that generate intense distress, so the procedure benefits only a limited number of patients and generates considerable dropout [14,15].
Due to this substantial gap in treatment, it is necessary to consider alternative means of delivering exposure-based treatments [15]. Exposure to images is an acceptable alternative to in vivo exposure, with clinically significant results in various types of specific phobia [16-18]. The development of virtual environments is an important improvement on exposure to images, given the latter's therapeutic limitations, because it provides three-dimensionality [19-21]. A growing body of research supports the clinical efficacy of virtual reality in mental health for conditions including anxiety disorders and others [22,23]. This has led to a treatment known as virtual reality exposure therapy (VRET) [24]. VRET has shown better performance when conducted in a variety of settings and contexts, so as to facilitate generalization and prevent the return of fear in phobia [17,25], and the design goal of achieving virtual reality experiences for treatment in mental health requires an interdisciplinary approach [26]. Functional neuroimaging studies of specific phobia suggest that the efficacy of psychological treatment in phobic disorders is commonly associated with functional changes in fronto-limbic brain areas such as the thalamus, amygdala, insula, anterior cingulate cortex, visual cortex, and prefrontal cortex [27-29]. This is related to the physiological-emotional factors involved in the maintenance of specific phobias and highlights the dual-pathway processing model [30]. This model posits a short pathway for the emotional processing of stimuli, involving a direct connection between the thalamus and the amygdala, and a long pathway involving connections between the thalamus, the relevant sensory cortex, and the amygdala. In the long processing route, the connection with the prefrontal cortex allows the cortex to provide non-contingent information to the fear emotion generated in the amygdala. This enables the subsequent regulation of a voluntary and planned response to the feared situation or object by the prefrontal cortex acting on the amygdala [30-32]. This regulation of limbic structures by the prefrontal cortex is also known as the top-down control mechanism [33,34]. A recent meta-analysis of region-of-interest studies assessing differential activations between healthy subjects and people with specific phobia in various regions of the limbic circuit showed a greater convergence of activations in the right amygdala, insula, and cingulate cortex of phobic patients compared with controls [35]. Functional neuroimaging techniques such as nuclear magnetic resonance imaging (NMRI) have supported the understanding of how psychological treatments modify neural circuits. They have shown that, in addition to the clinical efficacy of CBT, there is a close relationship between the clinical improvement resulting from therapy and certain brain changes [36-38]. There is scientific evidence that exposure to virtual stimuli leads to results similar to those achieved with cognitive-behavioral techniques [39,40]. Moreover, VRET has shown significant advantages in the longer duration of positive results [39,41] and greater clinical efficacy [21,41].
Previous studies have shown that the pattern of brain activity of people with specific phobias of small animals exposed to real images differs from that of people without this disorder [36], and that these differences persist after treatment with CBT + exposure to real images [42]. Another study of the brain activity elicited by exposing untreated people with specific phobia to phobic stimuli presented as virtual images observed significant activation of fear-processing circuits, as occurs in people exposed to real images [43]. Studies based on self-report tests suggest that the usefulness of VRET lies in the involvement of frontal brain structures in promoting self-efficacy and self-instructional capacity, which are important cognitive elements for approaching phobic pathology. However, it is necessary to corroborate these therapeutic mechanisms objectively through the information provided by fMRI. The aim of this study is to compare brain activation from exposure to real images vs. VRET within a CBT treatment program for people with specific phobias of small animals (specifically spiders, cockroaches, or lizards), and to determine whether the results of the treatment program were maintained at 3-month follow-up.

Participants

Computer-based simple randomization was performed, and participants were assigned a correlative numerical code based on the order in which they were contacted (see the sketch below). Neither the researchers who recruited and interviewed the participants nor the therapists knew the participant assignment until the time of the first fMRI and treatment sessions, respectively. Participants were recruited through a public call from the University of La Laguna (Tenerife, Spain) between 2016 and 2018, and all participants provided written informed consent. Participants of the intervention groups were administered the evaluation instruments (see Section 2.2, Instruments) and underwent the first fMRI session with exposure to real or virtual 3D moving images, depending on their group. A total of 131 adults were randomized, of whom 78 did not receive the assigned intervention for various reasons (no response, a health condition incompatible with fMRI, or failure to attend the interview or fMRI appointment). Finally, 23 received the intervention with CBT + VRET (intervention group) and 30 received CBT + real images (active control group); however, there were fourteen dropouts (7 in each group) due to non-completion of therapy because of self-perceived improvement in phobic symptoms or failure to attend the post-treatment fMRI. After a realignment analysis of the fMRI images, two participants from the active control group and five from the intervention group were excluded because they did not meet the minimum quality criteria for the evaluation and comparison of their neuroimages. As a result, the total sample consisted of 31 participants: 17 in the active control group (real images) and 14 in the intervention group (virtual images) (see Figure 1).
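As an illustration of the computer-based simple randomization described above (a sketch only: the 1:1 allocation, seed, and group labels are assumptions, since the paper does not detail the allocation procedure):

```python
# Hedged sketch of simple randomization: each participant is assigned
# independently, which matches "simple" (unrestricted) randomization.
import random

random.seed(0)  # for reproducibility of this illustration only
participants = [f"P{code:03d}" for code in range(1, 132)]  # 131 correlative codes
groups = ("CBT + VRET", "CBT + real images")
assignment = {p: random.choice(groups) for p in participants}
print(assignment["P001"])
```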
The inclusion criteria were: (1) being an adult with a specific phobia of spiders, cockroaches, or lizards (small animals for which both real and virtual videos were developed for exposure purposes); (2) the specific phobia had to be a primary psychological disorder and not be explained by another health condition or phobia; (3) not having received any pharmacological and/or psychological treatment for the specific phobia in the previous 12 months; (4) being right-handed; (5) having normal vision; and (6) having no impediment to a magnetic resonance imaging session (e.g., a metal implant incompatible with fMRI, possible pregnancy, or any other medical condition in which the use of MRI is discouraged). The most frequent phobic animal was the cockroach, and most participants indicated that their phobia began in childhood. Women were the majority, and the mean age of all participants was 33.55 years (SD 11.77); there was no significant age difference between the two groups. None of the participants had received therapy prior to this study (Table 1).

Instruments

The S-R (Situation-Response) Inventory of Anxiousness [44] was administered to obtain a participant phobia rating at baseline; a Spanish translation was used. This instrument has high internal consistency (Cronbach's alpha = 0.95) and is a 14-item, 5-point Likert-scale inventory that assesses the physiological, cognitive, and behavioral symptoms associated with the response to an anxiogenic stimulus; in this study, the stimulus was the targeted phobic animal. This instrument was administered at baseline, post-treatment, and follow-up (3 months). To confirm the diagnosis of phobia, participants answered the questions of the structured Composite International Diagnostic Interview (CIDI), Version 2.1, related to specific phobia, social phobia, agoraphobia, and panic attacks [45]; the Latin American and Spanish cultural adaptation was used [46]. Participants also completed a semi-structured interview that included questions on each specific criterion. Participants diagnosed with a specific small-animal phobia (F40.218, according to the Diagnostic and Statistical Manual of Mental Disorders classification code [1]) were included in the study [45]. The Hamilton Anxiety Rating Scale (HAM-A) [47] was administered as a complementary diagnostic test to explore participants' self-perception of the anxiety symptoms experienced in the presence of the phobic animal; the Spanish validation was used [48]. The HAM-A has shown good inter-judge reliability, with intraclass correlation coefficients ranging from 0.74 to 0.96 [49], and a score of 14 or higher was required to consider participants phobic. This instrument was administered at baseline, post-treatment, and follow-up (3 months). The Hospital Anxiety and Depression scale (HAD) [50] (specifically the anxiety subscale) was administered according to the Spanish validation [51]. It is a short test yielding one score for depression and another for anxiety, referring to the previous week. It was used to obtain a second measure of anxiety in order to confirm that the sample selection was adequate (Cronbach's alpha = 0.81). This instrument was administered at baseline and post-treatment. Hand preference was assessed with the Edinburgh Handedness Inventory [52] to verify that all participants were right-handed, with the aim of controlling for the effect of manual dominance on brain activation.
This inventory is widely used in research and evaluates manual preference through 10 activities. The cut-off points for determining whether participants are left-handed, right-handed, or ambidextrous have been established based on statistical criteria [53]. This instrument was administered before the first fMRI. At the end of the treatment sessions, participants completed the Spanish validation [54] of the Revised Helping Alliance Questionnaire, Patient Version (HAQ-II-PV) [55], to assess the therapeutic alliance. The HAQ-II-PV has good psychometric properties and consists of a 17-item, 6-point Likert-scale inventory (Cronbach's alpha = 0.88). All clinical assessments were conducted by the research group, and the administration of the S-R inventory and HAD scale at follow-up, to obtain a measure of the therapeutic efficacy of the treatment program, was carried out by email. This online administration made it impossible to evaluate the HAM-A scale at follow-up. A block design was chosen to present the stimuli in the MRI device, and fMRI sessions lasted about 11 min per participant. Participants wore glasses that allowed the presentation of images in stereoscopy with paramagnetic isolation (VisualStim digital MRI-compatible 3D glasses; graphics card: GeForce 8600GT). Exposure to real moving images of spiders, cockroaches, and lizards was conducted through 3D videos of these animals filmed, or recreated virtually, in motion. All images had an identical white background.

Design and Statistical Analysis

The overall design of this study consisted of: (1) an ANOVA to detect differences between groups in the scores of the clinical instruments administered before and after treatment and at three-month follow-up; and (2) analysis of the activations recorded in fMRI sessions before and after treatment in both groups using Statistical Parametric Mapping software (SPM 12). A 2 × 3 factorial design was used for the analysis of the results (factors: Image and Treatment; levels: real vs. virtual, and pre-treatment vs. post-treatment vs. follow-up). The hemodynamic changes associated with functional brain activity, that is, the Blood Oxygenation Level Dependent (BOLD) effect [57], were used to study the neuroimaging measures obtained in the fMRI sessions according to the type of exposure (real or virtual images). Both functional and anatomical images were manually reoriented to the anterior-posterior commissure plane before pre-processing. A significance level of p < 0.001 was used without any correction algorithm, together with the commonly used criterion of requiring at least 10 voxels to consider an activation [58]. This criterion is usually applied in fMRI studies to eliminate activations that may be false positives for functional voxel dimensions of 2 mm × 2 mm × 2 mm (8 mm³), so multiplying the voxel volume by 10 yields an activation volume of 80 mm³. In our case, however, we used voxels of 4 mm × 4 mm × 4 mm (64 mm³) and required at least 3 contiguous voxels per cluster to consider an activation (i.e., extent threshold k ≥ 3), since multiplying 64 mm³ by 3 gives an extent volume of 192 mm³, larger than the commonly used criterion under an uncorrected p (the arithmetic is spelled out in the sketch after this passage). In this way, we assume that the possibility of selecting false positives is controlled.

Procedure: CBT Program

Fourteen clinical psychologists were trained in the administration of the psychoeducational program and tests.
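Returning briefly to the statistical design above, the following lines spell out the cluster-extent arithmetic; the voxel sizes and thresholds are exactly those stated in the text.

```python
def cluster_volume_mm3(voxel_edge_mm, k_voxels):
    """Volume of a cluster of k contiguous cubic voxels."""
    return (voxel_edge_mm ** 3) * k_voxels

print(cluster_volume_mm3(2, 10))  # 8 mm^3 voxels, 10-voxel rule ->  80 mm^3
print(cluster_volume_mm3(4, 3))   # 64 mm^3 voxels, k >= 3       -> 192 mm^3
```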
To guarantee the quality of the sessions and the training of the therapists, biweekly clinical seminars were held with psychologists who were experts in the field to supervise and guide the individual clinical sessions. Each therapist was randomly assigned a maximum of five participants and remained blinded to their group allocation. The CBT program consisted of a one-hour session per week for 8 weeks, distributed as follows. First session: presentation of the therapist and the program, and start of psychoeducation on phobias. Second session: presentation and use of Subjective Units of Anxiety (SUAs) and physiological deactivation with breathing-control techniques. Third session: practice of breathing control, use of SUAs, and presentation of cognitive restructuring. Fourth to eighth sessions: review and practice of the contents covered in previous sessions, and exposure to phobic stimuli with real or virtual images through 3D glasses, with anxiety control for the management of cognitive distortions. The exposure time per session was approximately 27-32 min.

Table 2 shows the results of the intervention and active control groups on the instruments administered. No significant differences were observed between the groups on any instrument at any time point, and there were no differences on the HAQ-II-PV questionnaire regarding a possible effect of the therapist on the outcomes. However, the phobic anxiety scores of both groups had decreased significantly after treatment and remained lower at follow-up.

Table 2 notes: HAQ-II-PV = Revised Helping Alliance Questionnaire, Patient Version; S-R = S-R Inventory of Anxiousness. * The HAM-A was administered in person at the interview and before the second fMRI session, so this information was not available at follow-up, where the instruments were administered online.

Functional Brain Activation

Regarding the main effect of the treatment, regardless of the type of image presented, the following activations were observed after treatment in both groups: (1) bilateral activation of the thalamus and the inferior frontal lobe; (2) unilateral activation in the RH of the supplementary motor area, the prefrontal lobe, the precentral gyrus, and the precuneus; and (3) unilateral activation in the LH of the insula, cerebellum, and supramarginal gyrus (Figure 2). Significant differences were observed in the pattern of brain activity after treatment in both groups. The main activation effect caused specifically by exposure to images of the phobic stimulus, regardless of the exposure condition (real or virtual), was bilateral, although predominant in the left hemisphere (LH), and occurred in the mid-occipital cortex, cerebellum, and hippocampus. Bilateral activation was also observed, but with predominant extension in the right hemisphere (RH), in the fusiform gyrus. Other structures such as the temporal cortex, the calcarine sulcus, and the lingual gyrus were activated only in the RH. The inferior parietal cortex was the only structure that showed activation exclusively in the LH (Figure 3).
Figure 2. Main treatment effect (p < 0.001). The figure shows the differential brain activation of all participants (intervention group and active control group) after treatment when presented with phobic stimuli (regardless of the type of image: virtual or real, depending on the group). The degree of activation ranges from red (not very intense) to white (very intense).

Figure 3. The figure shows the differential brain activation of all participants (intervention group and active control group) caused specifically by exposure to images of the phobic stimulus (regardless of the exposure condition: virtual or real, depending on the group). The degree of activation ranges from red (not very intense) to white (very intense).

We used an inclusive mask based on an anatomical atlas to analyze the indicated regions in targeted structures related to specific phobia. All the structures discussed below showed activity before and after treatment (see Supplementary file, Table S1), so the differential activation at each time point will be discussed (i.e., activation before treatment > activation post-treatment, and activation post-treatment > activation before treatment). Table 3 shows these differential functional brain activations in both groups.

Table 3. Differential functional brain activation: before (pre) and after (post) treatment (columns: Active Control Group, Intervention Group).

Thalamus

Brain activity detected before treatment in the thalamus was significantly higher in the active control group. After treatment, this activity remained bilateral in the active control group and was recorded only in the right thalamus in the intervention group.

Amygdala

After treatment, the differential functional brain activation in the amygdala disappeared in the active control group (p < 0.001). However, a significant activation in the left amygdala was observed in the active control group at post-treatment with an extension of k ≥ 3 voxels (see Supplementary file, Table S1).

Occipital Cortex

Because the presentation of phobic stimuli involved the animal moving, the visual cortex (Brodmann areas (BA) 17, 18, and 19) was expected to be activated. In the inferior and medial occipital cortex, there were no statistically significant differences in brain activity in BA17 and BA18 between baseline and post-treatment in either group. However, differential activity in BA19 was greater at baseline than at post-treatment in both groups.

Frontal and Prefrontal Cortex

Regarding the most anterior areas, significant bilateral pre-treatment activity in the orbital frontal cortex disappeared after treatment in both groups.
The bilateral activity recorded in the dorsolateral prefrontal cortex in the active control group, and in the right dorsolateral prefrontal cortex in the intervention group, was significantly higher at pre-treatment than at post-treatment. The activity recorded in the right ventromedial prefrontal cortex in both groups disappeared after treatment.

Other Brain Structures Involved in Emotional Regulation and Specific Phobias

In the intervention (virtual image) group, the location of post-treatment activity in the anterior cingulate cortex shifted from a more medial to a rostral area. In the insula and fusiform gyrus, significantly higher bilateral activity was observed at pre-treatment in both groups. After treatment, a low level of activity was observed in the right insula only in the intervention group. Finally, baseline and post-treatment activity was observed in the precuneus in both groups, although in different areas. The precuneus was the only structure that showed higher post-treatment activity compared to baseline in both groups. Before treatment, significant activity was observed in the right precuneus in the intervention group and in the left precuneus in the active control group, coinciding with BA7. At post-treatment, a change of laterality towards the left precuneus was observed in the activity of the intervention group. In the active control group, post-treatment brain activity was observed bilaterally, coinciding with BA31 and BA23.

Discussion

The main aim of this study was to determine whether CBT + VRET is of comparable efficacy to CBT + exposure to real images in the treatment of specific phobia of small animals (specifically spiders, cockroaches, or lizards), and whether the results of the therapy are maintained in both treatment modalities over a three-month follow-up period according to the evaluation instruments administered. Our results are in line with systematic reviews indicating that virtual reality is a useful tool for the treatment of different mental disorders [59-61]. The main advantage of virtual reality lies in its ability to replicate real stimuli and situations and to individualize the intervention [60,62]. In addition, VRET can even be considered a preparation technique for in vivo exposure to the feared stimulus. Although it may seem expensive, some studies show that it is a cost-effective technique for the management of some mental disorders [63]. The changes observed in the self-report scales assessing the specific anxiety associated with small animals suggest comparable therapeutic efficacy between CBT + VRET and CBT + exposure to real images. These results are consistent with those observed in other studies [22,64]. Regarding the therapeutic alliance (HAQ-II-PV questionnaire), the literature shows that the therapeutic alliance can explain a high percentage of the variance in the treatment effect [65-67]. However, in this study, if this effect occurred, it was not significant in either group. Therefore, the therapeutic alliance did not seem to explain the results of CBT + VRET. We observed differences between the patterns of brain activity of the two groups at pre- and post-treatment, presumably due to the effect of therapy. In many structures, greater pre-treatment than post-treatment activation was observed in both groups.
However, this activity was greater in the active control group, perhaps due to the characteristics of the image, since the phobic stimuli used in the active control group were more similar to the phobic stimulus established through conditioning, that is, a real animal. The decrease in thalamic activity observed after therapy is consistent with the results of other studies that found reduced activity in limbic and paralimbic areas as a result of CBT [68]. However, due to the resolution of the fMRI in this study, it was not possible to know exactly which thalamic nucleus is activated at the time of exposure to the phobic stimulus. These data support the importance of the connectivity of the limbic and paralimbic circuits in the emotional dysregulation of pathological fear [69,70]. The activity observed in the posterior fusiform gyrus is in line with other studies reporting the central role played by this structure in the visuo-attentional network related to awareness of the visual stimulus [71]. In fact, this structure is specialized in high-level vision, for example, object recognition [72]. The decrease in fusiform gyrus activation found after therapy in both groups could be interpreted as a lower expenditure of hemodynamic resources by participants when visually attending to the phobic stimulus after therapy [71,72]. The results obtained in the intervention group showed that the activations recorded before and after therapy changed their location within the cingulate cortex: medial activity was observed in this structure at pre-treatment, but once therapy was completed, the activation of the cingulate cortex was recorded in the anterior area. These results are in line with research on the functional specificity of the anterior cingulate cortex depending on the site of its activation. In fact, the anterior cingulate cortex has been linked to a variety of functions, from the processing of rewards to the execution of self-control actions [73]. Some studies suggest that the demands of cognitive control executed by this brain region motivate new learning [74], thus biasing behavioral decision-making towards tasks and strategies that are cognitively efficient in terms of self-care [74,75]. As regards the activity recorded in the precuneus, greater bilateral activation was observed in the participants of both groups after completing the CBT program, as has been observed in previous studies [76]. This structure has been related to episodic memory, the integration of attentional and perceptual processes, visuospatial processing, self-awareness, and the response to emotional stimuli [77,78]. In this regard, the precuneus may act as an emotional regulator that reorganizes the processing of phobic stimuli. It is logical to expect that people who have overcome their phobia will have increased their levels of self-efficacy regarding their problem. This is likely to be reflected in an increase in the activity of this structure, since egocentric representations are achieved through a specific network that includes the right precuneus and the angular gyrus. In this study, since the change found in the amygdala before and after therapy was minimal, participants may have self-referenced differently when faced with the phobic stimulus, regardless of the characteristics of the image (real or virtual), which could hypothetically explain the greater energy consumption of this brain structure.
From a theoretical perspective, given the activations found in both groups, the results of this study seem consistent with processing along the long pathway of the dual-pathway processing model [30]. From a therapeutic perspective, the results fit the inhibitory learning model [79,80] better than the emotional processing model of fear [81,82], because even though a significant decrease in anxiety levels was observed, at the brain level the person would continue to feel afraid. These data suggest that a new adaptive response arises alongside the initial fear response, rather than replacing it, and that the person has learned another way to respond to fear through inhibition of the fear response. Some limitations of this study should be mentioned. Regarding the role of neuroscience in addressing specific phobias, we found some effect on the amygdala after treatment. However, it would have been interesting to perform an MRI test in the follow-up period to determine whether this activity was maintained after therapy and continued to differ from pre-treatment data. This type of study should also be extended to other phobias to determine whether the brain activity of individuals treated for such phobias is similar. Although we conducted a semi-structured interview to exclude participants with health conditions explaining their specific phobia, there was no specific analysis to detect comorbid disorders. Further research on functional connectivity is necessary to explore whether the activity of the amygdala upon visualization of phobic stimuli is due to activity in the long or the short pathway. In this study, given the technical characteristics of the MRI machine, changes could be inferred from the processing of the long pathway if activity were observed in the occipital cortex involved in this pathway, but it was not possible to determine what specific input caused the activity of the amygdala. Follow-up fMRI would also be especially interesting to explore whether the configuration of the precuneus activations obtained in the present study is permanent or whether this effect disappears over time. Longitudinal studies with larger samples are needed to explore whether the brain activation pattern of people treated with CBT + VRET or CBT + exposure to real 3D moving images approaches, over time, the brain activation of non-phobic people. Previous comparisons with people without phobia exposed to the same real images showed significant differences between the patterns of brain activity of participants with and without phobia [42,83].

Conclusions

CBT + VRET with 3D moving images was comparable in efficacy to CBT + exposure to real 3D moving images, and the results were maintained for up to three months after the end of therapy (according to the data from the instruments administered). The therapeutic benefits modified the brain activity pattern of people treated with a full CBT + VRET program, showing a decrease in the post-treatment activity of structures related to the visual-attentional processing of phobic stimuli. Moreover, the post-treatment activity identified in the precuneus suggests that people with phobia change their way of self-referencing with respect to the phobic stimulus in terms of their perception of reality. The main effect of the treatment was related to activity in more anterior brain areas involved in emotional regulation (e.g., the prefrontal cortex, the insula, and the precuneus).
However, the brain network linked to the emotional fear response (e.g., the amygdala) remained hemodynamically active, although its response pattern and intensity were modified. This suggests that the therapeutic effect of CBT + exposure to images (real or virtual) may be that the patient has learned another way of responding to fear, inhibiting the fear response.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10163505/s1, Table S1: Functional brain activation.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available in the supplementary material.
RNA secondary structures: from ab initio prediction to better compression, and back

In this paper, we use the biological domain knowledge incorporated into stochastic models for ab initio RNA secondary-structure prediction to improve the state of the art in joint compression of RNA sequence and structure data (Liu et al., BMC Bioinformatics, 2008). Moreover, we show that, conversely, compression ratio can serve as a cheap and robust proxy for comparing the prediction quality of different stochastic models, which may help guide the search for better RNA structure prediction models. Our results build on expert stochastic context-free grammar models of RNA secondary structures (Dowell & Eddy, BMC Bioinformatics, 2004; Nebel & Scheid, Theory in Biosciences, 2011) combined with different (static and adaptive) models for rule probabilities and arithmetic coding. We provide a prototype implementation and an extensive empirical evaluation, where we illustrate how grammar features and probability models affect compression ratios.

Introduction

In this article, we explore the interplay and potential symbiosis between data compression and probabilistic methods for predicting the folding structure of (non-coding) RNA molecules. Ribonucleic acid (RNA) is a bio-polymer that serves various roles in the coding, decoding, expression and regulation of genes in cells. An RNA molecule consists of a chain of nucleotides, each having a base attached to it (either adenine (A), cytosine (C), guanine (G), or uracil (U)); this string of bases forms the sequence of the molecule. Unlike the related DNA, RNA is usually single-stranded and forms spatial structures by folding onto itself (similar to proteins), with complementary bases forming a stabilizing hydrogen bond. The set of (indices of the) bases that form such pairs is the secondary structure of the molecule; it can be encoded in the dot-bracket notation (see Figure 1; a formal definition is given in Section 2). The secondary structure is instrumental for the biological function of non-coding RNA molecules and of great interest to biologists. Much research has hence been devoted to computationally predicting the secondary structure from a known RNA sequence (ab initio RNA secondary-structure prediction) [4,9,26], including human swarm intelligence [15], and it remains an active research area [22,7,23]. We explore areas around RNA secondary structures where innovations in compression methods are central for further progress.

Figure 1: An example RNA sequence and structure (sequence GCCCUGAUAGCGUAGUUACUAGCGAGUCUGUAUUCUAAGAAGAUCACUGAGGGUUCGCGGGG). Left: schematic drawing of the structure. Above: representation as a dot-bracket sequence when the backbone is "pulled straight".

Better RNA Compression. Our first goal is to use the domain knowledge on RNA folding incorporated into secondary-structure prediction models to obtain improved methods for the joint compression of the sequence and secondary structure of RNA molecules. With biological databases ever increasing, compressed representations become desirable. In the case of databases for non-coding RNA sequences with known secondary structures, the data volume has long remained manageable, but growth is now accelerating: for example, RNA Central [2] now aggregates over 25 million trusted secondary structures 8 years after its first release; 1.8 million of these come from the rfam database [11], collected over its 20 years of existence.
The need for space-efficient representations of joint RNA sequence and secondary structure databases was identified by Liu et al. in 2008 [16]. Their algorithm RNACompress, based on a stochastic context-free grammar (SCFG, defined below), has been recognized as an early application of ideas from grammar-based compression in the data-compression community [17,12]. As we demonstrate in this article, substantially better compression ratios can be achieved than Liu et al. report; interestingly, by carefully extending their very method to a general framework of SCFG-based compression. Improvements are then realized by applying this framework to tried and tested grammars from the RNA secondary structure prediction literature [3,20] (as well as further, orthogonal refinements). Apart from the practical utility of using less space, compression methods are of direct interest in bioinformatics as a way to upper bound the Kolmogorov complexity [13] of a dataset, and hence its inherent information content [8]. For example, in the context of RNA sequences, one can ask how much additional information is contained in the secondary structure of the RNA when the sequence is known.

Compression as a proxy for predictive power. Our second and main goal is to test our hypothesis that for comparing probabilistic models for RNA secondary structures, compression ratio can serve as a proxy for prediction quality in RNA secondary-structure prediction. Advances in next-generation sequencing allow determining the sequence of many molecules at scale, whereas secondary structures need to be determined by much more expensive techniques like X-ray crystallography [26]. A much cheaper and faster alternative is to computationally predict the structure from a known sequence. The state-of-the-art approaches either build on a chemical model of the molecules and try to identify a structure with minimal free energy, or use a machine-learning approach. Both can formally be described by stochastic context-free grammars (see Section 2). RNA secondary-structure prediction plays a vital role in studying the biological function of RNA molecules and in designing artificial RNA sequences, and so numerous software packages implement different algorithms for this task. Comparing their prediction quality is a delicate undertaking, because no definitive similarity metric is known to judge how close the predicted secondary structure is to an experimentally determined one [18]. Indeed, the method of choice in the literature to compare structure predictions is solely based on individual base pairs [18,3,21,20]: one compares the sensitivity and positive predictive value (PPV) of different approaches (defined in Section 2). We will use the compressed size (in bits per base) of the reference structure under the trained stochastic model as a more direct means to compare how well different models capture RNA folding behavior. This compressed size effectively reflects the log-likelihood of the reference structure and hence has a natural interpretation as the information content the model assigns to the RNA structure. This has several advantages over sensitivity/PPV: (a) It directly evaluates the quality of the model, separating it from the method used to produce a (single) predicted secondary structure. There are different options to predict a structure; one can use the most likely structure, or a consensus structure containing the most likely individual pairs, or return a sample of several nearly optimal structures.
No choice clearly dominates the others, but they affect the sensitivity and PPV scores. (b) Log-likelihood is a single natural metric derived from first principles of information theory; it does not need trade-offs or further parameters.

Contributions. Our contributions are as follows. First, we improve the compression ratio achieved for joint RNA sequence and structure data by 45% over the state of the art, Liu et al.'s RNACompress [16]; compared to the general-purpose compressor paq8l (http://mattmahoney.net/dc/#paq), we see a 70% improvement. The improvement over RNACompress is the combined result of several refinements, but a 30% reduction in compressed size is observed when keeping everything but the used SCFG constant. This clearly shows the relevance of the grammar and the validity of our approach of employing structure-prediction grammars. The proposal and implementation of the more sophisticated grammars (such as the one based on [20]) is hence a useful contribution. Second, we demonstrate that compression ratio can be used as a robust predictor of how well a grammar will perform for ab initio secondary-structure prediction. To our knowledge, this is the first such attempt to identify suitable probabilistic models for RNA structure prediction that is not based on comparing predicted structures to a benchmark dataset. Finally, we reproduce and confirm the computational study of [3] with an independent implementation and additional modifications to their grammars.

Related Work. Liu et al. [16] proposed RNACompress in 2008; we discuss their methodology in detail in Section 3. Naganuma et al. [19] explore a related method of SCFG compression closer to grammar-based compression using straight-line programs. They create a stochastic grammar from the text to compress with a variation of the RePair heuristic [14]. For a broader context of grammar-based compression, see the recent survey of Kieffer and Yang [12]. Friemel [6] also targets the joint RNA compression problem, but using a different approach. He encodes RNA structures as labeled trees, with each node representing a nucleotide and the branches representing the bonds; unpaired bases yield unary nodes. Friemel's algorithm RNAContract contracts sequences of unary nodes (similar to compact tries) or sequences of multiple nested brackets in the dot-bracket notation. After the node contraction, the algorithm encodes the contracted node tree using Huffman coding.

Outline. The rest of this paper is structured as follows. Section 2 collects required concepts. Section 3 explains the grammar-based compression of RNA. Then we report on our two studies: Section 4 discusses the compression achieved with various grammars and Section 5 explores the connection between compressed size and prediction quality. We conclude in Section 6 with future work. In the appendix, we give details on the comparison with a general-purpose compressor (Appendix A), list the precise grammars we used (Appendix B), and investigate further differences between our approach and [16] (Appendix C). Further details, all datasets and code to produce the figures in this article are available online as supplementary material: https://www.wild-inter.net/publications/onokpasa-wild-wong-2023; the code is available on GitHub: https://github.com/evita35/joint-rna-compression.

Preliminaries

Dot-bracket notation. An RNA sequence is a string of bases A, C, G, U. Stable hydrogen bonds are possible between A and U resp. C and G (the Watson-Crick pairs) and to a lesser extent also between G and U. RNA secondary structures can be represented by the dot-bracket notation [10]: a well-nested string over {•, (, )} where a base pair is denoted by matching parentheses () and an unpaired base by •; see Figure 1 for an example. We use "RNA" as an abbreviation for "a pair of an RNA sequence and its secondary structure".
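As a concrete illustration of this notation (our own sketch, not part of the paper's Java implementation; the function name is our choice), the paired base indices can be recovered from a dot-bracket string with a stack in a few lines of Python:

def pairs(dot_bracket):
    """Return the set of base-pair index tuples encoded by a dot-bracket string."""
    stack, result = [], set()
    for i, c in enumerate(dot_bracket):
        if c == '(':
            stack.append(i)                    # remember the opening position
        elif c == ')':
            result.add((stack.pop(), i))       # match with the most recent opener
        # '.' (the unpaired-base symbol) is simply skipped
    if stack:
        raise ValueError("string is not well-nested")
    return result

# Example: '((...))' pairs positions 0-6 and 1-5.
assert pairs('((...))') == {(0, 6), (1, 5)}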
SCFG. Dot-bracket strings can be generated by a context-free grammar (CFG). A CFG is a tuple (N, T, R, S), where N and T are finite sets of nonterminals and terminals, respectively, R is a finite set of production rules, and S ∈ N is the start symbol. A stochastic CFG (SCFG) additionally assigns weights W to the rules such that, for every A ∈ N, W represents a probability distribution over the set of rules with left-hand side A.

Earley Parser. The Earley parsing algorithm [5] is able to process any SCFG and efficiently determine whether a string belongs to the language of the grammar. We use the Earley parser implementations by [25,27] when comparing various SCFGs, since Earley parsing does not require a rigid normal form for grammars.

RNA secondary-structure prediction. A stochastic context-free grammar can be used for RNA secondary-structure prediction, where terminals correspond to bases and a leftmost derivation of an RNA sequence encodes a secondary structure of the sequence. The SCFGs used allow many different derivations (and hence secondary structures) for a given sequence, and the rule probabilities induce a probability distribution over those. Using a classical machine-learning approach, the rule probabilities are chosen as maximum-likelihood parameters w.r.t. a given training dataset (with known secondary structures). For predicting/inferring the (unknown) secondary structure of a new RNA sequence, a probabilistic parser determines the maximum-likelihood derivation (Viterbi parse) of the RNA sequence in the SCFG, which encodes the most likely secondary structure (under the given probabilistic model). We measure the quality of prediction by sensitivity and positive predictive value (PPV): the fraction of correctly predicted base pairs among all pairs in the reference structure resp. all pairs in the predicted structure. More formally, let TP, TN, FP, FN be the number of base pairs that are true positives, true negatives, false positives, and false negatives, respectively. Then Sensitivity = TP/(TP+FN) and PPV = TP/(TP+FP).
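To make the two metrics concrete, here is a short hedged Python sketch (our own illustration, reusing the pairs function from the sketch above) that computes sensitivity and PPV for a predicted structure against a reference:

def sensitivity_ppv(reference, predicted):
    """Compare two dot-bracket strings via their base-pair sets."""
    ref, pred = pairs(reference), pairs(predicted)
    tp = len(ref & pred)    # correctly predicted pairs
    fn = len(ref - pred)    # reference pairs that were missed
    fp = len(pred - ref)    # predicted pairs not in the reference
    return tp / (tp + fn), tp / (tp + fp)

# Example: one of two reference pairs is recovered, with no spurious pair,
# giving sensitivity 0.5 and PPV 1.0.
sens, ppv = sensitivity_ppv('((...))', '(.....)')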
RNA compression using stochastic context-free grammars

We now show how to jointly compress an RNA sequence and secondary structure using an SCFG G. This method has been used by Liu et al. [16] on a fixed grammar; we generalize it here to arbitrary grammars G and rule-probability models. The terminals of G are pairs of characters, e.g., A( for base A in the RNA sequence and ( in the (dot-bracket representation of the) secondary structure. To encode an RNA, we determine the sequence of rules in a leftmost derivation of the RNA and then encode this sequence of rules, using a model for the rule probabilities and a standard code; Liu et al. use a fixed Huffman code, whereas we employ arithmetic coding [28]. We illustrate the process on a short example RNA, using the rules R shown in Table 1 (the rules of the grammar from [16], including the partition of the unit interval as used in arithmetic coding). Since we always replace the leftmost nonterminal, the next nonterminal to replace is known inductively, and we can reconstruct the leftmost derivation from only the (indices of the) used right-hand sides: 1, 4, 1, 7, 2, 2, using the order of rules in Table 1 (the 4 indicates that the second used rule, where we know it expands L, is the 4th rule with left-hand side L, i.e., L → G( S C)). Now suppose we have the (static) rule probabilities for R from Table 1 and we use arithmetic coding to store the right-hand sides. We obtain the corresponding sequence of intervals from the rules, which we encode using arithmetic coding to obtain the final binary codeword: 0011010100100.

The example above (and [16]) uses a static rule-probability model, usually obtained from a training dataset with known structures by counting how often each rule is used in the dataset derivations. With arithmetic coding, we can easily replace this by an adaptive rule-probability model, where rule probabilities are computed as relative frequencies in the prefix encoded so far (starting with some initial value for the counters, typically 1). This entirely avoids the need for a second pass or a training dataset, as well as storing the rule probabilities. For long inputs, the adaptive model converges to the sequence-specific relative rule frequencies; we hence also include the semi-adaptive model, where rule counts are determined for the given sequence in a first pass. Unless one also stores the rule counts, this model does not allow decoding, but it indicates the limiting behavior of the adaptive model.

Joint compression of RNA sequence and secondary structure

To investigate the effectiveness of different parameters, we have developed a generic prototype implementation in Java that allows us to combine arbitrary SCFGs, rule-probability models, and final encoders (Huffman or arithmetic coding). We use an existing open-source Earley parser implementation [25] for obtaining a parse tree (given an SCFG and an RNA with sequence and structure). Apart from G_L from [16], we use the structure-prediction grammars from [3] and [20]. Since non-canonical bonds are regularly found in experimentally determined secondary structures, all our grammars come in two versions: one that only allows the Watson-Crick and "G-U wobble" pairs, and one that allows all 16 pairs. The difference for compression is small: while most RNA structures do contain non-canonical bonds, most contain only very few of them. For the compression-quality study, we use the "friemel" dataset, consisting of 17 000 ribosomal RNAs from [1], where ambiguously sequenced bases, non-canonical base pairs and pseudoknots have been removed [6]. Each RNA in the given datasets is stored in a text file using the dot-bracket notation. 24 RNAs contained empty hairpin loops; since 2 grammars from [3] exclude these, we replaced the innermost pair by two unpaired bases; for the evaluation, we exclude these 24 RNAs.

Figure 2 shows the compression quality of different grammars, normalized to the (average) number of bits per base in the RNA. It is striking that the current state-of-the-art method from the literature, Liu et al.'s RNACompress [16], performs much worse than all the structure-prediction grammars (for all rule-probability models), indicating that these grammars indeed incorporate effective domain knowledge about RNA structures.
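The cost accounting behind the rule-probability models compared in Figure 2 can be sketched in a few lines of Python (our own illustration; the paper's prototype is in Java). Under an ideal arithmetic coder, a derivation costs the sum of -log2 of the probabilities of its rules; the adaptive variant updates per-nonterminal counts as it goes:

from math import log2

def static_bits(rules, prob):
    """Ideal arithmetic-coding cost of a rule sequence under fixed probabilities."""
    return sum(-log2(prob[r]) for r in rules)

def adaptive_bits(rules, alternatives):
    """Adaptive model: per-nonterminal rule counts, initialized to 1.
    `alternatives` maps each nonterminal to the list of its rules;
    each applied rule is given as a (nonterminal, rhs_index) tuple."""
    counts = {a: [1] * len(alts) for a, alts in alternatives.items()}
    bits = 0.0
    for nonterminal, rhs in rules:
        c = counts[nonterminal]
        bits += -log2(c[rhs] / sum(c))   # code the rule under current counts
        c[rhs] += 1                      # then update its count
    return bits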
Also note that a simplistic encoding of the RNA sequence alone would use 2 bits/base; the most sophisticated grammars come very close to that for the joint encoding of sequence and structure: 2.21 bits/base on average for the grammar of Nebel and Scheid [20]. The large grammars G2, G7, and G8 [3] (those with "stacking parameters") and the huge grammar by Nebel and Scheid [20] perform overall best. But some much smaller grammars like G6 come very close, despite having a factor 10 fewer parameters. This shows that it is the structure of the grammar, not merely the number of parameters of the model, that improves compression of RNA secondary structures.

Compression ratio vs. prediction quality

We have seen that the choice of the grammar heavily influences the compression quality of our generic joint RNA compressor. In this section, we take a closer look at this grammar dependence from the perspective of both compression and secondary-structure prediction. For that, we reproduced the classic study of Dowell and Eddy [3] comparing several hand-crafted SCFGs for their ability to correctly infer RNA secondary structures given only the RNA sequence as input. Due to bugs in [25], we here used the probabilistic Earley parser from [27]. We use the original datasets from [3] (available at http://eddylab.org/software/conus/): the "benchmark" dataset was used in [3] to compare the prediction quality of SCFGs whose rule probabilities have been trained on their "mixed80" dataset; see [3] for further details. Both datasets contain many non-canonical bonds, and 8 RNAs contain empty hairpin loops; we again eliminated the latter. Mixed80 contains numerous ambiguous bases; these were randomly replaced with a compatible base.

Figure 3 shows the results of comparing, for each grammar, how well it compresses the benchmark dataset of RNAs and how well it predicts secondary structures of this set (using the setup and parameters as in [3]).

[Figure 3: Scatter plot of compression vs. prediction quality (static model trained on mixed80) for the grammars from [3]. Each grammar is presented as one point with error bars. The x-axis shows the compressed size (in bits per base) for joint compression of RNA sequence and secondary structure, averaged over the benchmark dataset [3]. Horizontal error bars show one standard deviation of compressed size over the benchmark dataset. The y-axis shows the geometric mean of sensitivity and PPV (for each predicted RNA secondary structure, averaged over the benchmark dataset); error bars show one standard deviation. For the ambiguous grammars G1 and G2, no vertical error bars are available (we did not reproduce predictions for these; the average is taken from [3]). Both compression and prediction use the same training dataset (mixed80 from [3]) to determine the parameters of the grammars; compression here uses the static model for rule probabilities.]

Taking into account the variability across different RNAs within the dataset, a clear and strong negative correlation is visible between compressed size and prediction quality; in particular, there is a clearly distinct cluster of grammars that simultaneously give the best compression and the best prediction. At least for the grammars from [3], this shows that one can use compressed size as a more rigidly defined and robust proxy for secondary-structure prediction quality.
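The headline correlation can be reproduced from per-grammar summary statistics with a few lines of Python (our own sketch; the arrays below are placeholder values, not the paper's numbers):

import numpy as np

# Per-grammar means over the benchmark dataset (placeholders):
bits_per_base = np.array([2.3, 2.4, 2.6, 3.0, 3.1])
sensitivity   = np.array([0.60, 0.55, 0.50, 0.35, 0.30])
ppv           = np.array([0.55, 0.50, 0.42, 0.28, 0.26])

# Prediction quality as the geometric mean of sensitivity and PPV,
# i.e., the y-axis of Figure 3.
quality = np.sqrt(sensitivity * ppv)

# A strongly negative Pearson correlation supports using compressed
# size as a proxy for prediction quality.
print(np.corrcoef(bits_per_base, quality)[0, 1])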
Figure 4 takes a closer look at the correlation on a per-RNA level. Even there, a correlation remains visible; in particular, very accurately predicted structures are also well compressed. The right panel in Figure 4 shows that compressed size for different grammars is very strongly correlated; pictures for other grammar pairs are similar (excluding the poorly performing G1, G4, and G5). Note that despite the strong correlation at RNA level, there is a significant difference in the (mean) compression ratio between different grammars. This might indicate that there are intrinsically more and less "surprising" RNA secondary structures (knowing only the RNA sequence).

[Figure 4: Left: compressed size against prediction quality using G6. Right: compressed size using G6 against compressed size using G8. All compression methods use the static rule probabilities trained on mixed80.]

Conclusion

In this paper, we demonstrated how domain knowledge of RNA secondary structures encapsulated in stochastic context-free grammars for structure prediction can be used to obtain the best single-RNA compression ratios known for this type of data. Moreover, we showed promising first evidence for the utility of compression ability as a cheap and robust proxy for prediction quality in RNA secondary-structure prediction. This work opens up several enticing avenues for future research. Using compression ability as a simpler guide, we are working on an approach to discover new promising models for secondary-structure prediction. It would be interesting to investigate whether the robust correlation between prediction quality and compressed size continues to hold for large grammars with many parameters; here prediction could suffer due to overfitting issues, whereas compression might continue to see improvements from additional parameters. Since many natural RNA secondary structures contain "pseudoknots", a principled approach for compressing such structures would be interesting. If the compression-prediction correlation can be demonstrated in this domain as well, the lack of reliable free-energy models for pseudoknotted RNA structures and the relative lack of high-fidelity training data would make compression ability of even greater value in the search for better prediction models.

An alternative version does not have the rule M → T; that grammar then disallows hairpins of length one, i.e., '( • )'.

Unsurprisingly, arithmetic coding produced better compression results than Huffman coding, but the difference between the means is only 2.7%. Figure 5 shows the distribution of compressed size over the RNAs; while arithmetic coding has a moderate impact on the mean compressed size, it helps a lot to bring down the right tail. The scatter plot in Figure 6 further shows that indeed, arithmetic coding (with this fixed static model) does better on almost all RNAs, and the effect is bigger for those RNAs that are compressed worse.

[Figure 5 (caption fragment): ... and RNACompress with arithmetic coding, and the RNACompress variant with the ε-rule-free grammar. The vertical bars show, from left to right, the 1% quantile, mean, 99% quantile, and maximum. The means are at 3.195 resp. 3.110 bits per base.]

[Figure 6: The same data as in Figure 5, but as scatter plots with one point per RNA.]

C.2. Nullable Grammar vs. Non-Nullable Grammar

Liu et al. [16] originally use a nullable grammar (given in the notation from Appendix B). For general parsers, ε-rules are often inconvenient; we therefore modified this grammar to the ε-rule-free grammar G_L shown in Appendix B.
This transformation makes the probabilistic model slightly richer and so will help compression, but it does not change the nature of the grammar; the structure of leftmost derivations of strings remains (almost) the same. (We here ignore the fact that the empty string is no longer in the language of the modified grammar G_L, while it was derivable in the original nullable grammar. For RNA compression, this is not relevant.) We manually implemented a parser for the original nullable grammar and compared the compression outcomes. As Figure 5 shows, this very moderate enrichment of the probabilistic model has a larger impact than moving from Huffman to arithmetic coding. The scatter plot in Figure 6 (right) shows that, again, we never do worse with G_L than with the original nullable grammar, but that this time, the biggest savings occur for the (much larger number of) RNAs that are compressed well.
The effect of butyric acid glycerides on performance and some bone parameters of broiler chickens

Concern about enhancing the natural defense mechanisms of animals and reducing the massive use of antibiotics has led to a ban on antibiotics as growth promoters and to studies in this field. This research was therefore done to investigate the effect of butyric acid glycerides and salinomycin sodium on the performance of broiler chickens (strain Ross 308). A total of 800 chickens were reared for 42 days. A three-factor statistical design with 4 replicates was used, and each factor contained 2 levels (25 broilers in each pen). The factors were butyric acid glycerides (0 and 0.3% of diet), salinomycin sodium, an anticoccidial drug (0 and 0.5% of diet), and litter moisture (normal litter with average moisture of 35% and wet litter with average moisture of 75%). Data were collected and analyzed in SAS with the GLM procedure. The results showed that butyric acid glycerides had no significant effect on feed intake. Weight gain and feed conversion ratio were not significantly affected by the mentioned factors. The effect of the treatments on the number of Eimeria oocytes in excreta in the second and fourth week of breeding and on feed intake was significant (p<0.05). Diet acidification with butyric acid glycerides caused an increase in ash, calcium and phosphorus of the chicken tibia, but this increase was not significant (p>0.05). Considering the results of this experiment, the use of butyric acid glycerides and salinomycin sodium at the aforementioned levels had no positive effect on the performance of broiler chickens (p>0.05).

INTRODUCTION

In the past, antibiotics were included in animal feed at sub-therapeutic levels, acting as growth promoters (Dibner and Richards, 2005). Worldwide concern about the development of antimicrobial resistance and the transfer of antibiotic resistance genes from animal to human microbiota led to the placement of a ban on the use of antibiotics as growth promoters (Mathur and Singh, 2005; Salyers et al., 2004). There is a need to look for viable alternatives that could enhance the natural defense mechanisms of animals and reduce the massive use of antibiotics (Verstegen and Williams, 2002). One way is to use specific feed additives or dietary raw materials to favorably affect animal performance and welfare,
particularly through the modulation of the gut microbiota, which plays a critical role in maintaining host health (Tuohy et al., 2005). A balanced gut microbiota constitutes an efficient barrier against pathogen colonization, produces metabolic substrates such as vitamins and short-chain fatty acids, and stimulates the immune system in a non-inflammatory manner. Using new feed additives (for example, enzymes, organic acids, probiotics, prebiotics and herbal extracts) with host-protecting functions to support animal health is a topical issue in animal breeding and creates fascinating possibilities. The use of organic acids is very appropriate because of their ease of use, accessibility, low risk of reinfection, positive effects on broiler performance, lack of bacterial resistance, promotion of a proper balance of intestinal flora, and prevention of feed nutrient destruction (Waldroup and Kanis, 1995). The mechanism of action of organic acids is totally different from that of antibiotics. Organic acids are lipophilic in their undissociated form and can easily pass through the bacterial cell membrane. An organic acid dissociates inside the bacterial cell and causes a pH reduction in the cytoplasm, which consequently disturbs enzyme activity and material transport. The bacterium tries to pump H+ ions out of the cell to protect homeostasis, which is an energy-consuming (endergonic) activity. In this way, organic acids reduce the energy available for other bacterial activities. RCOO− ions can also have negative effects on DNA and bacterial cell division. Thus, organic acids can act as bactericidal compounds and cause bacterial death (Chaveerach et al., 2008; Dibner and Buttin, 2002; Griggs and Jacob, 2005; Partanen and Mroz, 1999). Among short-chain fatty acids, butyric acid has received special attention. The liquid form of butyric acid is given to the bird mainly in combination with water, while the powder form is given with the diet. By using methods such as mineral carriers, esterification with glycerol and also encapsulation, organic acids are protected from being absorbed in the upper parts of the digestive system. A study by Bolton and Dewar (1965) showed that 60% of butyric acid was absorbed in the crop alone and less than 1% of this acid reached the lower parts of the small intestine. Butyric acid glycerides were therefore used in this experiment in order to prevent quick absorption in the upper parts of the digestive system. Various experiments have shown that organic acids can be used to control disease-causing bacteria such as Salmonella, Campylobacter and E. coli (Chaveerach et al., 2008; Van Immerseel et al., 2005), but only a few studies have examined the effect of butyric acid on other microorganisms of the digestive system.

This research was conducted to study the effect of butyric acid glycerides on the performance and some bone traits of broiler chickens and on the microbial population of the digestive system, especially the protozoan Eimeria. Different factors, such as litter moisture and the presence or absence of an anticoccidial drug (salinomycin sodium), were included in the experimental design in order to measure the antimicrobial power of butyric acid.

Birds and diets

In this research, a completely randomized design with a factorial arrangement was used; 3 factors were selected, each with 2 levels.
A total of 800 male broiler chickens (Ross 308) were obtained from a local breeding farm. The experimental factors were butyric acid glycerides (0 and 0.3% of the diet), salinomycin sodium, an anticoccidial substance (0 and 0.5% of the diet), and litter moisture (normal litter with average moisture of 35% and wet litter with average moisture of 75%). Upon arrival, chickens were wing-banded, weighed and randomly allocated to 8 treatment groups of 100 birds each. Each group was further divided into 4 replicates of 25 birds. All replicates were housed in 32 separate wire-suspended cages equipped with plastic sides, with the bottoms covered with clean wood shavings. Light was provided continuously for the duration of the experiment. The temperature in the cages was 32°C on arrival of the chickens. From day 8 of the experiment, the temperature was gradually decreased by 2°C every day until it reached 20°C by day 14. Feed and water were available ad libitum.

The UFFDA program was used for diet formulation, based on the National Research Council recommended tables (National Research Council, 1994). Mash diets were used in this experiment. In order to compare the effect of butyric acid glycerides with salinomycin sodium, this anticoccidial drug was added to the experimental diets at 0.5 kg/ton during the grower and finisher stages. Before the experiment, chemical analyses of the experimental diets were performed according to the methods of the AOAC (Association of Official Analytical Chemists, 1990). The ingredients and composition of the experimental diets are presented in Table 1. Butyric acid and salinomycin sodium were added to the basal diet by substitution at the expense of corn. The starter diet was fed until day 10, the grower diet from day 11 to 28, and the finisher diet from day 29 to 42.

Traits and data collection

Data were collected on the number of coccidia oocytes in the excreta, feed intake, weight gain and feed conversion ratio, as well as the amount of mineral storage in the chicken tibia (ash, calcium and phosphorus).

In order to determine the number of Eimeria oocytes, fresh excreta samples were collected from the four corners and the middle of each cage on days 14, 21, 28, 35 and 42 of the experiment. Excreta collection was done in the evening and the samples were stored overnight in a refrigerator. The oocytes of each cage were counted the next day and the numbers were expressed per g of excreta. For oocyte counting, a modified McMaster counting chamber technique of Hodgson (1970) was used. A 10% (w/v) feces suspension in a salt solution (151 g NaCl mixed into 1 L of water) was prepared. After shaking thoroughly, 1 ml of the suspension was mixed with 9 ml of a salt solution (311 g of NaCl mixed into 1 L of water). Then, the suspension was put into the McMaster chamber using a micropipette and the number of oocytes was counted (Peek and Landman, 2003).

Body weights were measured on days 10, 28 and 42. Feed intake was determined per week for every cage and expressed as g/bird/day. The feed conversion ratio was calculated as feed intake per cage divided by the weight gain of birds in the cage. At the end of the experimental period (42 days of age), one broiler chicken from each replicate was randomly selected. Live weights of the birds were recorded after a 12-h fasting period. The selected birds were subjected to feed withdrawal overnight, permitting gut clearance, after which they were killed via neck cutting.
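As a hedged illustration of these two calculations (the exact multiplication factor of the modified McMaster technique is not stated in the text, so the factor below is an assumption for illustration only, not the constant used in the study), the per-gram oocyte count and the feed conversion ratio could be computed as follows in Python:

def oocytes_per_gram(chamber_count, factor=100):
    """Oocytes per g of excreta from a McMaster chamber count.
    `factor` combines the overall dilution (10% suspension, then 1:10)
    and the chamber volume; 100 is a placeholder assumption."""
    return chamber_count * factor

def feed_conversion_ratio(feed_intake_g, weight_gain_g):
    """FCR per cage: total feed intake divided by total weight gain."""
    return feed_intake_g / weight_gain_g

# Example: a cage consuming 55 kg of feed for 30 kg of gain has FCR of about 1.83.
print(feed_conversion_ratio(55_000, 30_000))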
To study the effect of butyric acid glycerides on the digestibility and absorption of minerals in the diet, measurement and comparison of the amount of mineral storage (ash, calcium and phosphorus) in the tibia were done for treatments 1 and 3 on days 14 and 35 of breeding. After the chickens were euthanized with CO2, the left tibiae were removed from the body, packed in nylon bags, labeled and transferred to a cool mortuary (4°C) for storage. To determine ash, calcium and phosphorus contents, the bones were transferred to a lab, where they were boiled in water and dried in an oven for 24 h following flesh and cartilage removal. The bones were then placed in a Soxhlet apparatus for 16 h for fat extraction, after which they were transferred to a drying oven and electric furnace for 8 h in order to obtain ash. The ash was then weighed in order to determine the ash percentage of the bones. It was then used to determine the calcium and phosphorus contents using the standard methods recommended by the Association of Official Analytical Chemists (1990).

[Table 1 footnotes: 1 Content per 2.5 kg: vitamin A, 9,000,000 IU; vitamin D, 2,000,000 IU; vitamin E, 18,000 IU; vitamin K, 2,000 mg; vitamin B1, 1,800 mg; vitamin B2, 6,600 mg; vitamin B3, 10,000 mg; vitamin B5, 30,000 mg; vitamin B6, 30,000 mg; vitamin B9, 1,000 mg; vitamin B12, 15 mg; vitamin H2, 100 mg; choline chloride, 500,000 mg; antioxidant, 3,000 mg. 2 Content per 2.5 kg: manganese, 100,000 mg; iron, 50,000 mg; zinc, 100,000 mg; copper, 10,000 mg; iodine, 1,000 mg; selenium, 200 mg; cobalt, 100 mg.]

[Note for Tables 2 to 4: A1 and A2 were supplemented with 0 and 0.3% butyric acid glycerides, B1 and B2 were supplemented with 0 and 0.5% salinomycin sodium, and C1 and C2 denote normal litter with an average moisture of 35% and wet litter with an average moisture of 75%, respectively; a,b: means within columns with different superscripts differ significantly at P<0.05; n.s., not significant.]

Statistical analysis

Analysis of variance was performed by applying the 3-way ANOVA procedure of SAS (2004). Comparison of means was done by Duncan's multiple range test (Duncan, 1955).
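For readers without SAS, an analogous three-way factorial analysis can be sketched in Python with statsmodels (a minimal sketch assuming a long-format data frame with hypothetical file and column names; it mirrors, but is not, the GLM procedure used in the study):

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# df: one row per pen, with factor columns 'acid' (0/0.3%), 'salinomycin'
# (0/0.5%), 'litter' (normal/wet) and a response such as 'oocytes'.
df = pd.read_csv("broiler_pens.csv")

# Full factorial model: main effects plus all two- and three-way interactions.
model = smf.ols("oocytes ~ C(acid) * C(salinomycin) * C(litter)", data=df).fit()
print(anova_lm(model, typ=2))  # Type II ANOVA table with F-tests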
RESULTS AND DISCUSSION

Data analysis showed that in the second and fourth week of breeding, the experimental treatments had a significant effect on the number of Eimeria oocytes per g of excreta (p<0.05) (Table 2). Treatment 2, which had wet litter and neither butyric acid glycerides nor salinomycin sodium, showed the most infection by the Eimeria protozoan. This treatment differed significantly from the control in the fourth week of the experiment. Litter moisture can increase infection and provide better environmental conditions for coccidia oocyte growth. In the second and fourth week of breeding, the fewest oocytes were observed in treatment 3, containing butyric acid glycerides, and in treatment 7, containing salinomycin sodium and butyric acid glycerides, respectively. In the other weeks of breeding, the experimental treatments had no significant effect on this parameter (p>0.05). The reduction in oocyte production under the interaction of butyric acid, salinomycin and litter moisture indicates possible anticoccidial activity. Thus, it can be concluded that the butyric acid and salinomycin preparation has the potential to lower the severity and pressure of the infection while maintaining oocyte production, which is crucial for re-infection and the maintenance of the immunity stimulated by the initial infection. In the different weeks of breeding, the experimental factors butyric acid glycerides, salinomycin sodium and litter moisture had no significant main effects on this parameter (p>0.05). Some researchers have shown that using organic acids caused a significant change in the microbial balance of the digestive system and consequently improved bird performance (Van Immerseel et al., 2005), which does not agree with the results of this experiment. This difference may be due to the following reasons: the strains tested in those studies, such as Salmonella and Campylobacter, were not resistant to the acids, and the effect of organic acids on acid-resistant strains such as Eimeria was not studied in any of them. Moreover, since each organic acid has its own antimicrobial power, using other acids or a mixture of acids with a synergistic effect could produce different results. In addition, higher levels of butyric acid glycerides may be needed to destroy Eimeria oocytes in excreta. In a study by Conway et al. (2002), it was reported that salinomycin sodium had no significant effect on the amount of infection by the Eimeria protozoan. These researchers showed that salinomycin, in comparison with diclazuril and roxarsone, has less power to control Eimeria oocytes. Increasing resistance of Eimeria oocytes against ionophores can also be one of the reasons for the non-significant reduction of oocytes in response to adding salinomycin sodium to the diet. This result agrees with the findings of Ali et al. (2002) and Goncagul et al. (2004).
The effect of the experimental treatments on feed intake was significant (p<0.05) (Table 3), in that the highest and lowest values for feed intake were observed in the control and treatment 7, respectively. This reduction, in comparison with the control group, was significant (p<0.05). Treatment 7 was supplemented with butyric acid glycerides and salinomycin sodium on normal litter. Feed intake in treatment 5 (with salinomycin sodium) and treatment 7 (with salinomycin sodium and butyric acid glycerides) showed a significant reduction in comparison with the control. The level of salinomycin sodium used could cause a reduction in feed intake, and this effect could be increased by adding butyric acid glycerides to the diet containing salinomycin sodium. According to the results, the main effects of the experimental factors (butyric acid glycerides, salinomycin sodium and litter moisture) on feed intake were not significant (p>0.05), although the numerical value of feed intake decreased with increasing levels of these factors. This result is in agreement with the findings of Pinchasov and Jensen (1989), who reported that butyric acid, in contrast to propionate, has no significant effect on broiler performance and feed intake.

According to Table 3, the different treatments had no significant effect on weight gain (p>0.05). The main effects of butyric acid glycerides and litter moisture on weight gain were not significant (p>0.05). This observation is in agreement with the findings of Leeson et al. (2005). Using salinomycin sodium at 0.5% in the diet caused an improvement in weight gain, but this improvement was not significant (p>0.05), as was also reported elsewhere (Ali et al., 2002). The experimental treatments and factors had no significant effect on the feed conversion ratio (p>0.05) (Table 3), although the salinomycin sodium factor improved the feed conversion ratio.

Using butyric acid glycerides had no significant effect on the percentage of tibia ash, calcium and phosphorus at 14 and 35 days of age (p>0.05) (Table 4). However, on these two dates, consumption of this feed additive, and consequently diet acidification, caused a non-significant increase in the values of the mentioned parameters. A study by Boling-Frankenbach et al. (2001) showed that using citric acid caused a significant increase in tibia ash and calcium, which does not agree with the result of this experiment. This can be due to the different acidities of butyric and citric acids. In addition, unprotected organic acids are lipophilic and are mainly absorbed in the crop, whereas the butyric acid glycerides used in this experiment are mainly released in the lower parts of the digestive system, which have fewer absorption sites.

In conclusion, considering the conditions of this experiment and the values of the parameters, butyric acid glycerides and salinomycin sodium used at the mentioned levels had no significant positive effect on the performance of broiler chickens.

Table 1. Composition of experimental diets.
Table 2. Main and interactive effects of experimental factors on the number of Eimeria oocytes per g of excreta in different weeks of breeding.
Table 3. Main and interactive effects of experimental factors on feed intake, weight gain and feed conversion ratio.
Table 4. Main and interactive effects of experimental factors on the amount of mineral storage in chicken tibia (ash, calcium and phosphorus).
Macroeconomic Stability and Inclusive Growth in Nigeria: A Cointegration Approach

The strategy of inclusive growth is a newly introduced concept in development economics that emerged in the late 2000s out of the gross failure of traditional growth models to deal with the contemporaneity of high economic growth on the one hand, and soaring poverty, inequality and unemployment on the other, particularly in the developing world. Ever since, it has dominated policy-making frameworks around the world. This study sets out to examine the inclusiveness of growth in Nigeria and the role of macroeconomic stability in spurring inclusive growth and development in Nigeria, using data for the period 1960-2012. Due to the lack of a standard measure of inclusive growth, an index of inclusive growth has been constructed from 23 agricultural, economic, education, environmental and health variables by applying Principal Component Analysis and the Human Development Index formula. The econometric approaches of Johansen cointegration testing and a vector error correction model were further employed to test the long-run relationship between macroeconomic stability and inclusive growth in Nigeria. Our findings establish three stylized facts: firstly, there is a long-run relationship between all the regressors and inclusive growth; secondly, macroeconomic stability has a significant impact on inclusive growth, as GDPV and INV revealed an inverse relationship with inclusive growth; lastly, TOP, FDI, C-GDP and GFC have negative impacts on inclusive growth. Hence the recommendation that there should be committed and sincere efforts towards diversifying the economy so as to contain volatility by reducing the dominance of the oil sector in the economy. Moreover, a macroeconomic policy targeting moderate inflation should be formulated to make the economy stable and favorable for inclusive growth.

I. Introduction

It has been corroborated that the strategy of inclusive growth emerged in the late 2000s out of the colossal failure of traditional growth models to address the challenge of the coexistence of growth, poverty and inequality in the developing world. In other words, developing countries have witnessed high economic growth in recent times, yet with high poverty incidence and unemployment. Growth is inclusive if it proceeds at a rapid and sustainable pace over the long run, reduces poverty substantially, and is broad-based across sectors, thereby guaranteeing full employment of the labor force (Ianchovichina and Lundstrom, 2009). The inclusive growth paradigm brings about sustained economic growth with equal and equitable opportunities for the citizenry in the form of employment generation, rising per capita income, availability of goods and services, price stability, and access to better socioeconomic services, among others. As a result, the policy-making environment has been engrossed with the inclusive growth strategy. Thus, policies that foster inclusive growth should create a favorable investment climate, clear obstacles to growth, and result in a greater number of opportunities in the society. Macroeconomic stability is a necessary condition for inclusive growth. This is particularly so as macroeconomic stabilization policies (fiscal and monetary) are fundamental in achieving full employment, price stability, and high and sustained growth in an economy (Groepe, 2012).
In view of the nature of inclusive growth, it is said to be central to attaining the MDGs, as the MDGs advocate for human dignity, equality and equity over and above their primary goal of poverty eradication. Just like many other developing countries, Nigeria is well known for the paradox that, though it is blessed with human and natural resources and is experiencing superb economic growth, its people remain impoverished, poor and unemployed. This is further compounded by growing inequalities. In 2011, Nigeria witnessed an economic growth rate of 7.4%, with a better outlook for the future. Unfortunately, the poverty and unemployment rates in the same year stood at 54.4% and 23.90%, respectively (World Bank, 2013 and NBS, 2011). Against this backdrop, the paper strives to answer the following questions: has economic growth in Nigeria been inclusive? And can macroeconomic stability lead to inclusive growth in Nigeria? The remaining part of the paper is organized as follows: Section 2 reviews the related studies, while Section 3 focuses on the methodology of the study. Section 4 presents and interprets the estimated results. Finally, Section 5 concludes the paper with policy implications and recommendations.

A. History of the Inclusive Growth Strategy

The traditional models of Kuznets (1955) and Solow (1956) were the dominant ideas about the link between growth, inequality and poverty in the late 1950s and 1970s. These models postulated that income inequalities worsen at the early stage of economic growth and are then succeeded by a gradual erosion of income inequalities until the per capita income of the developing world converges or equalizes with that of the developed world. Convergence is arrived at when the marginal returns to factors of production in both the developed and the developing world equalize. This corresponded with the pre-Washington consensus period, in which it was believed that poor countries would remain poor unless governments intervened through infrastructural development and capital-forming projects, since development is all about system transformation via modernization and industrialization (Filho, 2010).

The above propositions were proven to be a 'figment of imagination', or illusory, by the late 1970s and 1980s. This is so because the developing economies not only failed to converge with the developed ones, but their income inequalities also degenerated further. This paved the way for the emergence of monetarism and new classical economics, thereby displacing the conventional Keynesianism. The result was a paradigm shift in which development thinking tilted toward the 'trickle-down effect' of the gains of economic growth. This was viewed as the Washington Consensus (WC), whose characteristic features were neoliberal and which held the view that government intervention leads to inefficiencies and is hence responsible for poverty and inequalities in the developing countries. The WC recommended free market policies as a panacea to these problems (see Atif and Sardar, 2012). Also, in Filho (2010), the failure of WC-type economic policies led to the re-emergence of New Institutional Economics in the 1990s, as people from all walks of life pressed for the development of new policy frameworks.
This was in connection with the 'economic miracles' of the newly industrializing countries (NICs), like Japan and the four Asian tigers (South Korea, Taiwan, Singapore and Hong Kong) in the 1960s and 1970s, and Indonesia and others in the 1980s, which used protectionism and guided macroeconomic policies. As a consequence, the mainstream consensus split into the WC and the post-Washington Consensus (PWC) by the late 1990s and early 2000s, with the latter emerging victorious. The PWC advocated pro-poor growth policies. This was indisputable given the global devotion and doggedness to the Millennium Development Goals (MDGs). By the late 2000s, the consensus had become more sophisticated with the new concept of inclusive growth. Unlike pro-poor growth, which aims at improving the welfare of the poor, inclusive growth addresses all segments of the economy, covering the labour force, the poor, the middle class and the rich.

B. Conceptual Framework

Although inclusive growth is a newly introduced concept in the field of development economics, it has attracted multiple definitions from various economists. These in a way reflect the central position it occupies in economics and the policy-making environment. The following are some of its essential definitions.

Definitions of Inclusive Growth

Despite being newly introduced in development economics, 'inclusive growth' has attracted a lot of attention in terms of definitions and conceptualizations to suit various policy ambitions. To this end, this section highlights the most comprehensive and interesting definitions of the concept so as to map out its salient features.

According to Ianchovichina and Lundstrom (2009), inclusive growth is "a rapid pace of growth that is broad-based across sectors and inclusive of the labour force and results in substantial poverty reduction". They suggest that for poverty to be substantially reduced, rapid growth is inevitable, but for growth to be sustained over the long run it must be diversified across sectors, and a large share of the country's labour force should be incorporated in the process. Thus, their definition observes that there should be a synergy between macro and micro drivers of economic growth. Moreover, Ianchovichina and Gable (2012) (cited in Anand, Mishra, and Peiris, 2013a) define inclusive growth as "raising the pace of growth and enlarging the size of the economy by providing a level playing field for investment and increasing productive employment opportunities". To Hirway (2011), as cited in UNDP (2011), inclusive growth refers to "the growth process that reduces poverty faster, that is broad-based and labour intensive, reduces inequalities across regions and across different socioeconomic groups, opens up opportunities for the excluded and marginalized not only as beneficiaries but also as partners in the growth process". This definition is broader than the first one as it includes 'inequalities'. Ali and Son (2007), in Klasen (2010), see inclusive growth as "growth that not only creates economic opportunities but also ensures equal access to the opportunities created for all segments of the society, particularly for the poor".
The World Bank (2009), in AfDB (2012), views inclusive growth as one that "has to create an environment of equality in opportunity for all, by addressing employment creation, market consumption, production, and a platform for poor people to access good living conditions". Lastly, the AfDB (2012) refers to inclusive growth as "economic growth that results in a wider access to sustainable socioeconomic opportunities for a broader number of people, regions or countries while protecting the vulnerable, all being done in an environment of fairness, equal justice, and political plurality".

Ingredients of Inclusive Growth

From the above definitions, we can deduce the following features of inclusive growth:

 Economic growth should be beneficial to the generality of the society by reducing poverty substantially and ensuring full employment.
 The rate of economic growth must be high and reasonable enough to meet the costs of poverty reduction through empowerment, job creation and the provision of infrastructure and social amenities.
 Economic growth should be sustained over the long run so as to avoid social and economic crises in the society.
 For economic growth to be sustainable, it must be broad-based across all sectors. This means that the economy should be well diversified; all sectors must be reinvigorated and interlinked in order to tap the potential of each sector.
 As opposed to the pro-poor growth strategy, which concentrates on the welfare of the poor, this strategy ensures fair, equal and equitable opportunities for all segments of the society.

The Drivers of Inclusive Growth

 Sound macroeconomic policies that ensure stability, probity, sustained growth and full employment (Birdsall, 2007). A developing country should try to have a track record of credible fiscal management by maintaining very low public debt, thereby reducing interest rates. These policies should be redistributive in nature and lead to broad-based economic growth.
 Winters (2014) and the Governance and Social Development Resource Centre (2010) posited that infrastructure is a very essential driver of inclusive growth, as it reduces the costs of trade, and trade in turn raises incomes beyond subsistence level. Also, access to infrastructure significantly improves the well-being of the citizenry, as it eases the difficulties of carrying out economic activities. Thus, government should invest in infrastructure that has a direct bearing on business and trade as well as on disadvantaged groups. This includes transport, energy, communication, education and health facilities, dams, and so on.
 Social inclusion is a fundamental pillar of inclusive growth. There should be active and deliberate government intervention to protect the most vulnerable and deprived sections or disadvantaged groups through social security schemes like unemployment benefits, old age allowances, and subsidies on essential goods and services (Porter, 2010). This is critical in keeping aggregate demand high enough to encourage investment.
 Mendoza and Thelen (2008), Rauniyar and Kanbur (2010) and Chakrabarty (2009), cited in Porter (2010), point out that public-private partnerships also foster inclusive growth; by so doing, the provision of goods and services to rescue poor people from poverty could be enhanced.
 The agricultural sector is integral to inclusive growth as it targets the rural economy, which is home to a vast majority of the population and of the poor, particularly in the developing world.
Hence, investing in rural infrastructure and agricultural technologies might enhance inclusive growth by giving the rural population greater access to markets, basic needs, and employment and income opportunities (Governance and Social Development Resource Centre, 2010).
 Good governance and strong institutions are also important for inclusive growth (Arcenas, 2013). These guarantee credible, accountable, fair and transparent leadership. Such leadership has the potential to formulate and implement policies that can promote inclusive growth by providing essential services, a conducive investment atmosphere and quality infrastructure.

C. Survey of Empirical Studies

Since inclusive growth is a newly introduced concept in development economics, it is unsurprising that few empirical studies on it exist, owing to the lack of a standard unified measure of the concept, the unavailability of data on it, and the high level of sophistication required to construct a proxy for it. The few available studies are reviewed below.

Anand, Mishra, and Peiris (2013b) carried out a panel study of 143 countries across all continents of the world on the measurement and determinants of inclusive growth. They constructed a unified macro measure of inclusive growth by calibrating and combining PPP GDP per capita and income distribution. An unbalanced 5-year panel dataset was used and estimated by a fixed-effect panel model. Their results reveal that macroeconomic stability, human capital and structural change are the major determinants of inclusive growth. Arcenas (2013) explored the influence of the mining sector on inclusive growth using a binary logit model on a survey dataset. His findings include, among others, that the mining sector has a neutral impact on poverty as measured by household income disparity from the poverty threshold. As a result, the author concluded that it is unlikely that the mining sector can directly affect inclusive growth if the only channel is labour employment. Regional inequalities and rural poverty were found to constrain economic growth from becoming inclusive in Egypt by Ghanem (2014) using descriptive statistics. An inclusive system of planning and budgeting, better safety net systems, and the implementation of agricultural policies were recommended by the author. Anyanwu (2013) determined the correlates of poverty for inclusive growth using multivariate models (OLS, FGLS, IV-2SLS and IV-GMM) on 43 African countries for the period 1980-2011. The empirical results indicate that higher levels of income inequality, primary education alone, mineral rents, inflation, and higher levels of population aggravate poverty and are thus bad for poverty reduction and inclusive growth in the continent. The results further suggest that higher real per capita GDP, net ODA, and secondary education have a significant negative effect on poverty and are hence favourable for poverty reduction and inclusive growth in the continent. In the discourse on international migration and inclusive growth, Jennifer and Elina (2011) discovered that macroeconomic policies that address migration tend to affect inclusive growth positively. Meanwhile, a panel study by Kumah and Sandy (2013) focused on the role of economic institutions and policy in inclusive growth. The study made use of both fixed-effect and GMM models on data from selected advanced economies, low-income countries and sub-Saharan Africa for the period 1960-2010.
Their findings reveal that macroeconomic stability, via a reduction in inflation variability, enhances the per capita economic growth rate, which is their proxy for inclusive growth. Structural reforms and the quality of the business environment are also found to significantly affect inclusive growth. Dollar and Kraay (2001) analysed the effect of various policies on economic growth and poverty reduction. They found that trade openness, good rule of law, and fiscal discipline, alongside the avoidance of inflation, are the major drivers in that regard; hence their conclusion that growth basically benefits the disadvantaged group and that any effort towards poverty reduction should lead to high economic growth. The determinants of strong growth, employment, and poverty reduction (components of inclusive growth) were found to be greater savings, investment, more productive utilization of capital by better-trained workers, a reduction in the skill constraint, and moderation in labour costs. Also, higher labour productivity growth enhances labour intensity. These were the findings of Faulkner, Loewald and Makrelov (2013), using a dynamic computable general equilibrium model on South African data. The influence of globalization, or international trade, and of infrastructure on inclusive growth was investigated by Winters (2014) using descriptive statistics. The results reveal that both improve the well-being of the people and thus foster inclusive growth. From the literature reviewed, it is obvious that few studies on this topic exist in Africa in general and Nigeria in particular. Thus, this study will help fill the research gap on the continent.

D. Stylized Facts about Inclusive Growth in Nigeria
A cursory view of the distribution of essential services in Nigeria reveals that there have not been meaningful improvements over the last four decades, as indicated by Table A below. Moreover, poverty, inequality, and unemployment have all been on the upswing over the same period, as shown by Table B. However, this is happening at a time when the country is recording one of the fastest rates of economic growth in the world. This growth in GDP has been driven mainly by the oil sector, meaning that growth has not been broad-based across all sectors of the economy. Moreover, although the gains from growth have been high and sustained, they have not benefited the poor, who constitute the lion's share of the Nigerian population. It is a paradox that the country is rich while its people are deprived, unemployed, and extremely poor. Economic growth has been high and sustained (with some fluctuations) since the 1980s, as GDP almost doubled between 1980 and 2010. However, growth has not been broad-based across sectors, as the oil sector has dominated the economy. This is shown by the contribution of the oil sector as a share of GDP in Table C above: the sector is indicated to have contributed 20 to 40% of GDP, and it is known to generate few job opportunities. This might be the factor responsible for increasing economic growth on the one hand and rising unemployment, poverty, and inequality on the other. From the tables above, it can be inferred that economic growth has been high and sustained since the 1980s in Nigeria. Yet it has not been broad-based across sectors; it has not reduced poverty, unemployment, and inequality, but rather increased them.
Moreover, Nigerians are highly deprived in the provision of social services like health, water, sanitation, and electricity.

A. Theoretical framework
Macroeconomic stability promotes sustained economic growth, macroeconomic policies in turn enhance economic stability, and sustained economic growth brings about reductions in poverty and inequality (Kumah and Sandy, 2013; Rodrik, 2000; Dollar and Kraay, 2001; Pinkovskiy and Sala-i-Martin, 2010). Also, improvement in the growth of gross domestic product (GDP) enhances poverty reduction in India (Bhalla, 2011). Since inclusive growth implies, among other things, sustained economic growth accompanied by a substantial reduction in poverty, inequality, and unemployment among the poor, there is an established link between macroeconomic stability, sustained economic growth, and inclusive growth. We therefore specify an augmented Solow model in which the production function of the economy is given as

$Y_t = A_t K_t^{\alpha} (S_t L_t)^{\beta}$,

where $Y_t$ is output, $A_t$ factor productivity, $K_t$ capital, $S_t$ economic stability, and $L_t$ labour. Expressing in per capita terms by dividing through by $L_t$, and imposing constant returns so that $\alpha + \beta = 1$, gives

$y_t = A_t k_t^{\alpha} S_t^{\beta}$.

Taking logs and differencing with respect to time gives the growth rates of the variables as

$g_y = g_A + \alpha g_k + \beta g_S$,

where $g_y$, $g_A$, $g_k$, and $g_S$ are the growth rates of output, factor productivity, capital, and economic stability, respectively. This forms the basis of the empirical model specified below.

B. Empirical model
Given the established relationship between economic stability and economic growth in the literature, and to examine the long-run relationship and short-run dynamics between economic stability and inclusive growth, on the basis of the theoretical derivation of the VECM employed by Greene (2012) and Harris & Sollis (2003), we specify the regression equation as

$IG_t = \beta_0 + \beta_1 CGDP_t + \beta_2 GFC_t + \beta_3 INV_t + \beta_4 TOP_t + \beta_5 GDPV_t + \beta_6 FDI_t + \varepsilon_t$,

where CGDP = credit to GDP, GFC = gross fixed capital, INV = inflation variability, TOP = trade openness, GDPV = GDP volatility, and FDI = foreign direct investment. Hence, to estimate the empirical model and conduct the Johansen cointegration test, we specify the VECM in matrix form as

$\Delta X_t = \Pi X_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta X_{t-i} + \varepsilon_t$,

where $X_t$ is the vector of the variables above, $\Delta = 1 - L$ with $L$ the lag operator, $\Pi$ captures the long-run relationships, and $\Gamma_i$ the short-run dynamics. However, there is no standard measurement of inclusive growth; we therefore constructed an index of inclusive growth for Nigeria.

IV. Discussion of Results
To examine the data, a preliminary unit root analysis was conducted for all variables using the ADF test. The results, shown in Table 1 in the appendix, indicate that all variables are stationary at first difference. A lag exclusion (Wald) test retains a maximum of two lags, informing the inclusion of only two lags in the estimation of the VECM. We further conducted post-estimation tests of model stability (the AR roots graph test and the recursive residual test). The results (reported in Figures 1 and 2) show that the model is not stable; by implication, any shock will lead to temporary or permanent disequilibrium in the system. The speed of convergence to equilibrium is captured by the error correction term, and the cointegration coefficients of the short-run dynamics are reported in Table 5. Only GFC and INV were found to have a positive short-run impact on inclusive growth: a unit increase in GFC leads to a 5.12-unit increase in inclusive growth, and a unit increase in INV generates a 146.34-unit increase. By contrast, CGDP, GDPV, TOP, and FDI have negative short-run impacts: a unit increase in each leads to decreases of 87.43, 4.69, 54.82, and 14.12 units in inclusive growth, respectively. The long-run relationship between the explanatory variables and inclusive growth (IG) is reported in Table 6.
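For concreteness, the estimation pipeline just described (ADF unit-root tests, the Johansen cointegration test, then a VECM with two lagged differences) can be sketched in Python with statsmodels. The series below are synthetic stand-ins, since the paper's dataset and the construction of its inclusive growth index are not published; the variable names (IG, CGDP, GFC, INV) follow the model above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Synthetic annual stand-ins for 1960-2012 (53 observations); a shared
# stochastic trend makes the series cointegrated by construction.
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=53))
data = pd.DataFrame({
    "IG":   trend + rng.normal(scale=0.5, size=53),   # inclusive growth index
    "CGDP": trend + rng.normal(scale=0.5, size=53),   # credit to GDP
    "GFC":  np.cumsum(rng.normal(size=53)),           # gross fixed capital
    "INV":  np.cumsum(rng.normal(size=53)),           # inflation variability
})

# 1) ADF unit-root test on each level series (the paper finds all I(1)).
for col in data:
    stat, pval = adfuller(data[col])[:2]
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# 2) Johansen trace test with a constant and two lagged differences,
#    matching the lag length retained by the Wald lag-exclusion test.
joh = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace statistics:", np.round(joh.lr1, 2))
print("95% critical values:", np.round(joh.cvt[:, 1], 2))

# 3) VECM: alpha holds the error-correction (speed-of-adjustment) terms,
#    beta the long-run cointegrating vector interpreted in Table 6.
res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print("alpha:\n", res.alpha)
print("beta:\n", res.beta)
```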
In the long run, both GFC and INV have a negative impact on inclusive growth, as shown by their coefficients (-5.49 and -0.00167), and both variables are statistically significant. This implies that a unit increase in GFC brings about a 5.49-unit decrease in IG, while a unit increase in inflation variability (INV) leads to a 0.00167-unit decrease in IG. On the other hand, CGDP, GDPV, FDI, and TOP are positively related to IG: a unit increase in each brings about increases of 0.008, 0.038, 0.004, and 0.001 units in inclusive growth, respectively.

V. Conclusion and Recommendation
This study empirically examined the impact of macroeconomic stability on inclusive growth in Nigeria for the period 1960-2012. An index of inclusive growth was constructed and tested against the macroeconomic stability variables using the Johansen cointegration approach and a VECM. The tests established a long-run relationship between macroeconomic stability and inclusive growth, confirmed by the VECM results. The results likewise conform to the theoretical signs, as GDP volatility and inflation variability (both representing macroeconomic stability) impact negatively on inclusive growth. Other key findings are that trade openness, gross fixed capital, the credit-to-GDP ratio, and FDI also have significant negative impacts on inclusive growth. Furthermore, a descriptive analysis (Tables A, B, and C) of inclusive growth in Nigeria shows that growth has not been inclusive. Thus, the research questions have been answered empirically. It can therefore be inferred from our findings that macroeconomic stability is fundamental to any economic policy aimed at sustaining growth, equitable opportunities, full employment, and poverty reduction. Accordingly, the authors suggest the following recommendations. First, holistic, committed, and sincere efforts should be geared towards diversifying the economy, because the economy is highly volatile given the domination of oil, as shown in Table C.
MANAGING ONLINE TEACHING ACTIVITIES AT VOCATIONAL COLLEGES: CURRENT STATUS, POSITION, ROLE AND SOLUTION ORIENTATION

Online teaching activities in recent years have achieved certain results, and most teachers and administrators of colleges and intermediate schools have practical experience in online training. However, this activity is still not highly effective due to a lack of systematicity and synchronization. Specifically, technical means for teaching and learning in difficult and remote areas are inadequate, online teaching methods are unattractive, and classroom management methods are not really appropriate. Based on an overview of previous studies, 203 people were surveyed (n = 203), of whom 22 were management staff (n1 = 22) and 181 were lecturers at vocational colleges (n2 = 181). This study shows that improving online teaching activities at vocational education establishments requires appropriate management measures. It also clearly shows the role and importance of managing online teaching activities at vocational education institutions. At the same time, based on an assessment of the current situation and an analysis of the causes of problems, this study recommends management solutions to improve the quality of online teaching activities at vocational education institutions.

Introduction
In the era of the fourth industrial revolution (4.0), with the strong development of information and communication technology, and especially in the context of epidemics and natural disasters, online teaching is an inevitable trend in the education industry in general, and in vocational education institutions in Ho Chi Minh City in particular, because of advantages such as flexibility, ease of access, rich content, savings in cost and time, global reach, and the ability to meet the diverse learning needs of learners. Online teaching was born as a revolution in teaching and learning, becoming an inevitable educational trend of the era of education 4.0; this trend is "exploding" in many countries around the world, especially in the current context of globalization. In recent years, due to the impact of the Covid-19 pandemic, many schools have switched to online teaching, and initial online teaching activities have achieved certain results. Online teaching, from its passive birth in the context of the epidemic, has transformed into a new teaching method adapted to the information technology era. Thanks to online teaching, the information technology competency of teachers and students has improved, and the outstanding advantages of online teaching have been discovered. However, online teaching arose mainly to handle situations where students could not attend school because of the pandemic. Its birth in such a context left online teaching with inadequacies from the very process of its formation: it has been carried out without systematicity or synchronization and is not highly effective (Institute for Research Education Development Cooperation, 2022, p.74). The cause of that situation is mainly managerial. This creates the need to research, summarize, and evaluate the current situation, find the causes of advantages and limitations, draw practical lessons, and adjust management methods in online teaching (Thuan & LongAn, 2022).
In the future, with the development of information technology, online teaching will become an official teaching channel existing alongside face-to-face teaching, and it will develop in new and more diverse ways. Many subjects and much teaching content will be digitized and included in schools' regular online teaching (Institute for Research and Development of Education Cooperation, 2022, p.24). This raises the need to research and find management solutions that anticipate the development of online teaching to meet the requirements of the fourth industrial revolution. Improving the quality of online teaching requires, first of all, an online teaching management method suited to practice (Duchiep et al., 2022). To address the issues raised, this study focuses on the following specific questions: What are the position and role of online teaching activities, and of the management of online teaching activities, at vocational education institutions in the context of educational innovation? What are the causes of shortcomings and weaknesses in the management of online teaching activities at vocational education institutions in this context? What measures are needed to improve the quality of online teaching management at vocational education institutions in the current context of educational innovation?

Context of educational innovation
Vietnamese education (including training, hereinafter referred to as education) has achieved many results, making an important contribution to the victory of building and defending the Fatherland. However, in the process of development, education has revealed weaknesses and inadequacies, including issues that cause lasting social frustration, and it has not met the requirements of industrialization, modernization, internationalization, and integration. Educational innovations in recent times have been inconsistent and patchy; many policies, mechanisms, and solutions in education that were once effective are no longer suitable for the country's new development stage and need to be adjusted and supplemented. The work of building and protecting the Fatherland in the new situation, especially the requirement to deepen the growth model and restructure the economy towards quality, efficiency, and high competitiveness, requires that education meet the diverse learning needs of the people, quickly contributing to creating a high-quality workforce. If education and training are not fundamentally and comprehensively innovated, human resources will become a factor hindering the country's development. Our country is in a process of ever-deepening international integration; the rapid development of science and technology and of educational science, and fierce competition in many fields between countries, require education to innovate. The essence of competition between countries today is competition in human resources, science, and technology. The general trend as the world enters the 21st century is to carry out strong innovation or reform of education. Faced with this reality, the Resolution of the XIth National Party Congress (2011) determined to "Fundamentally and comprehensively innovate education in the direction of standardization, modernization, socialization, democratization, and international integration" and to "Rapidly develop human resources, especially high-quality human resources, focusing on fundamentally and comprehensively innovating the national education system". At the same time, Resolution No. 29-NQ/TW (2013), "On fundamental and comprehensive innovation of education and training, meeting the requirements of industrialization and modernization in the context of a socialist-oriented market economy and international integration", was approved by the 8th Central Conference (term XI). Fundamental and comprehensive reform of education is an extremely important task. The Central Government issues a Resolution to unify awareness and action; to promote the intelligence of the entire Party and people; and to mobilize resources, with the coordination of many agencies, departments, and social organizations, for the cause of education. Fundamental and comprehensive innovation in education and training means innovation in major, core, and urgent issues, from perspectives and guiding ideas to goals, content, methods, mechanisms, policies, and the conditions that ensure implementation; innovation from the Party's leadership and the State's management to the governance of educational and training institutions, with the participation of families, the community, society, and the learners themselves; and innovation at all levels and in all majors. The aim is to create strong changes in the quality and effectiveness of education, better meeting the requirements of building and protecting the Fatherland and the learning needs of the people.

Concept of online learning
Online learning (e-learning) is a form of learning that has appeared with the development of information technology, in which learners participate in virtual classes on the Internet instead of attending traditional physical classes. According to Van, D. D.
(2018), the impact of the 4.0 Industrial Revolution on online teaching, together with the application of IoT technology in developing digital teaching and of virtual reality technology in teaching, will almost completely change the form of teaching in universities. In teaching activities, the role of lecturers will gradually shift from imparting knowledge to guiding students to discover new knowledge. At the same time, teaching management must also change in an open and flexible direction to meet the diverse learning needs of learners. According to Hong, B. V. (2019), Industrial Revolution 4.0 is based on the integration of a series of technologies, such as artificial intelligence, the Internet of Things (IoT), big data, and cloud computing, that are growing very quickly and have a strong impact on all aspects of socio-economic life, including the field of online training.

Based on practical surveys, and inheriting from colleagues' research, online teaching activities currently take two forms:

Real-time online learning: Teachers and learners interact in real time through chat applications and online conferencing. This form is similar to the traditional teaching method, but participants can be flexible about their study location (Duc, 2018; Tran, 2019; Institute for Educational Development and Cooperation Research, 2022; Van, 2022a).

Study of available courses: Learners participate in pre-designed courses via video. The instructor teaches the lecture content in the video, usually followed by exercises to test knowledge. Depending on the course, learners can take an exam to obtain a certificate (Hong, 2018; Tran, 2019; Institute for Educational Development and Cooperation Research, 2022). With this form of learning, learners simply log in to their account to study anytime, anywhere, and can easily review lectures and redo review exercises many times.

Advantages:
Saving time and study costs: Learners and teachers do not waste time traveling from home to class, and those who study or teach far from home can also save on rent and food costs. With online courses on platforms, learners can take many courses at the same time, and even earn several degrees and certificates in a short time, optimizing their learning time and saving money compared to traditional methods that take months.

Promoting proactive learning: Learners proactively choose online courses that suit their personal development needs. They can learn at a pace that suits them, without being bound by fixed study programs that last for many months, as in traditional classrooms.

Expanding learning opportunities: Online learning lets learners take classes in other regions and countries, so they can easily access the knowledge they want without having to travel far.

Keeping learning uninterrupted by external factors: Factors such as epidemics and natural disasters can make in-person classes impossible. In such situations, online learning becomes an effective solution, allowing learning and teaching to take place normally and avoiding delays to learners' study and graduation.

Disadvantages:
Dependence on the Internet: Online study requires an Internet connection, which is an obstacle for learners in areas without Internet access. Moreover, the connection is sometimes unstable, causing images and sounds to be interrupted and making it difficult for learners to follow the content.

Reduced direct social interaction: Learners and teachers do not meet and interact directly, which can lead to boredom. Over time, this can affect dynamism and communication ability, especially for kindergarten and elementary school students.
Requiring high discipline from learners: When learning online, teachers cannot pay as close attention to learners as in class, so learners must actively concentrate and study seriously. This is especially difficult for elementary school students, who are active and easily distracted.

Managing online teaching
In 2006, the American e-learning research council (the Sloan Consortium) proposed a classification of classes, shown in Table 1. According to the general assessment of the Sloan Consortium (2006), classes that apply Internet technology at levels C and D are considered e-learning classes. Massive open online courses (MOOCs) today are often designed on open-source code, allowing changes in component configuration and working interface. The content of online courses is very diverse and often not framed by the program of any particular unit or training facility; it closely follows and meets the diverse learning needs of learners and provides practical skills and research or career capabilities valued in society.

In reality, vocational colleges also provide formal training programs through online teaching, granting certificates and diplomas at the end of the course. Fully online teaching is operated through a system of courses in four main formats: independent courses, for non-formal teaching, which learners choose and register for according to their needs, abilities, interests, and personal conditions; synchronous courses, in which learning activities take place in an online environment at the same time, as scheduled in advance; asynchronous courses, in which learning activities take place at different times and the results of teaching activities are stored and shared, with assessment tools integrated into the course; and blended courses, which combine synchronous and asynchronous teaching.

The common feature of fully online teaching is that teaching activities take place in a virtual environment, with simulation and reproduction activities that increase students' access to information, knowledge, and learning conditions, while also creating a huge learning space and shared data resources for society.

Online teaching management is the process of purposeful, planned impact of the management entity (at different levels, from the administrators of vocational colleges (Principal, Vice Principal) to Faculties and Training Centers) on the subjects of management (lecturers, students, managers, and training staff) to carry out training activities through the application of electronic equipment, software, and telecommunications networks. According to Tran (2019), Chung (2018), the Institute for Educational Cooperation and Development Research (2022), and Vuhong (2022), management through the application of management functions and means helps the training process operate effectively, improving the quality of teaching and learning in education and training. The factors affecting online teaching management at such institutions are the organization's awareness of online teaching; the capacity and qualifications of the online teaching management team; the application of information technology in training management; and the organizational structure of the online teaching unit.

Theoretical research methods
This method is used to analyze, synthesize, and systematize scientific information collected from documents related to the research issues, and from perspectives and theories on ensuring the quality of education and training in the context of educational innovation, in order to draw conclusions related to the research problem. It explores theoretical issues associated with online teaching activities and their management in vocational colleges in the context of educational innovation, so as to build theory on improving the quality of educational management and to collect scientific information about the history of online teaching activities and their management in vocational colleges.

Investigation and survey method
Purpose of investigation and survey: The author uses the questionnaire survey method to collect data on the positions, roles, necessity, and limitations of management measures for online teaching activities at vocational education institutions in the context of educational innovation, and at the same time to identify management measures to improve the quality of online teaching activities at such institutions in the near future.
Content of investigation and survey: Information was collected about the current status of online teaching activities at vocational education institutions, and their management, in the context of educational innovation. Questionnaires were also used to investigate the necessity and feasibility of measures to improve educational quality in this context.
Subjects of research and survey: This study involved 203 people (n = 203) working in the state management of education, teaching, and management at a number of vocational education institutions (vocational colleges): 22 managers (08 Principals and 14 Vice Principals; n1 = 22) and 181 lecturers (teachers at vocational colleges; n2 = 181). In total, 207 questionnaires were distributed; after counting, 4 were invalid because they were not filled in completely, so 203 valid questionnaires were used to process the results.
Designing the questionnaire: The questionnaire concerns online teaching activities at vocational education institutions and their management in the context of educational innovation. Questions about gender, age, education level, seniority, and work position were added to the questionnaire (Table 1); for example, 37 respondents (18.22%) had 1-5 years of work seniority.
Each question is rated on a five-level scale with conventional score bands (Table 2). The average score, X̄ = Σ(Xi·Ki)/n (where Xi is the score at level i, Ki is the number of participants rating at level Xi, and n is the number of participants), reflects the average level of the phenomenon and allows comparison of two or more populations of the same type that are not of the same scale.
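As an illustration of the scoring rule just described, the short sketch below computes X̄ = Σ(Xi·Ki)/n for one questionnaire item and maps it onto the conventional bands of Table 2. The response counts are invented for illustration, since per-item tallies are not reproduced in the text.

```python
# Hypothetical tally for one item: K_i respondents chose each level X_i.
counts = {1: 2, 2: 6, 3: 55, 4: 98, 5: 42}   # illustrative numbers only

n = sum(counts.values())
x_bar = sum(x * k for x, k in counts.items()) / n   # X-bar = sum(Xi*Ki)/n

# Conventional interpretation bands (upper bounds) from Table 2.
bands = [(1.80, "not important"), (2.60, "less important"),
         (3.40, "quite important"), (4.20, "important"),
         (5.00, "very important")]
label = next(lbl for hi, lbl in bands if x_bar <= hi)
print(f"n = {n}, X-bar = {x_bar:.2f} -> '{label}'")
```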
Position and role of online teaching activities: To determine the position and role of online teaching activities in vocational colleges, the author surveyed 203 people (22 managers and 181 teachers). Of these, 98.02% perceived that online teaching activities in vocational colleges hold an "important" or "very important" position, with 55.17% rating them "very important" (Table 2). This result reflects the correct awareness of managers and teachers of online teaching activities in vocational colleges. The general trend is very clear, as the proportions in the "important" and "very important" columns are always very high. For example, in Figure 1, comparing by job position, the "administrators of vocational colleges" group has the highest awareness, with 54.16% rating the activities "very important". Comparing by seniority, everyone in the group with the longest working experience ("over 20 years") rated them "quite important" or higher.

Position and role of managing online teaching activities: The same survey of 203 people (22 managers and 181 teachers) shows that 90.64% of managers and teachers perceived that the management of online teaching activities in vocational colleges holds an "important" or "very important" position, with 36.45% rating it "very important" (Table 3). This result likewise reflects the correct awareness of managers and teachers. The general trend is very clear, as the proportions in the "important" and "very important" columns are always very high, with "important" accounting for the most. For example, in Figure 3, comparing by job position, the groups "Head of Department, Vice Head of Department; Dean of Faculty, Vice Dean of Faculty" and "administrators of vocational colleges" have the highest awareness, with 18.91% and 18.75%, respectively, rating it "very important". Comparing by seniority, all managers and teachers with seniority of "10 to less than 20" years and "over 20 years" rated it "quite important" or above; in the "over 20 years" group, all rated it "important" or "very important".

Current status of management of online teaching activities in vocational colleges
Vocational colleges have not expanded online teaching for the following main reasons: 1) the cost of designing electronic lectures is very high, especially for practical modules that require a lot of machinery and equipment; 2) students have difficulty absorbing knowledge, especially in practical modules, and have difficulty practicing practical skills and internships; and 3) it is difficult for lecturers to accurately assess students' learning outcomes. Currently, the majority of these vocational colleges apply the online teaching model described in Figure 1. The online learning schedule is announced to lecturers and students at the beginning of each semester. Lecturers come to class at school according to the schedule to present lectures; students join online classes anywhere with Internet access to follow the lecture, ask questions, and receive answers immediately. Instructors can provide additional materials and assign assignments for students to submit during the online lesson or afterwards, within a specified time. The lecture is then recorded and posted on the website for students to review, or for students who did not participate in the online lesson. At the end of the semester, all qualified students gather to take the exam (offline) at the school or at an affiliated training location. To evaluate the current situation of managing online teaching activities in vocational colleges, this article studies online teaching management at a number of vocational colleges in Ho Chi Minh City. The survey covered three groups of subjects: administrators of vocational colleges (Principal, Vice Principal), lecturers, and support staff of the online teaching program. The content of the questionnaires was built based on Tran (2019), the Institute for Educational Cooperation and Development Research (2022), Thuan & AnLong DangNguyen (2022 & 2023), and others, for managers and lecturers of online teaching programs, to learn about the current situation of schools' online teaching management on a five-level scale — (1) Very good, (2) Good, (3) Rather, (4) Medium, (5) Weak — with four contents: (ND.1) Develop teaching plans periodically and fully; (ND.2) Organize and implement teaching according to plan; (ND.3) Direct teaching activities to ensure quality and effectiveness; and (ND.4) Monitor the teaching process and evaluate the effectiveness of teaching activities.

The results of testing the Cronbach's Alpha scale show that all seven independent variables have high reliability (Table 4). After evaluating the reliability of the scale using the Cronbach's Alpha coefficient, the 23 variables of the scale of factors affecting the management of online teaching activities at vocational colleges were included in factor analysis. Through EFA, we identify 4 factors affecting the management of online teaching activities in vocational colleges.
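The reliability check reported above can be reproduced in outline as follows. This is a minimal sketch of Cronbach's Alpha on simulated 5-point responses (203 respondents, one 4-item subscale); the study's actual item-level data are not published, so the numbers here are illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Simulated Likert responses driven by one latent factor, so the items
# correlate and alpha comes out high, as the paper reports for its scales.
rng = np.random.default_rng(1)
latent = rng.normal(size=(203, 1))
noise = rng.normal(scale=0.7, size=(203, 4))
scores = np.clip(np.round(3 + latent + noise), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```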
The principle of the questionnaire investigation is that each subject answers the survey independently. Before answering, subjects were given detailed instructions so that they clearly understood the purpose of, and the requirements for responding to, the contents of the questionnaire. To supplement the qualitative information obtained in the wide-scope investigation, we also conducted in-depth interviews with 16 managers, lecturers, and staff participating in online teaching at the institutions selected for the study. The interviews covered the current status of online teaching, the current status of online teaching management, and the factors affecting online teaching management. Depending on the subject, the interview addressed these issues from different angles, consistent with the role of the management subject in online teaching management, and interviews were conducted in settings chosen to obtain accurate information.

The survey results presented in Table 5 show that all average scores (X̄) lie in the range 3.41 ≤ X̄ ≤ 4.20, reaching the "important" level; no item was rated "not important", "less important", or "quite important" by the survey subjects. ND.1, "Teaching plans are developed periodically and fully", is rated highest, with X̄ = 3.70. This is because vocational colleges periodically make teaching plans to assign and arrange lecturers. For each class, schools have planned teaching activities associated with the learning materials and resources used in the teaching process. Students' learning plans are developed by schools based on the training program and the regulations of the Ministry of Education and Training and the Ministry of Labor, War Invalids and Social Affairs, and are made public to lecturers and students very early. Some schools that focus on student support services have advised students to register for study plans suited to their abilities and time conditions, and create study plans for each student. Most vocational colleges consider online teaching to play a supporting role in regular training, so planning is combined with traditional learning; online teaching activities are identified as supporting, and there are no specific regulations for them. In general, teaching planning at the schools has responded to the characteristics of online teaching and is suitable for students' distance learning. The remaining contents are all assessed at a good level of implementation (average scores from X̄ = 3.61 to X̄ = 3.66, within the range 3.41 ≤ X̄ ≤ 4.20). Overall, the organization and implementation of teaching follows the plan; however, because online teaching instructors have to multitask, some modules are carried out later than prescribed. Directing and supervising online teaching activities to ensure quality and effectiveness has not been carried out in as methodical and rigorous a manner as in traditional teaching. This is understandable because, up to now, the assessment of the quality of this type of training has had no official document from the education management agency and is still under discussion.

Causes of the existing problems
Survey results and previous research have shown many reasons for the limitations and inadequacies in managing online teaching activities, specifically as follows: Firstly, the quality of online teaching and learning is not guaranteed, due to many objective factors such as unstable transmission lines, while some teachers, especially older ones, have difficulty applying information technology to teaching. The quality of online teaching is also partly affected because the equipment used for teaching is limited in both quantity and quality (Thanh, 2019). The management of students during the learning process is not very effective. Although the Government has launched the "Wave and Computer for Children" program, it has not met real needs (Matei & Iwinska, 2016; Thang, 2019; Vu, 2022).
Second, prolonged online teaching and learning has caused health problems for teachers and learners, who must sit in contact with electronic devices and remain inactive for long periods. Anxiety arises when interaction with teachers and friends is reduced. While many parents do not interact properly with their children during online learning, teachers come under psychological pressure in every hour they teach: the eyes, audience, and listeners of a teacher during an online lesson are now not only the students but also the students' parents, public opinion, and social networks (Hong, 2022).

Third, the role of the teacher has not been clearly defined in online teaching, and there is no set of rules for teachers to follow. Online teaching and learning was not something done regularly; therefore, when the COVID-19 epidemic broke out, teachers were extremely confused about implementation techniques. There are many reasons, but the main one is that many teachers' ability to apply information technology to teaching is limited and their use of online learning software is not proficient, leading to ineffective implementation. Furthermore, most teachers are used to being face-to-face with their students; standing in an online space to deliver lessons, many teachers are confused or lack confidence.

Fourth, although students are quite active in applying information technology to exploit lecturers' lectures, in reality the circumstances and material conditions of a student's family greatly influence online learning. Not every family can provide Internet access, computers, and smartphones for their children to study, especially in remote and extremely difficult areas. Furthermore, because of the characteristics of online learning, students' learning habits and awareness are not managed directly, which affects their learning results. In teaching and learning, the interaction between teachers and students is a very important factor. While in classroom lectures interaction is promoted effectively, in online learning teachers mainly deliver one-way lectures; students receive them online through various means, and interaction must go through a system of questions and exercises afterwards, not directly. This affects the quality of the lecture.

Solutions for managing online teaching activities at vocational colleges
From the above analysis, to enhance the management of the online teaching process and support learners so as to ensure training quality, institutions need to implement the following solutions:

(i) Promulgate regulations on online course design and on the process of organizing teaching activities and online teaching support activities; develop a detailed training program with teaching content, teaching methods, and teaching activities to deploy in online classrooms, based on the detailed course outline (Chung, 2018). The team participating in this work includes subject lecturers, training plan developers, and training support staff: lecturers are responsible for expertise; staff develop training plans and check that the designed content complies with the course outline requirements; and training support staff provide technical help to develop the designs and post them to online classrooms for students to follow.

(ii) Organize training in online teaching skills, online teaching management, information technology application, and pedagogical methods for lecturers, and in online learning skills for students; and develop a reasonable remuneration regime for the team participating in the online teaching program and for lecturers working in the online environment. Planning training to improve e-lesson editing capacity and online teaching skills for online teaching instructors is one of the fundamental solutions for improving teaching quality. It is necessary to actively improve lecturers' ability to apply ICT (Han, 2023) and their skills in proficiently using online learning management systems and information technology facilities in the online teaching environment. Schools must also strengthen the development of facilities that support online teaching, such as studios, learning management software systems, virtual classroom systems, and forum systems; these tools need to be continuously upgraded and extended with new functions and utilities to meet the needs of lecturers and students.

(iii) Build a mechanism to promote and control interactive activities between lecturers and students, and among students, to improve teaching and learning effectiveness. It is necessary to promote the tools, utilities, and software of the
online learning system to deploy classes, such as discussion forums, virtual classrooms, and chat applications. Depending on the tools and the communication environment, teaching activities are carried out through discussion of situations and projects. The school also needs to develop a process for organizing teaching activities and a monitoring and inspection mechanism for lecturers, students, and program support staff. Monitoring classroom activities must be done regularly to ensure that interactive activities are maintained and promoted. Student discussions and questions must be monitored so that lecturers can respond and promptly detect discussion content that violates the rules. Teaching activities can also be monitored through the learning management system via activities such as participating in discussions, asking questions, and doing multiple-choice exercises.

(iv) Strengthen supervision of management activities and the evaluation of lecturers' teaching outcomes and students' learning outcomes. Inspection and evaluation of the teaching process need to be carried out for each subject and each study period (Van, 2022a; Han, 2023). It is necessary to specify evaluation criteria for completed classes in terms of lecturers, teaching activities, learning, and interaction, as a basis for evaluation and conclusions. Evaluation results should be used to adjust course design, training programs, and related activities.

(v) Develop a plan to use supporting means, such as electronic boards, drawing software, graphs, and software for creating questions and for online testing, at vocational colleges. Reality shows that online teaching can never fully replace face-to-face teaching, because each form of education has different characteristics, strengths, and weaknesses (Do, 2018; Van, 2022b); the two forms cannot replace or negate each other. While education lacks a national strategy for online teaching, we need to organize online teaching flexibly, always combining it closely with face-to-face teaching without being too rigid or directive, and teaching online in step with schools and localities across the country.

Planning: Provide instructions on how to organize lessons and study hours so that they are neither too stressful nor too formal or superficial. In addition, innovate testing, evaluation, and examination methods to suit teaching and learning conditions during the epidemic, ensuring fairness and students' rights. Thoroughly apply measures to manage lecturers during the teaching process. Require lecturers to increase interaction with students through multiple channels, in order to grasp students' learning awareness, the quality of lesson learning, and the difficulties they encounter with the lectures. Regularly remind students, send online lecture schedules to central and local television stations for students to participate in learning, and then use questions and exercises to evaluate learning results via television and images.

Conclusion
The direct and profound impact of the 4.0 industrial revolution has rapidly changed the learning needs of learners, especially the need for online learning.
Therefore, online teaching management is the integration of core management capacity, technical expertise, teaching capacity, technology application capacity, and innovative teaching methods. Based on an analysis of the impact of the 4.0 industrial revolution on the online teaching system, and of the current situation of online teaching management in vocational colleges, the article has proposed solutions for developing online teaching management to meet the requirements of innovation and of improving teaching quality in the current context of educational innovation. Online teaching is not just a temporary solution; it has gradually demonstrated the advantages of a flexible teaching method with many strengths in the modern social context and for students as learners. However, to ensure stability and promote the effectiveness of this teaching method, close coordination is needed between schools, educational management levels, lecturers, and students, to develop and encourage the positivity, initiative, and responsibility of all parties in building an effective online teaching mechanism of good quality. To achieve this, the role of management is extremely important.
Figure 1 - Perception of the importance of online teaching activities by work position (%).
Figure 4 - Perception of the importance of managing online teaching activities according to seniority (%).
Table 1 - Classification of survey objects.
Table 2 - Table of scale conventions (average score bands: 1.00-1.80; 1.81-2.60; 2.61-3.40; 3.41-4.20; 4.21-5.00).
Table 2 - Perception of the position and role of online teaching activities (source: author's survey, n = 203).
Table 3 - Perception of the position and role of managing online teaching activities.
Table 4 - Testing the Cronbach's Alpha scale (source: author's survey, n = 203).
Table 5 - Survey results on the current status of online teaching management.
Human-centric predictive model of task difficulty for human-in-the-loop control tasks

Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R² = 0.927), followed by the model using only kinematic metrics (R² = 0.921). Both models were better predictors of task difficulty than the movement time model (R² = 0.847) derived from Fitts' law, a well-studied difficulty model for human psychomotor control.

Introduction
Human-in-the-loop robot-assisted systems have substantial freedom in design, enabling operators to interact with complex physical systems in a variety of ways. In the case of teleoperated robotic systems, users can control robot end-effectors through position-based teleoperation [1], or use asymmetric teleoperation methods, where the master and slave systems have different degrees of mobility (i.e., user inputs, system inputs, and system outputs have different degrees of freedom) [2]. A classic example of an asymmetric teleoperation control scheme is the control of multiple agents (e.g., unmanned aerial vehicles) [3,4] with a single user input. Another example is controlling nonholonomic systems, such as wheelchairs [5] or steerable needles [6], through desired end-effector positions in Cartesian space rather than through joint-space control of system inputs (i.e., velocity and steering angle). Finally, techniques are also now available to allow both human and robot some level of autonomy while sharing control of the overall task [7,8]. With all this flexibility in the design of human-in-the-loop control systems, an important research question arises: how should one design these interfaces to be intuitive and easy to use, and how can the assessment of this control effort be quantified for complex tasks? Substantial efforts have been made to quantify, or model, difficulty in general human-machine interaction. The most well-known theory among these is Fitts' law [9], a predictive model of human motor systems. Fitts' law is a widely accepted theory that describes human psychomotor behavior in a simple bivariate pointing task. In a Fitts' task, participants are required to move as fast as possible between two targets of a certain width, W, separated by a distance, D.
The task index of difficulty, ID, can be mathematically quantified by a log-linear relationship between the distance, D, and the target width, W. The units of this index of difficulty are called "bits". This phenomenon is formalized as MT = a + b × ID, where MT is the movement time and the difficulty level is quantified by the index of difficulty, determined by ID = log₂(2D/W). Fitts' difficulty model has been used primarily in the evaluation of human-computer interfaces and in ergonomic applications [10-12]. The Fitts' law model has also been shown to apply to complex tool manipulation tasks, such as those required in surgery. Lin et al. explored the validity of Fitts' law for laparoscopic instrument manipulation during laparoscopic surgery [13]. Chien et al. investigated the relationship between speed and accuracy in robot-assisted surgery and suggested its role in association with surgical skill [14]. However, conventional Fitts' law may not be sufficient to evaluate the difficulty of more complex human-in-the-loop control tasks, particularly in teleoperated or shared-control scenarios where the user's task objectives are not known to the robotic system. Fitts' law simply quantifies the relationship between the difficulty of the task and the movement time required to finish it. Recently, a few studies have begun to reveal correlations between the difficulty index of a task and changes in human user motion dynamics, in both discrete and cyclical movements [15,16]. Still, there is an opportunity to further explore how changes in task difficulty affect the user response globally, including cognitive, physiological, and kinematic changes. These insights could lead to improved design of human-in-the-loop control algorithms for complex tasks, such as the teleoperation of robotic systems. As a step toward this goal, this paper presents a data-driven approach to assess the difficulty of a typical reaching task, based on objective, task-independent measures of human cognition, physical effort, and motion characteristics. We hypothesize that Fitts' law will be preserved in robot-assisted control interfaces, meaning that we expect movement time with a haptic device to increase with increasing difficulty levels. Furthermore, we propose that the difficulty of a reaching task can be objectively quantified from multiple measures of the user's sensorimotor response, including user physiology and motion kinematic metrics found in both the user (limb motions) and task (tool motions) workspaces. This paper is organized as follows: First, we review current difficulty assessment tools and existing techniques for human response recognition. Second, we describe our experimental protocol, including the experimental task design, signal acquisition, and sensor benchmarking. Next, we introduce the methods used to extract the modeling features and the modeling technique. We then present the statistical analysis of the proposed features and the performance of our models in predicting task difficulty, followed by a detailed explanation of the results and a discussion of limitations, potential uses, and future work. Finally, we conclude the study.

Current methods to assess task difficulty
In general, the difficulty of a human-in-the-loop control task, such as teleoperation, is difficult to define.
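To make the MT = a + b × ID relationship concrete, the sketch below fits the two regression constants by least squares. The (ID, MT) pairs are invented for illustration; they are not data from this study.

```python
import numpy as np

# Illustrative per-condition means: index of difficulty (bits) and
# movement time (seconds). Not data from this study.
ID = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
MT = np.array([0.45, 0.61, 0.83, 0.98, 1.20, 1.41])

b, a = np.polyfit(ID, MT, deg=1)        # fit MT = a + b * ID
MT_hat = a + b * ID
r2 = 1 - np.sum((MT - MT_hat) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, R^2 = {r2:.3f}")
print(f"throughput 1/b = {1 / b:.1f} bits/s")  # common performance index
```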
It is common practice to evaluate teleoperation difficulty and performance via quantitative tool-based metrics and performance measures, such as completion time and manipulation accuracy [17,18]. These metrics are often coupled with checklists or user-response surveys, such as the NASA Task Load Index (NASA-TLX) [19], where operators report the ease of use and acceptance of the system by rating it on predefined scales. Such ratings have been widely used to assess task workload and perceived performance in a variety of domains, including robotic surgery [20-23] and teleoperation [24]. However, these evaluations are limited because: (1) task-specific performance measures do not always correspond to perceived user acceptance and performance [6,25-28]; and (2) user ratings can only be measured post hoc and do not capture the real-time response of the human user.

Objective human response recognition
Over the years, several techniques have been proposed for objectively measuring a human user's physiological response in terms of affective state, motivation, environment awareness, and mental and physical workload, among others [29-34]. These studies typically leverage sensors such as electroencephalography (EEG), surface electromyography (EMG), galvanic skin response (GSR), and heart response (HR). Electroencephalography (EEG) is an objective measure of neurophysiological changes related to the electrical activity of the brain neocortex. This technique enables researchers to quantitatively study human emotion, perception, cognition, and technical skills [22,35-37]. Surface electromyography (EMG) measures the electrical activity generated by active muscles. Analysis of the EMG response is substantially helpful in revealing underlying motor patterns and physical effort, and in predicting user motion intent [38,39]. Galvanic skin response (GSR), also known as skin conductance response, is a quantitative measure of fluctuations in electrical conductance on the skin surface over a period of time. It is regarded as a reliable indicator of an individual's cognitive load, attention, and emotional state [40]. Skin conductance increases with moisture on the skin when the individual is under stress, and decreases otherwise. Measurements of heart rate and its variability (HRV) include electrocardiography (EKG) and photoplethysmography (PPG). Heart activity has been shown to capture dynamic workload, emotion, and cumulative stress [41]. In addition to physiological responses, metrics derived from human movement sensors have also been shown to capture important information about the ability of a human user to perform a motor task [42,43]. Orientation-based motion metrics were able to discriminate expert from novice surgeons in robotic and open needle driving [44]. Estrada et al. developed smoothness measures for tool motion data and reported that motion kinematics in endovascular tasks showed significant correlations with participants' surgical skills [45]. Kinematic profiles of user movement have also been demonstrated to be an objective tool for assessing technical skills and performance in surgical tasks [46,47]. Nisky et al. explored the effects of teleoperation and expertise on the kinematics of user joint movement [48-50].

Experimental methods
IRB approval (UTD IRB #14-57) was obtained through the University of Texas at Dallas IRB office.
In this paper, we aim to develop predictive models of task difficulty that do not depend on task-based metrics, such as movement time and targeting error. Rather, metrics derived from the user's physiological response and kinematic movements could prove to be better predictors of how difficult the task is.

Experiment protocol

A bivariate target-reaching task was developed in a simulated virtual environment. Participants were instructed to perform the reaching task by using their dominant hand to manipulate a virtual tool to reach predefined target locations. To control the position and orientation of the tool in the virtual environment, a 6-degree-of-freedom haptic device, the Phantom Omni (Geomagic Touch, 3D Systems, SC, USA), was used. This device provides 3-degree-of-freedom force feedback and 6-degree-of-freedom sensing. Custom C++ code was developed to randomly generate targets within the virtual experiment, rendered by the CHAI3D haptic library. To constrain movement to the 2D task workspace, a virtual haptic wall was created, with the haptic gain (k) set to k = 150. In order to create different difficulty conditions in the experiment, the distances between the starting and final target were changed according to Fitts' law (Fig 1). The target width, W, was set to 5 mm, and the target distances varied: 10 mm, 20 mm, 40 mm, 80 mm, 180 mm, and 320 mm. The combination of the target width and the different distances resulted in reaching tasks of 6 different difficulty conditions, with the index of difficulty (ID) ranging from 2.0 to 7.0 bits. The experiment consisted of a training session with 6 unrecorded exercises and a formal testing session. In the training session, subjects practiced the reaching task in our simulation and learned how to move the stylus and advance the experiment. To initialize each task, participants were asked to hold the stylus in the center of the workspace, resulting in a neutral position for the shoulder: 90° elbow flexion, 90° forearm pronation, and a neutral wrist. Then, they moved to the starting target. Data recording started when the user reached the initial starting point. After the training exercises, participants performed the reaching tasks in five randomized, blocked repetitions of the six difficulty conditions, resulting in a total of 30 trials per subject. To avoid bias due to potential learning effects, all task conditions and target locations were chosen randomly. In each block, participants were asked to reach each of the six targets in 5 repeated cycles. This was done to ensure sufficient data collection time for the targets with the lowest index of difficulty. The subjects were explicitly asked to manipulate the haptic device freely and in a natural way, and were informed that the experiment had no time or performance requirements.

Participants

A total of fourteen subjects (11 males and 3 females; mean age = 21 years, SD = 6 years) participated in this study (recruitment date: January 10, 2017). All subjects provided informed written consent in accordance with The University of Texas at Dallas Institutional Review Board (UTD IRB #14-57). The individual in this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these case details. Participants had no previously reported musculoskeletal injuries, diseases, or neurological disorders.
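Returning to the task rendering described in the experiment protocol: the text states only that the planar constraint used a haptic gain of k = 150 and does not reproduce the rendering equation. The sketch below shows a standard penalty-based virtual wall consistent with such a stiffness gain; the force law and units are assumptions, not the authors' exact implementation.

```python
import numpy as np

K_WALL = 150.0  # stiffness gain from the experiment (units assumed)

def wall_force(tool_pos: np.ndarray, plane_point: np.ndarray,
               plane_normal: np.ndarray) -> np.ndarray:
    """Penalty-based virtual wall: if the tool penetrates the plane,
    push it back along the normal in proportion to penetration depth.
    This is a generic haptic-rendering sketch, not the authors' code."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(plane_point - tool_pos, n)   # > 0 means penetration
    if depth > 0.0:
        return K_WALL * depth * n
    return np.zeros(3)

# Example: a wall at z = 0 constraining motion to the x-y task plane
f = wall_force(np.array([0.02, 0.01, -0.003]), np.zeros(3), np.array([0, 0, 1.0]))
print(f)  # roughly [0, 0, 0.45]
```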
Sensor data acquisition

To capture human response during the manipulation task, a custom multi-channel sensor data acquisition system was developed. Fig 2 shows a subject interacting with the acquisition system while controlling a robotic interface. Real-time physiological responses and user motor kinematics were recorded using the Robot Operating System (ROS) framework. Additionally, to minimize external distractions, noise-canceling headphones playing white noise were used. The sensor configuration and placement are shown in Fig 3. To monitor muscle activity, surface EMG signals were collected using a Shimmer Sensing toolkit at a sampling rate of 1024 Hz and streamed wirelessly via Bluetooth. Two muscles, the Pectoralis Major (PectM) and Deltoid Posterior (PostD), were selected for EMG data collection. These muscles are primarily active during internal and external rotations of the shoulder (internal rotation rounds the shoulder in, while external rotation rounds the shoulder back), and were found to be most active during the bivariate pointing task. Two Ag/AgCl electrodes (2 cm apart) were placed on the chest wall, 2 cm below the collarbone, to capture PectM muscular activity; another pair of electrodes was placed 2 cm below the lateral border of the scapular spine, parallel to the muscle fibers, to obtain PostD muscular signals [51]. As EMG signals can vary across individuals, we acquired the maximum voluntary isometric contraction (MVIC) level for each muscle. Each subject was asked to contract the muscle as strongly as possible and to hold the contraction for 30 seconds. To test the pectoralis major, subjects stood straight facing the wall with the elbow bent at a 90-degree angle and pressed their hand against the wall. Next, subjects were asked to rotate their arm and press the back of the hand against the wall to obtain the deltoid posterior MVIC. This procedure was repeated three times, with two-minute intervals between MVIC trials to relax the muscle. The highest peak value of the three iterations was chosen as the reference MVIC EMG value for each muscle. EEG signals were collected using the BIOPAC B-Alert X10 wireless EEG system with AcqKnowledge software. The sensor headset includes 10 Ag/AgCl scalp electrodes, and data were sampled at 1000 Hz. The sensors were placed at the mid-line and lateral EEG sites F3, Fz, F4, C3, Cz, C4, P3, POz, and P4, as recommended by the sensor manufacturer [52]. Furthermore, the EEG signals were benchmarked to obtain baseline data for the BIOPAC EEG bio-metrics for each individual subject. The EEG benchmark test consisted of three sessions (15 minutes in duration): a three-choice psychomotor vigilance task (3C-VT), a visual psychomotor vigilance task (VPVT), and an auditory psychomotor vigilance task (APVT) [53,54]. The heart rate of each subject was acquired using fingertip photoplethysmography (PPG) [55,56] (a.k.a. optical pulse measurement), included with one of the Shimmer Sensing units. The optical pulse sensor was attached to the ring fingertip of the dominant hand. For monitoring user GSR signals, the Shimmer sensor was also used to measure the skin conductance between the index and middle fingers. Both the GSR and PPG signals were sampled at a frequency of 512 Hz and transmitted by a Shimmer Sensing sensor to the data acquisition computer. To normalize the GSR and heart response for each subject, a benchmarking session was performed wherein subjects were asked to close their eyes and stay relaxed while listening to white noise for a 3-minute duration. The baseline heart rate and variance in skin conductance were recorded from this session.
Feature processing and extraction

A variety of metrics were generated to describe motor difficulty with respect to the human physiological response and movement during each trial. These metrics are independent of the type of task and are inspired by the literature, as described in the following subsections.

Physiological response metrics. Physiological response metrics were generated to describe the user experience in terms of cognition, attention, and physical effort. These metrics include those derived from EMG muscle activity, EEG cognitive state, skin conductance, and heart response signals. The raw EMG signals were digitally filtered (4th-order FIR filter) with a bandpass from 20 Hz to 450 Hz to attenuate motion artifacts. A 60 Hz notch filter was also applied to remove unwanted power-line interference. In addition, the DC component of the EMG signal was removed by subtracting the average of the signal. Two time-domain features of the EMG signals were extracted: the mean absolute value (MAV) and the root-mean-square (RMS), both normalized using the MVIC reference value for each subject. The mean absolute value of EMG is suggested to be an optimal detector of EMG amplitude, commonly used in EMG pattern recognition and myoelectric control [57,58]. As an extension of the regular mean-absolute-value calculation, the second-type MAV has a smoother weighting function, allowing for improved accuracy. This type of MAV is defined as an average of the weighted, rectified EMG signal amplitude:

MAV = (1/N) Σᵢ₌₁ᴺ wᵢ |xᵢ|,

where |xᵢ| is the i-th sample of the rectified EMG signal segment, N is the signal length, and wᵢ is the piecewise weighting function. Root-mean-square (RMS) is another commonly used feature in EMG signal analysis, useful for revealing force patterns and muscle contraction activation [57,59]. The RMS of EMG was calculated using a 2 ms moving-average window based on the rectified envelope of the EMG signals:

RMS = sqrt( (1/N) Σᵢ₌₁ᴺ xᵢ² ),

where xᵢ is the i-th sample of the rectified EMG segment of length N. For the EEG measurements, five cognitive parameters were obtained from the BIOPAC Cognitive State Analysis software. During each reaching trial, the average probabilities of the cognitive states were computed from the continuous wireless EEG recording at 1 Hz. The cognitive states include: Engagement, Workload, Distraction, SleepOnset, and Head Movement Level (HeadMvL). These metrics are classified from raw EEG signals and normalized as probability measures against the EEG benchmarking assessment of each individual subject. Potential contamination from movement artifacts and eye blinking is identified and removed via filtering methods [60]. Details regarding this EEG signal processing technique and validation of the cognitive state metrics can be found in the literature [53]. The sampled GSR signal was filtered using a 2nd-order lowpass filter with a cut-off frequency of 5 Hz to reduce unwanted muscle artifacts (high-frequency noise). To avoid potential phase distortion, the GSR signal was preprocessed with an FIR filter. The variance of the galvanic skin conductance, SCvr, defined as the difference between the global maximum and minimum of the GSR signal during each task trial, was computed and normalized using the baseline variance measurement for each subject [61].
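As an illustration of the two EMG amplitude features defined above, the sketch below computes a weighted MAV and the RMS of an EMG segment. The piecewise weighting is one common choice from the EMG feature literature and is an assumption here, since the exact wᵢ is not reproduced in this excerpt; the synthetic signal and MVIC value are likewise placeholders.

```python
import numpy as np

def mav2(x: np.ndarray) -> float:
    """Weighted mean absolute value (second-type MAV) of an EMG segment.
    The piecewise weight below is one common literature choice; the exact
    w_i used by the authors is not reproduced in this excerpt."""
    n = len(x)
    i = np.arange(1, n + 1)
    w = np.where(i < 0.25 * n, 4 * i / n,
                 np.where(i > 0.75 * n, 4 * (n - i) / n, 1.0))
    return float(np.mean(w * np.abs(x)))

def rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude of an EMG segment."""
    return float(np.sqrt(np.mean(np.square(x))))

# Both features would then be divided by the subject's MVIC reference value.
emg = np.random.randn(1024)   # one second at 1024 Hz (synthetic placeholder)
mvic = 2.5                    # hypothetical per-muscle MVIC reference
print(mav2(emg) / mvic, rms(emg) / mvic)
```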
Heart rate response was acquired using photoplethysmography (PPG) at the user's fingertip. The raw PPG signals were filtered by an FIR filter to remove noise and obtain a linear envelope of the PPG signal. The heart rate was calculated based on the peak-to-peak intervals (PPI) of the PPG linear envelope. The calculated heart rate was normalized for each subject by the average heart rate measured in the baseline task.

Kinematic motion analysis. In order to assess the ease of the haptic device manipulation task, kinematic motion metrics were derived in both the task (i.e., haptic device) and user (i.e., limb) workspaces. In the simulated task space, we extracted two task-independent metrics from position measurements of the end-effector: the path straight deviation, PathStrDev, and the path efficiency, PathEff, as shown in Fig 4. Path straight deviation (PathStrDev) is defined as the average magnitude of the orthogonal deviation of the current position, pᵢ, from the vector a between the initial user position, p₀, and the end user position, pₙ:

PathStrDev = (1/n) Σᵢ₌₁ⁿ ‖(pᵢ − p₀) − ((pᵢ − p₀)·â) â‖,  with â = (pₙ − p₀)/‖pₙ − p₀‖.

The path straight deviation quantifies the straightness of the trajectory compared to the purely straight path between the starting and final positions. This metric is task-independent since it is computed based on the user-defined end-point, not the target end-point. The path efficiency (PathEff) is calculated by dividing the total path length of the tool by the straight-line distance between the initial and end positions of the user. PathEff has been reported as a measure of the user's ability to continuously control the end-effector [62,63]. To characterize limb motion in the user space, we generated three types of motion metric using data from inertial measurement units (IMUs) on both the forearm (FA) and upper arm (UA). These metrics include: the average magnitude of the angular velocity (AngVel), the linear acceleration (LinAcc), and the root-mean-square magnitude of jerk (Jerk). The resultant jerk, J, as a function of the time derivative of acceleration, is defined by:

J(t) = sqrt( (d³x/dt³)² + (d³y/dt³)² + (d³z/dt³)² ),

where x, y, z are the three-dimensional components of the trajectory in Cartesian space. The effects of gravity on the linear acceleration measurements obtained from the IMUs were eliminated by calculating the acceleration magnitudes for each time-sampled x, y, z component. In this study, the root-mean-squared magnitudes of jerk were computed in order to reduce the variability and noise of the measurements, by averaging over the entire waveform. Based on the literature, the jerk metric is a valid candidate for assessing movement smoothness, which can be interpreted as a general measure of overall control ability [64,65].

Data analysis and statistics

A Pearson product-moment correlation analysis was performed to assess the correlation between the index of difficulty and the extracted explanatory features. Correlation coefficients were calculated using the data across all subjects and repetitions. A correlation coefficient r above 0.60 was considered a strong correlation, a value between 0.30 and 0.59 a moderate correlation, and a value between 0.20 and 0.29 a weak correlation. In addition, a one-way analysis of variance (ANOVA) was conducted to test for an effect of task difficulty on the proposed dependent variables. Significance was determined by a p-value less than 0.05.
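As a compact illustration of the kinematic metrics defined above, the sketch below computes PathStrDev, PathEff, and the RMS jerk from a sampled tool trajectory. PathEff is computed exactly as defined in the text (total path length over straight-line distance; some papers use the reciprocal); sampling and filtering details are omitted.

```python
import numpy as np

def path_metrics(p: np.ndarray, dt: float):
    """p: (n, 3) array of tool positions sampled every dt seconds.
    Returns (PathStrDev, PathEff, rms_jerk) under the stated definitions."""
    a = p[-1] - p[0]
    a_hat = a / np.linalg.norm(a)
    rel = p - p[0]
    # Perpendicular deviation of each sample from the straight start-end line
    rejection = rel - np.outer(rel @ a_hat, a_hat)
    path_str_dev = np.mean(np.linalg.norm(rejection, axis=1))
    # Path efficiency as defined in the text: total path length over the
    # straight-line distance between the initial and end positions.
    seg = np.diff(p, axis=0)
    path_eff = np.sum(np.linalg.norm(seg, axis=1)) / np.linalg.norm(a)
    # RMS magnitude of jerk via a third finite difference of position
    jerk = np.diff(p, n=3, axis=0) / dt**3
    rms_jerk = np.sqrt(np.mean(np.sum(jerk**2, axis=1)))
    return path_str_dev, path_eff, rms_jerk
```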
Modeling task difficulty

Metrics extracted in the previous section were used to quantify reaching tasks with different indices of difficulty. Four candidate models were generated, using a regression technique, to evaluate the ability of kinematic and physiological response metrics to predict task difficulty when compared to movement time. This type of modeling is important because movement time in teleoperated and shared-control systems is ill-defined (i.e., the robotic system has no knowledge of when the user ends a given task).

Feature selection and metric sets for modeling. To determine the best subsets of metrics, a feature selection process was carried out using the Pearson correlation criteria. Candidate metrics with no correlation or a very weak correlation to task difficulty were excluded to reduce dataset uncertainty: metrics with correlation coefficients below 0.20 were removed from further modeling. Four sets of metrics were then chosen, one for each of the four models generated. Specifically, we were interested in evaluating the ability of both physiological and kinematic metrics to predict task difficulty when compared to movement time, which is the only metric used in the Fitts' law model. For each model, a different set of metrics was used: only movement time (Set I); all metrics with at least a weak correlation to task difficulty (Set II); only kinematic metrics (Set III); and only physiological response metrics (Set IV).

Metric normalization. Due to differences in the dynamic ranges and units of the input metrics, the raw data were pre-processed using a z-score transformation. For a sampled variable x with mean μ and standard deviation σ over n instances, the z-score for each data point is calculated as:

zᵢ = (xᵢ − μ)/σ,

where xᵢ is the i-th data point of the n sample instances.

Partial least squares regression. A multivariate data-based approach, Partial Least Squares Regression (PLS-R), was chosen to generate each of the four models. The general aim of a partial least squares regression model is to explain the task difficulty index (the response) using the multiple characteristics of the human response (the predictor variables) as input. The underlying decomposition of PLS-R is formulated as:

X = TPᵀ + E,  Y = UQᵀ + F,

where Y is the matrix of responses and X is the matrix of input predictor variables; T and U are the score (latent) matrices obtained by projecting X and Y, respectively; P and Q are orthogonal loading matrices; and E and F are the residual error matrices, which are assumed to be random and normally distributed. Mathematically, the partial least squares regression is achieved by maximizing the covariance between the two score matrices, T and U, so as to maximize the covariance between the responses (Y-matrix, ID) and all possible linear combinations of the predictor variables (X-matrix). In general, PLS regression can produce more reliable models than other standard regression methods, such as multiple linear regression. PLS methods are particularly suitable for dealing with high-dimensional and noisy data, handling a large number of predictor variables with a small set of observations. Additionally, PLS regression allows multivariate modeling while handling the potential problem of multicollinearity, which is common in multivariate datasets.
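A minimal sketch of this modeling pipeline, combining the z-score normalization and PLS-R above with the 10-fold cross-validation described in the next subsection. It assumes scikit-learn; the number of latent components is an illustrative choice, not a value reported here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

def cross_validated_pls(X: np.ndarray, y: np.ndarray, n_components: int = 3):
    """10-fold cross-validated PLS regression.
    X: (n_samples, n_features) metric matrix; y: task difficulty indices.
    n_components is an assumed hyperparameter, not reported in the paper."""
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    preds = np.empty_like(y, dtype=float)
    for train, test in kf.split(X):
        # z-score parameters are estimated on the training folds only
        scaler = StandardScaler().fit(X[train])
        pls = PLSRegression(n_components=n_components)
        pls.fit(scaler.transform(X[train]), y[train])
        preds[test] = pls.predict(scaler.transform(X[test])).ravel()
    return preds
```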
Training, testing and evaluation. For the purpose of valid prediction and unbiased assessment without overfitting, a k-fold (k = 10) cross-validation (CV) technique was employed. In the k-fold cross-validation, the normalized data samples were randomly partitioned into k non-overlapping subsets of equal size. The holdout process was repeated k times: in each iteration, a single subset of observations was used for testing, while the union of the remaining k − 1 subsets formed the training set. The CV estimates of overall accuracy were acquired by averaging the individual measures across the k folds to obtain a reliable assessment. To assess predictive performance, models in both the training and testing steps were evaluated using the following deterministic criteria: the Coefficient of Determination (R²), Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). In addition to these scale-dependent accuracy measures, dimensionless accuracy metrics, the Normalized Root Mean Squared Error (NRMSE) and Normalized Mean Absolute Error (NMAE), were obtained to assess the relative magnitudes of the residual errors, giving an idea of the relative differences between the modeled and observed values. Mathematical definitions and descriptions of the deterministic criteria are given in Table 1.
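The scoring criteria above can be written out directly; the sketch below implements them, normalizing by the observed range of the response (one common convention, assumed here since Table 1 is not reproduced in this excerpt).

```python
import numpy as np

def evaluation_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Deterministic criteria used to score each model. Normalizing the
    error metrics by the observed range is an assumed convention."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    span = y_true.max() - y_true.min()
    ss_res = np.sum(err**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    return {
        "R2":    1.0 - ss_res / ss_tot,
        "RMSE":  rmse,
        "MAE":   mae,
        "MAPE":  100.0 * np.mean(np.abs(err / y_true)),  # y (ID) is never zero here
        "NRMSE": rmse / span,
        "NMAE":  mae / span,
    }
```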
Statistical analysis

Example user trajectories, obtained from the position data of the haptic device end-effector, are shown in Fig 5. Fig 6 shows all of the measured outcome variables, extracted from the EMG, EEG, GSR, and IMU sensors and the haptic device, in the predefined tasks associated with different indices of difficulty (IDs). Error bars illustrate the data variance at a 95% confidence interval. It should be noted that the ranges of the raw data from the multiple channels differ greatly. Therefore, all measures were linearly transformed by min-max normalization for ease of visualization and comparison, without distorting the raw data: the entire range of values for each feature is mapped to the range from 0 to 1. The physiological response metrics, including EMG muscle activity, EEG cognitive states, galvanic skin conductance, and heart response, are shown in Fig 6A-6C; movement features in task space, path straight deviation and trajectory efficiency, are shown in Fig 6D; user limb motion kinematics of both the forearm and upper arm, including angular velocity, linear acceleration, and RMS magnitude of jerk, are shown in Fig 6E and 6F, respectively. In addition, Pearson's correlation coefficients, r, were reported to indicate the relationship between the features and the predefined task difficulty index. Significance levels of the correlation effects were determined via a p-value threshold of 0.05. Results of the correlation analysis for the predictor variables, with significance values, are reported in Table 2. The measured movement time (MT) using the haptic device was most significantly associated with the task difficulty indices, p < 0.001, with the highest correlation coefficient, r = 0.93, among all metrics computed. More importantly, high correlation coefficients were found between the difficulty index and the user motion kinematics, in both the end-effector task space and the user space, ranging from 0.66 to 0.90, with significance p < 0.001. Specifically, for the movement measures in task space, the path straight deviation and path efficiency showed significantly high correlations with task difficulty, r = 0.67 and r = −0.83, respectively, p < 0.001. Furthermore, the user physiological response demonstrates similar trends across difficulty levels, but with slightly lower correlations compared to the motion characteristics. For the EMG muscle activity, the average correlation coefficients of RMS and MAV for both active muscles, the posterior deltoid and pectoralis major, are 0.58 and 0.52, respectively, p < 0.001. Heart rate response was moderately correlated with task difficulty, r = 0.35, followed by the variance of skin conductance (SCvr), r = 0.31, p < 0.001. All EEG-based cognitive metrics derived from the BIOPAC software are significantly correlated with task difficulty levels, p < 0.05. Among these measures, the Engagement and Workload metrics show moderate correlations with difficulty, with coefficients r = 0.22 and r = 0.23, respectively. Three cognitive features, Distraction, SleepOnset, and HeadMvL, however, show correlations of less than 0.20 and were thus excluded from the subsequent regression modeling.

Model evaluation

In this section, the overall accuracy of each of the four difficulty models is investigated. After data acquisition and pre-processing, a total of 420 observation samples were obtained in our database. Table 3 shows the predictor variables included in and excluded from regression modeling as a result of the correlation-based feature selection and feature-level fusion. Aggregated accuracy results calculated from the 10-fold cross-validation are reported in Table 4. Overall, the fusion model, combining both physiological response and kinematic motion metrics, exhibited the highest accuracy in predicting task difficulty. The coefficient of determination (R²) of the fusion model is 0.927, with the best accuracy with respect to RMSE, MAE, and MAPE. In addition, the fusion and kinm models do not show significant differences in ID prediction accuracy. This indicates that reduced subsets of predictors are able to predict task difficulty with a good fit, even though fewer explanatory predictors are included. The kinm model showed significantly higher accuracy than the other individual modalities, followed by the MT and physio models. In contrast, significant differences in prediction accuracy were observed between the movement time model and most of the other modalities; the results in Table 4 confirm this analysis of prediction accuracy.

Discussion

As an important step in our analysis, we examined the relationship between the movement time and the difficulty index. It was found that the increase in movement time using the haptic device is essentially proportional to the predefined difficulty index. This result shows that Fitts' law is preserved in the control of the haptic device, consistent with prior studies showing the speed-accuracy trade-off in various robotic interfaces. Indeed, decreases in movement time have been reported to be associated with improved technical skills, control effectiveness [66], and subjectively perceived difficulty [67]. However, it should be emphasized that the measure of movement time is insufficient to explain the various aspects of motor difficulty, as shown in Table 4. Consistent with our hypothesis, confirmation of the task difficulty differences was observed in the sensorimotor response of human users, in terms of both physiological response and motion kinematics. Notably, the kinematic motion profiles, obtained from movement in the user and task workspaces, provide the best predictors of task difficulty. Specifically, for user limb motion, the produced movement amplitudes and smoothness show a direct one-to-one mapping onto the difficulty index.
Tasks with higher difficulty are associated with increased angular velocity and linear acceleration. This increase in task difficulty is also reflected in an increase in the magnitude of movement jerk. This observation indicates a significant challenge in maintaining smoothness of limb motion at higher task difficulty. For the motor performance measures in the task space, the correlation analysis shows a significant reduction in trajectory efficiency and a decrease in the straightness of the end-effector paths with increased task difficulty. A gradually changing difficulty index likely evokes adjustments in user movement patterns and affects the user's overall ability to control the device. Similarly, the user physiological response also demonstrates observable associations with the task difficulty indices. It is clear that higher task difficulty levels significantly correlate with increased underlying muscular activity, user engagement, cognitive workload, and their respective variances. One explanation for these results is that, in order to reach far-away targets, participants have to engage more mentally in path planning and allocate greater cognitive effort and attention while maintaining performance accuracy. It should be noted, however, that the measures of physiological response were generally less correlated with the difficulty index than the motion kinematics. An explanation for this is that varying the target distances has a direct and distinct impact on kinematic motion behavior and motor performance; in contrast, the user physiological response may not be as sensitive to changes in the difficulty index. This might be due to the nature of the participants' physiological activity and the relatively low signal resolution. Nevertheless, given the significant associations between IDs and human cognition, the task difficulty index can serve as an objective indicator of user workload and dynamic fluctuations in cognition. In addition to the above statistical analysis, models were constructed by PLS-R regression to explore the usefulness of the predictor variables in optimizing the identification of task difficulty. Comparing the results in Table 4, it is clear that multimodal fusion, combining user physiological response and motion characteristics, has a synergistic effect in improving prediction accuracy, ultimately enhancing the value of the information for the various levels of task difficulty. User physiological response, in conjunction with kinematic motion analysis, was able to explain the motor difficulty of tasks and its variance more accurately. Of course, these improvements involve the increased cost of multiple sensors and a higher computational load. The trade-off between predictive accuracy and model complexity is an important consideration for designers of human-in-the-loop control systems. A potential limitation of this study is the accuracy and robustness of partial least squares regression in modeling the highly complex, nonlinear relationships between the response and the input predictor variables. Since several parameters demonstrated nonlinear or exponential relationships with the difficulty index, an improvement in predictive performance can be expected from the adoption of advanced machine learning methods, such as neural networks.
Moreover, further improvement could be made by improving the user motion capture system and processing methods. Various data processing techniques in both the time and frequency domains, such as Power Spectral Density (PSD) analysis, could be considered for better detection of user physiological feedback and of how subjects coordinate their movements during human-robot interactions [16,57,68]. A larger group of participants would also strengthen these results. Finally, it must be emphasized that the basic target-reaching tasks in this study could differ considerably from realistic complex tasks, in which humans typically interact with robots to perform unstructured manipulation. However, segments of user behavior have demonstrated similarity to the representative target-reaching motions [69,70]. Additionally, the size of the target, as one of the control parameters, may affect user motion and physiological response in a real, physical system. Therefore, additional consideration is necessary when applying our model to broader robot-assisted cases with various kinds of manipulation. The target size will also be an important consideration for our future work. Regardless, the results indicate a distinct advantage of using the multivariate data-driven approach to assess difficulty. Also, the features used in this study are independent of task type, and thus have the potential to be applied more globally.

Conclusion

Difficulty during human-in-the-loop control interactions is hard to define and measure objectively. In this paper, we present and evaluate a model to estimate task difficulty by leveraging Fitts' law. Statistical findings from typical reaching tasks confirm the correlations between user sensorimotor response metrics and task difficulty, p < 0.001. Motion kinematic metrics gave the best prediction of task difficulty, R² = 0.921; a fusion of physiological metrics and motion kinematics provides a richer source of information for the identification of difficulty, R² = 0.927, a 30.63% improvement in predictive accuracy compared with the movement time model. Overall, the task difficulty models presented in this paper, and the method used to develop them, provide useful insights into human response during human-in-the-loop control tasks. As our proposed models are independent of the task, they could be useful for the evaluation of more complex control tasks, such as teleoperated or shared control of robotic systems.
Guided fracture of films on soft substrates to create micro/nano-feature arrays with controlled periodicity

While the formation of cracks is often stochastic and considered undesirable, controlled fracture would enable rapid and low-cost manufacture of micro/nanostructures. Here, we report a propagation-controlled technique to guide the fracture of thin films supported on soft substrates to create crack arrays with highly controlled periodicity. Precision crack patterns are obtained by the use of strategically positioned stress-focusing V-notch features under conditions of slow application of strain, to a degree where the notch features and intrinsic crack spacing match. This simple but robust approach provides a variety of precisely spaced crack arrays on both flat and curved surfaces. The general principles are applicable to a wide variety of multi-layered materials systems because the method does not require the careful control of defects associated with initiation-controlled approaches. There are also no intrinsic limitations on the area over which such patterning can be performed, opening the way for large-area micro/nano-manufacturing.

Fracture-induced patterns are ubiquitous in nature, and diverse patterns 1,2 are found in many inanimate objects such as dried mud and rocks. Cellular or hierarchical crack patterns are also observed on the surfaces of living creatures, occasionally determining their appearance 3,4. In order to increase our understanding of these problems, the formation of crack arrays in multilayers has been extensively studied as a fundamental problem in fracture mechanics 5-11. Using this knowledge of the theoretical background, artificial crack patterning on layered materials has been achieved for practical applications in micro/nano-fabrication 12-16. Nevertheless, cracks are typically difficult to control precisely as a means of manufacturing, because they tend to initiate from random defects created during processing. Carefully controlled conditions, such as those that can be obtained in a cleanroom, are required to create systems in which any natural defects are small and few enough for fracture to be controlled by the subsequent deliberate introduction of artificial flaws 17,18.
While it has been shown that such an initiation-controlled approach can be used to control fracture patterns 18, this technique has only been applied to materials in which intrinsic flaws are kept below a minimum threshold, and it is not robust against the accidental introduction of damage, nor suited to use with soft materials. Here, we report an alternative and more general propagation-controlled approach for precision cracking of multi-layered materials; one that is relatively robust and not sensitive to the nature of the flaws in the system. While our experiments focus on thin films supported by silicone elastomers, the general principles elucidated by the observations are applicable to a broad range of multilayered systems. To our knowledge, this is the first approach to control crack patterns by propagation control, rather than initiation control, and it is a technique that can also be used on multi-layered soft materials not prepared under cleanroom conditions.

Results

Guided fracture of thin films deposited on soft substrates. To investigate and demonstrate the concept of guided fracture, we designed parallel microgroove structures with opposing V-notch features, spaced up to 300 μm from each other (forming opposing saw-tooth structures). Polymer-"silica" and polymer-metal bilayer systems were fabricated using micro-patterned polydimethylsiloxane (PDMS) substrates supporting a thin surface layer of oxidized PDMS (SiOx, 200 nm thick 19) or gold (Au, 40 nm on 10 nm chromium (Cr); Fig. 1a). (An adhesion layer of chromium was used to avoid delamination of the gold film during the formation of cracks in this system.) We determined experimentally that notch lengths of 40 μm were suitable to initiate crack formation in these systems under a minimum applied critical strain (εc) (Fig. 1b). Since the notches are intended to activate existing intrinsic defects at their tips, and shadow defects at other locations, rather than introducing a region of significant stress concentration 18, the design of the notch angle was robust over a range of values (see Supplementary Fig. 1s). Notch angles of 50° were chosen for both systems presented in this work, because of the technical difficulties in fabricating an array of very sharp tips using SU-8 photolithography. It was found that an angle of 50° provided the optimum balance of ease of fabrication and generation of single cracks. All further experiments were performed using these parameters while varying the spacing, d, between the notches. Earlier work has demonstrated that crack arrays are intrinsically much denser for the SiOx/PDMS system than for the Au/PDMS system, owing primarily to the significantly higher modulus of the Au 5,13,20 (see Supplementary Information for further discussion of fracture of the gold film). In our case, the natural crack spacing with uniform stress applied was 340 ± 180 μm in the Au/PDMS system and 16.2 ± 8.3 μm in SiOx/PDMS at 10% strain, consistent with analyses 21. The spacing of our V-notches fell between these two natural crack spacings, explaining the differences observed between the two material systems studied. For example, the Au/PDMS system at 10% strain produced uniform cracks originating from the notch tips for all the spacings used (50 to 300 μm; Table 2). At larger spacings and strains, additional cracks were formed between notches (see Supplementary Fig. 4).
The high modulus of the gold was also responsible for driving the cracks into the substrate, which also affected the crack spacing 5,20. The wrinkles that often formed at higher strains, perpendicular to the cracks, were generated by a Poisson's ratio effect 22: the uncracked region of the substrate induced a compressive strain in the cracked surface layer, where the stresses had been relaxed by the cracks, and this compressive transverse strain was relaxed by buckling.

Three regimes of cracking. Three regimes of crack formation were identified based on the level of applied strain (Fig. 2a; see Supplementary Fig. 5). In Regime I, at low strains below the threshold required to create uniform crack arrays, the ratio of the number of cracks to the number of V-notches was less than 1. In Regime II, at intermediate strains, there was a 1:1 correspondence between the number of cracks and notches. At higher strains, in Regime III, additional cracks were generated between the notches. It is important to note that before entering Regime III, one crack formed from every notch tip (see Supplementary movie). As modeled 21, in both Regimes I and III the average crack spacing is similar to that obtained in systems without patterned notches. In Regime I, the thermodynamic spacing is larger than the spacing of the V-notches. In Regime III, the thermodynamic spacing is smaller than the spacing of the V-notches, and cracks can be initiated from intrinsic defects between notches. The range of strains over which Regime II can be obtained depends on the spacing between the notches, and is dictated by the strains corresponding to the inherent crack spacing that would be obtained in the absence of notches. Therefore, the link between the appropriate strain range and the notch spacing depends on the material properties of the system (Fig. 2b-c). Our experiments showed that a wider notch spacing reduces the strain range for Regime II and, while we were able to obtain a broad range of strains for Regime II in the Au/PDMS system, achieving the same strain range in SiOx/PDMS would have required a much smaller spacing of V-notches.
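The regime logic above can be restated compactly: compare the strain-dependent natural (thermodynamic) crack spacing with the V-notch spacing. The sketch below is an illustrative restatement only; the tolerance band standing in for the Regime II window is an assumption, since in practice that window is set by the film/substrate properties and the applied strain.

```python
def cracking_regime(natural_spacing_um: float, notch_spacing_um: float,
                    tol: float = 0.25) -> str:
    """Classify the cracking regime by comparing the strain-dependent natural
    (thermodynamic) crack spacing with the V-notch spacing. The +/- tol band
    approximating the Regime II window is an illustrative assumption."""
    if natural_spacing_um > notch_spacing_um * (1 + tol):
        return "Regime I: fewer cracks than notches (strain too low)"
    if natural_spacing_um < notch_spacing_um * (1 - tol):
        return "Regime III: extra cracks form between notches (strain too high)"
    return "Regime II: one crack per notch (spacing and strain matched)"

# Synthetic examples (not measured values):
print(cracking_regime(400.0, 100.0))  # -> Regime I
print(cracking_regime(100.0, 100.0))  # -> Regime II
print(cracking_regime(20.0, 100.0))   # -> Regime III
```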
Optimization of notch structure positioning and strain rates. Although uniform crack spacing was achieved in Regime II, individual cracks between paired notches occasionally generated multiple branches, presumably because cracks simultaneously propagated from opposing sides and met to form misaligned cracks (Fig. 3a). To reduce the incidence of such imperfections, we tested un-paired notch configurations that initiate cracks from one side only. However, cracks starting from the notched side occasionally did not propagate to the other side, and intrinsic defects along the unpatterned side were occasionally activated and extended to the notched side (Fig. 3b). To address these imperfections, alternating notch structures were tested. This configuration significantly improved crack quality, and cracks propagated only from the notches (Fig. 3c). The problem with paired notches is that each notch in a pair is the same distance from a fully channeled crack, so cracks are equally likely to propagate from either notch. A similar problem exists with notches on only one side: since the notches only serve to control crack propagation and shield flaws at neighboring sites, channel cracks can grow from the un-notched side just as readily as from the notched side. The advantage of the alternating notches is that each notch on each side provides a single site from which it is thermodynamically possible for a crack to channel at a given level of strain and the current crack pattern. Another fabrication parameter found to be useful for improving crack quality was the strain rate. A reduction in the applied strain rate improved the quality of the cracks formed, particularly for the paired and unpaired notch features (Fig. 3d). This improvement is presumably because slow strain rates give cracks sufficient time to propagate across the specimen surface before fresh cracks are initiated from the opposite face.

Controlled fabrication of arrays with user-defined variable crack spacing. To demonstrate the versatility of this technique beyond uniformly spaced notches, structures with varying notch spacing were designed. Although there were limits on the magnitude of the notch spacing variability that could be accommodated, it was possible to generate arrays of cracks with variable spacing in both the Au/PDMS and SiOx/PDMS systems (Fig. 4a-d). The key is to maintain the applied strain and notch spacing such that the entire system stays within cracking Regime II (Fig. 2a). We did observe, as expected, that for a given strain, crack widths were larger in regions of increased notch spacing (Fig. 4a-b), owing to the larger stress that had to be accommodated by each crack. Rather than having notch features with uniform or gradually increasing spacing, the notch features could also be made with irregular spacing (Fig. 4c-d). Increasing crack lengths with offset notch positions (Fig. 4e) or aligned notch positions (Fig. 4f) could be generated in the Au/PDMS (Fig. 4e) or SiOx/PDMS system (Fig. 4f) to provide crack patterns over large areas. Additionally, arrays of width-adjustable cracks, each several millimeters in length, could also be obtained (see Supplementary Fig. 6a). Millimeter-long cracks in the Au/PDMS system were sometimes observed to be branched; however, this was overcome by utilizing sequential arrays of diamond-shaped notches, which encourage crack generation in both directions so that cracks connect to each other without branching (see Supplementary Fig. 6b).

Controlled fracture on curved surfaces. We further utilized these principles to demonstrate the fabrication of three-dimensional, micro- and nano-sized patterns on multilayer soft polymer systems (Fig. 5; see Supplementary Fig. 7). The orientation of the crack patterns can be controlled by changing the applied forces. For example, a thin Au/PDMS film (~1 mm) was rolled and bonded into a cylinder by plasma treatment or a clip. The film was thin enough not to generate cracks during the rolling step. A hoop stress was gradually applied to the cylinder by the slow expansion of a compressed foam cylinder (Fig. 5a). The resultant cracks followed the designed notch pattern (Fig. 5b), demonstrating that this technique is potentially applicable to broader ranges of non-flat pattern fabrication.

Discussion

It is important to realize that the situation considered in the current work, and the strategy for forming regular patterns, is fundamentally different from that presented by Nam et al. for relatively "defect-free" material systems 18. The approach described in that work relied on controlling the flaw population responsible for generating cracks.
By careful processing, the intrinsic flaw population was kept so low that, in the absence of artificial flaws, the crack spacing would be much greater than expected from the predictions of fracture mechanics. The introduction of artificial flaws then allowed control of the crack spacing, but always at spacings much larger than the thermodynamic equilibrium levels. The mechanics of fracture can be broadly categorized into two situations: (i) the "defect-free" situation, where there are few defects large enough to readily nucleate cracks, and (ii) the situation where the number of large defects from which cracks readily propagate is not limiting. In the former situation, the limiting factor for initiating cracking is achieving a sufficiently high stress. Thus, in such materials, crack formation can be controlled by stress concentration without much concern for unintended crack formation. The challenge is creating features that are sharp enough to produce the level of stress concentration needed to form cracks while keeping the material devoid of any unintended damage or flaws. This type of defect-free material requires careful fabrication techniques, often in a cleanroom 18. The alternative situation, in which crack arrays are propagation-controlled, is more common. Here, the statistics of the flaw population do not play a significant role in determining the average spacing of the arrays; the spacing is determined primarily by the applied strain, the toughness and thickness of the film, and the moduli of the film and substrate 5,9,13,23. Thus, in such materials, controlled crack formation relies on shielding crack propagation from unwanted sites, rather than actively initiating cracks from artificial flaws. Our current work falls under this latter category, and a detailed analysis of the interaction between the mechanics and the statistics of these two approaches has been presented elsewhere 21. To be more specific, whether intrinsic flaws might trigger random cracks depends on the spacing between the V-notch stress concentrators and the intrinsic defect density. In our system, the notch spacing is much larger than the spacing between intrinsic flaws. Conversely, in the system described by Nam et al. 18, the "natural" crack spacing, which assumes abundant flaws, can be estimated as about 4 μm, using the data given in that paper 18 and the analysis of Thouless et al. 11. This is much smaller than the notch spacing of 100 μm that was used in that study, and the fact that the notches resulted in a crack spacing much larger than the natural spacing indicates that a very low density of intrinsic flaws was achieved by the careful processing. The systems we present in this paper are much closer to an equilibrium configuration. Therefore, the fabrication of uniform crack arrays requires not only the creation of stress-concentrating features, but also coordination of the spacing between the features with the applied strain. The beneficial consequence of this design approach is that the crack patterns are stable against the introduction of additional flaws, making the method more robust and applicable to a broader range of materials and processing parameters, including curved and 3D soft structures. In summary, we demonstrate robust guidance of fracture in multilayer soft materials via selective activation of intrinsic defects, based on matching the spacing design and strain application to the material properties of the system.
In defect-rich materials such as those utilized in this paper, precision fracture fabrication requires, first, suppression of undesired stress concentrations and, second, localized stress focusing to initiate cracks. Here, we balance these two opposing requirements of stress relief and stress focusing using alternating V-notch features, slow strain rates, and, most critically, the use of a strain at which the V-notch spacing matches the equilibrium average crack spacing of the material system under study. This simple and versatile approach achieves surprisingly precise micro- and nanoscale features in a variety of readily processed materials and in a variety of geometries, including curved surfaces.

Methods

Saw-tooth structure design and device fabrication. V-notch structures were fabricated using standard soft-lithographic techniques. Patterns were designed in AutoCAD and printed on transparent films (CAD Art Inc.). SU-8 lithography was used to prepare a mold on silicon wafers. A 5:1 mixture of curing agent and PDMS (Sylgard 184, Dow Corning) was mixed and cast against the molds, and cured at 60 °C for at least 4 hours before peeling the finished structures from the mold. Two different thin-film layers were produced on the cured PDMS: (i) a plasma-treated oxidized PDMS layer, for which the cured PDMS bearing the transferred microstructures was directly exposed to plasma treatment for 6 min at 200 W (COVANCE-MP, Femtoscience Inc.); (ii) a metal layer, for which the cured PDMS was coated with a 10 nm Cr adhesion layer and then a 40 nm Au layer using e-beam deposition. A tensile strain was applied using specially designed stretchers.

Microscopy and image analysis. Bright-field images were acquired using a Nikon Ti-U microscope. A laser interferometer (LEXT, Olympus OLS4000) at 100× magnification was used to acquire 3D images of the cracks. The LEXT software was used to measure the width, spacing, and depth of the cracks.

Strain-rate-dependent crack formation. Stretching at defined strain rates was performed using a programmable, electric stepping-motor-driven stretcher (Scholar Tec Corp., Osaka, Japan; Model NS-500; a similar system is available as the Strex ST-140). This system was used to compare crack formation between the paired, un-paired, and alternating V-notch structures. Crack images were taken with an inverted microscope.
Reduced NCK1 participates in unexplained recurrent miscarriage by regulating trophoblast functions and macrophage proliferation at the maternal-fetal interface

Abstract

Recurrent miscarriage (RM) seriously affects the physical and mental health of women of childbearing age, and 50% of the causes are unknown. Thus, it is valuable to investigate the causes of unexplained recurrent miscarriage (uRM). Similarities between tumor development and embryo implantation suggest that tumor studies are informative for uRM. The non-catalytic region of tyrosine kinase adaptor protein 1 (NCK1) is highly expressed in some tumors, and can promote tumor growth, invasion and migration. In the present paper, we first explore the role of NCK1 in uRM. We find that NCK1 and PD-L1 are greatly reduced in peripheral blood mononuclear cells (PBMC) and decidua from patients with uRM. Next, we construct NCK1-knockdown HTR-8/SVneo cells, and find that NCK1-knockdown HTR-8/SVneo cells exhibit reduced proliferation and migration ability. We then demonstrate that the expression of PD-L1 protein is decreased when NCK1 is knocked down. In co-culture experiments with THP-1 and differently treated HTR-8/SVneo cells, we observe significantly increased proliferation of THP-1 in the NCK1-knockdown group. In conclusion, NCK1 may be involved in RM by regulating trophoblast proliferation and migration, and by regulating PD-L1-mediated macrophage proliferation at the maternal-fetal interface. Moreover, NCK1 has the potential to be a new predictor and therapeutic target.

Introduction

Recurrent miscarriage (RM) is defined as two or more spontaneous abortions before 20 weeks of gestation. RM is a multifactorial disease that affects a great number of couples. Miscarriage occurs in about 15% of normal couples (Alijotas-Reig and Garrido-Gimenez, 2013), and the incidence of RM in pregnant women is up to 2-4% (Khalife et al., 2019). Researchers have grappled with the causes of RM and found that the main causes include chromosomal abnormalities, structural abnormalities, infections, immune dysfunction and thrombophilic disorders (Rai and Regan, 2006). However, approximately 50% of cases still cannot be explained by known causes, and these cases are known as unexplained recurrent miscarriages (uRM) (Alijotas-Reig and Garrido-Gimenez, 2013). Therefore, it is worth the effort to reveal the causes of uRM. Embryo implantation and placenta formation, which depend on the proliferation, migration and invasion of trophoblast cells, as well as on immune homeostasis at the maternal-fetal interface, are essential to maintain pregnancy. Reduced trophoblast proliferation and migration are closely associated with RM (Ding et al., 2019). Macrophages are the second-largest immune cell population at the maternal-fetal interface, and macrophage-derived growth factors promote uterine artery remodeling (Shapouri-Moghaddam et al., 2018). Abnormal macrophage numbers and an imbalance of M1/M2 subtype polarization are both potential causes of RM (Zhang J et al., 2022; Huang et al., 2021). The similarity between tumor cell-induced immune responses and the immune microenvironment at the maternal-fetal interface suggests potential similarities between pregnancy-related disease and tumor development (Mor et al., 2017). Thus, the exploration of uRM can draw on the study of tumors.
The non-catalytic region of tyrosine kinase adaptor protein 1 (NCK1) contains three SH3 domains at the amino terminus and one SH2 domain at the carboxyl terminus (Paensuwan et al., 2016). Recent studies have found that NCK1 positively regulates cell proliferation and migration (Dubrac et al., 2018; Liu et al., 2020). Moreover, NCK1 plays a key role in enhancing TCR signal strength (Roy et al., 2010) and in the differentiation of CD4+ helper T cells (Lu et al., 2015), implying that NCK1 has immunomodulatory functions. Programmed cell death 1 ligand 1 (PD-L1) is a transmembrane molecule that is strongly associated with immune modulation via interaction with its receptor, programmed cell death-1 (PD-1) (Beenen et al., 2022). Upregulated PD-L1 on tumor cells has the ability to inhibit the proliferation and infiltration of T cells (Hu et al., 2022). Moreover, PD-L1 can transmit a negative signal to tumor-associated macrophages to inhibit macrophage proliferation (Hartley et al., 2018). In addition, PD-L1 affects the proliferation, migration and invasion of tumor cells (Eichberger et al., 2020; Cao et al., 2022). Studies have shown the importance of the PD-L1/PD-1 interaction in the maintenance of pregnancy, and PD-L1 expression was decreased in decidual immune cells from RM patients compared with normal early pregnancy (Meggyes et al., 2019). In the current study, we first report reduced NCK1 and PD-L1 in decidual tissue and peripheral blood mononuclear cells (PBMC) from uRM patients compared with normal controls. Furthermore, we found that NCK1 may regulate trophoblast proliferation and migration, as well as PD-L1-mediated macrophage proliferation.

Patients

Human samples from patients with uRM (n=20) and normal pregnant women (n=20) in the first trimester were collected in the operating room of the outpatient family planning department of Shanghai First Maternity and Infant Health Hospital from June 2020 to June 2021. The study was conducted in accordance with the Declaration of Helsinki and approved by the Medical Ethical Committee of Shanghai First Maternity and Infant Hospital. Informed consent was obtained from all subjects involved in the study. Clinical characteristics of the women in the two groups are summarized in Table 1. Patients with a history of more than two consecutive pregnancy losses of unknown cause were included in the uRM group. Exclusion criteria for the patients with uRM included fetal and parental chromosomal disorders, infectious diseases, genital tract malformation, autoimmune disorders, endocrinological dysfunction and deficiencies in coagulation factors. Women with a normal early intrauterine pregnancy were included in the control group. The women in the two groups had no history of exposure to sexually transmitted infection, no smoking or drinking habits, no exposure to radioactive substances or toxic and harmful chemicals during early pregnancy, and no chronic diseases.
Tissue collection
Peripheral blood mononuclear cells (PBMC) and aborted tissue (villous tissue and decidual tissue) were collected from patients with uRM and from normal pregnant women undergoing elective induced abortion in the first trimester. Informed consent was obtained from each patient before the study. Peripheral blood was collected from the outpatient biochemical laboratory and used for the extraction of PBMC. After induced abortion, most of the villous and decidual tissues were naturally separated. Trophoblast-rich villous tissues were identified among the aborted tissues, washed and collected; the remainder was decidual tissue. After residual villous tissue and blood clots on the surface of the decidual tissues were removed, the decidual tissues were collected. The collected samples of villous and decidual tissues were then stored in liquid nitrogen.

Western blotting
Total protein was extracted using RIPA buffer (Sigma, USA), and protein concentration was determined using a BCA protein kit (Thermo, USA). 5 µg of each protein sample was loaded onto a sodium dodecyl sulfate-polyacrylamide gel for electrophoresis (SDS-PAGE). The proteins separated in the gel were transferred to a 0.22 µm Immobilon-P membrane (Millipore, USA), and the membrane was blocked in 5% skim milk (Beyotime, China) for 2 h at room temperature. Membranes were incubated with NCK1 (Abcam, USA), PD-L1 (Abcam, USA) and GAPDH (Abways, China) antibodies overnight at 4 °C, and then incubated with horseradish peroxidase-conjugated goat anti-rabbit or goat anti-mouse secondary antibodies for 1 h at room temperature. Protein bands were detected using ECL reagent (Tanon, China).

Real-time quantitative reverse transcription-polymerase chain reaction (RT-qPCR)
cDNA from human PBMC samples was prepared using a Takara reverse transcription kit, and mRNA expression was determined on a real-time PCR machine (Life Technologies, USA) using a real-time PCR kit (Yeasen, China). All data were normalized to Actin expression. Primers were designed and synthesized according to the NCBI gene sequences and are listed in Table 2.

Fluorescence immunohistochemistry
The decidual tissues were embedded in OCT (Sakura, USA), wrapped in tin foil, frozen in liquid nitrogen, and sectioned using a fast freezing microtome. Sections were fixed with 4% paraformaldehyde, washed and blocked with 5% BSA, and sequentially incubated with a primary antibody for 2 h and a secondary antibody for 1 h. After adding DAPI (Abcam, USA), they were photographed using a fluorescence microscope (Leica, TCS SP8 CARS).

Cell proliferation assay
HTR-8/SVneo cells under different interventions were plated at 2000 cells per well in a 96-well plate and cultured in a 37 °C cell incubator for 24 h, 48 h, and 72 h. Cell proliferation was measured using a Cell Counting Kit-8 (CCK-8; Beyotime, China) according to the manufacturer's instructions. The absorbance of each sample was measured at a 450 nm wavelength in a microplate reader.

Co-culture of HTR-8/SVneo and THP-1 cells
5 × 10^5 HTR-8/SVneo cells under different interventions were plated in 24-well plates, and 5 × 10^5 THP-1 cells were added to the 24-well plates the next day. After co-culture at 37 °C for 24 h, 48 h, and 72 h, the cell suspension containing THP-1 cells was aspirated and transferred to a 96-well plate, and the absorbance at 450 nm was measured after Cell Counting Kit-8 (CCK-8; Beyotime, China) was added.
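The paper states only that RT-qPCR data were normalized to Actin, without giving the quantification formula. As a minimal sketch, assuming the widely used 2^-ΔΔCt method, the normalization could be computed as below; the Ct values and replicate counts in the code are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical Ct (cycle threshold) values, three technical replicates each.
ct_nck1_control = np.array([24.1, 24.3, 23.9])   # NCK1, normal pregnancy PBMC
ct_actin_control = np.array([17.0, 17.2, 16.9])  # Actin, same samples
ct_nck1_urm = np.array([26.0, 25.8, 26.2])       # NCK1, uRM PBMC
ct_actin_urm = np.array([17.1, 16.8, 17.0])      # Actin, same samples

# Delta-Ct: normalize the target gene to Actin within each sample.
dct_control = ct_nck1_control - ct_actin_control
dct_urm = ct_nck1_urm - ct_actin_urm

# Delta-delta-Ct: compare each uRM sample with the mean of the control group.
ddct = dct_urm - dct_control.mean()

# Relative expression (fold change) via 2^-ddCt; values < 1 mean reduced expression.
fold_change = 2.0 ** (-ddct)
print(f"Mean NCK1 fold change in uRM vs control: {fold_change.mean():.2f}")
```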
Cell migration assay
Transwell 24-well plates (Corning, USA) were used for the cell migration assay. 200 μl of serum-free cell culture medium containing 1 × 10^5 cells was added to each upper chamber of the Transwell plate, and 500 μl of medium with 10% FBS was added to each well of the lower chamber. The cells were incubated for 24 h in a cell incubator. Cells on the bottom of the upper chamber were fixed with 4% paraformaldehyde and stained with crystal violet; non-migrated cells were removed with a cotton swab, and the migrated cells were then photographed under an inverted microscope.

Statistical analysis
All data analyses were performed, and graphs were drawn, using GraphPad Prism 8.0 software. Results are displayed as mean ± SD. The significance of differences between groups was calculated by Student's t-test; P < 0.05 was considered statistically significant.

Results
We collected villous and decidual tissues from normal pregnant women and patients with uRM, and found no statistical difference in age between the two groups; however, the length of pregnancy was significantly longer in patients with uRM than in normal pregnant women. Table 1 shows the ages of the pregnant women (years) and the lengths of pregnancy (days). The expression of NCK1 and PD-L1 proteins (Figure 1A-C) in PBMC of early normal pregnant women was significantly higher than in patients with uRM. In addition, the expression of NCK1 and PD-L1 mRNA (Figure 1D,E) in PBMC of early normal pregnant women was also significantly higher than in patients with uRM. Moreover, the expression of NCK1 and PD-L1 proteins in decidual tissues of early normal pregnant women was significantly higher than in patients with uRM (Figure 2A-D). However, the expression of NCK1 protein in villous tissues of early normal pregnant women was not significantly different from that of patients with uRM (Figure 2E,F). In addition, fluorescence immunohistochemistry showed that the expression of NCK1 and PD-L1 proteins in the decidual tissues of early normal pregnant women was higher than in patients with uRM (Figure 3A-D).

siRNA was successfully transfected into HTR-8/SVneo cells, and we assessed HTR-8/SVneo proliferation via CCK-8 assay. The results showed that HTR-8/SVneo proliferation was significantly decreased when NCK1 was knocked down (Figure 4).

We explored the effect of NCK1 on the migration of trophoblast cells via Transwell migration assay. The HTR-8/SVneo cells under different interventions were cultured in the Transwell upper chamber for 24 hours, and the changes in cell migration ability were observed under a microscope (Figure 5A). We found that the number of migrated cells was significantly lower in the HTR-8-siNCK1 group than in the HTR-8-NEG group (Figure 5A,B).

The expression of NCK1 and PD-L1 proteins in the HTR-8-siNCK1 and HTR-8-NEG groups was detected via Western blotting. The results showed that, compared with the HTR-8-NEG group, the expression of NCK1 and PD-L1 proteins in the HTR-8-siNCK1 group was greatly decreased (Figure 6A-C).

After co-culturing THP-1 cells with HTR-8/SVneo cells under different interventions, we measured the proliferation of THP-1 cells via CCK-8 assay. The results showed that the proliferation of THP-1 cells co-cultured with the HTR-8-siNCK1 group was significantly higher than that of THP-1 cells co-cultured with the HTR-8-NEG group (Figure 7).
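The group comparisons reported above were run in GraphPad Prism with Student's t-test. A minimal Python equivalent is sketched below; the OD450 readings are hypothetical placeholders, not measurements from the study.

```python
from scipy import stats

# Hypothetical CCK-8 absorbance (OD450) at 72 h, three wells per group.
neg = [1.21, 1.18, 1.25]        # HTR-8-NEG (negative-control siRNA)
si_nck1 = [0.86, 0.91, 0.83]    # HTR-8-siNCK1 (NCK1 knockdown)

# Two-sample Student's t-test, as in the Statistical analysis section.
t, p = stats.ttest_ind(neg, si_nck1)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 is considered significant
```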
Discussion
RM seriously affects the physical and mental health of pregnant women. Since the causes of miscarriage are poorly understood, no effective treatment has emerged. Thus, it is of great significance to explore the pathological mechanism in order to prevent and treat uRM. In recent years, studies have confirmed that NCK1 is highly expressed in some tumors and is associated with enhanced tumor proliferation, migration and invasion (Liu et al., 2020; He et al., 2022). The similarity between embryo implantation and tumor development is widely recognized (Mor et al., 2017). Thus, the study of NCK1 in tumors may expand our knowledge of uRM. We included 20 normal pregnant women and 20 patients with uRM in this study. Abortions in the control group were performed earlier than those in the uRM group. This difference may be due to delayed artificial abortion in patients with uRM, who try to maintain their pregnancy through continuous treatment. Although the samples from the elective abortion group were obtained a little earlier (approximately 10 days) than those from the uRM group, we believe this difference did not interfere with the results obtained, considering that both fall within the first trimester of pregnancy. Our results showed that protein and mRNA expression of NCK1 and PD-L1 was significantly reduced in PBMC from uRM patients compared with controls, implying that reduced NCK1 in PBMC may become a new predictor for uRM. Moreover, in immunofluorescence reactions, the tissue levels of both PD-L1 and NCK1 were reduced in the decidua, corroborating the Western blotting data on NCK1 and PD-L1 protein expression in the decidua. We did not find differences in NCK1 protein expression between villous tissues from patients with uRM and controls. Based on these results, we proposed the hypothesis that NCK1 and PD-L1 might be intrinsically linked.
HTR-8/SVneo is a cell line commonly used to explore trophoblast function (Zhang R et al., 2022; Zhang D et al., 2023), and in this study we confirmed via Western blotting that HTR-8/SVneo expresses NCK1. Thus, to further explore the possible mechanism of NCK1 in uRM, we selected the trophoblast cell line HTR-8/SVneo. We used siRNA to knock down the expression of NCK1 in HTR-8/SVneo cells. We then examined the effect of NCK1 on trophoblast proliferation using the CCK-8 assay and found that NCK1 promoted trophoblast proliferation. Moreover, we examined the effect of NCK1 on trophoblast migration using the Transwell migration assay and found that NCK1 promoted trophoblast migration. In HTR-8/SVneo cells, when NCK1 was knocked down, the expression of PD-L1 protein was significantly reduced compared with the negative control. This result suggested a regulatory interaction between NCK1 and PD-L1; however, the exact regulatory mechanism has not been reported. In first-trimester pregnancy, some cytotrophoblasts migrate towards the decidua, and invasive cytotrophoblasts interact with immune cells in the decidua (Zhang D et al., 2022). Moreover, previous studies have found that macrophages are the second most abundant immune cell in the decidua and that dysregulated interaction between macrophages and trophoblasts might be a cause of RM (Zhang D et al., 2022). Thus, we explored the effect of NCK1 on the proliferation of macrophages at the maternal-fetal interface using co-cultures of THP-1 cells and differently treated HTR-8/SVneo cells. We found that the proliferation of macrophages was significantly increased in the HTR-8-siNCK1 group. Previous studies have revealed that PD-L1 can regulate macrophage proliferation (Hartley et al., 2018; Zhang Y et al., 2019). Taken together, these results suggest that NCK1 may regulate PD-L1-mediated macrophage proliferation. Previous studies have reported that RM is closely associated with immune dysfunction, such as an imbalance of macrophage polarization and dysfunction of macrophage cytokine secretion (Zhao et al., 2022). Thus, enhanced proliferation of macrophages may be detrimental to pregnancy maintenance by causing immune dysfunction at the maternal-fetal interface.

Overall, in this study, we found that reduced NCK1 may be involved in RM by inhibiting trophoblast proliferation and migration, as well as by enhancing PD-L1-mediated macrophage proliferation at the maternal-fetal interface (Figure 8). However, the exact regulatory mechanism between NCK1 and PD-L1 is still unclear. Moreover, NCK1 has the potential to be a new predictor and therapeutic target.

Figure 1 - Protein and mRNA expression of NCK1 and PD-L1 in PBMC of normal pregnant women and patients with uRM. (A-C) Expression of PD-L1 and NCK1 proteins in PBMC of normal pregnant women and patients with uRM detected by Western blotting. For data analysis, the Western blotting was repeated three times. (D-E) mRNA expression of NCK1 and PD-L1 in PBMC of normal pregnant women and uRM patients (N1-N9 were normal pregnant women; R1-R9 were patients with uRM) detected by RT-qPCR. (*p<0.05, **p<0.01).
Figure 2 - Expression of NCK1 and PD-L1 proteins in decidual and villous tissues of early normal pregnant women and patients with uRM. Expression of NCK1 (A, B) and PD-L1 (C, D) proteins in decidual tissues of early normal pregnant women and patients with uRM (N1-N6 were normal pregnant women; R1-R6 were patients with uRM) was detected by Western blotting. Expression of NCK1 protein (E, F) in villous tissues of normal pregnant women and patients with uRM was detected by Western blotting (N1-N6 were normal pregnant women; R1-R6 were patients with uRM). (**p<0.01, ***p<0.001).

Figure 3 - Fluorescence immunohistochemistry in decidual tissues. Fluorescence immunohistochemistry showed the expression of NCK1 (A) and PD-L1 (B) proteins in decidual tissues of early normal pregnant women, and the expression of NCK1 (C) and PD-L1 (D) proteins in the decidual tissue of patients with uRM. Purple is the nuclear staining and red is the target protein staining.

Figure 8 - Possible mechanisms of NCK1 involvement in recurrent miscarriage. Reduced NCK1 on trophoblast cells may be a cause of recurrent miscarriage. On the one hand, reduced NCK1 inhibits the migration and proliferation of trophoblast cells; on the other hand, reduced NCK1 promotes macrophage proliferation by attenuating PD-L1 expression, ultimately leading to an immune imbalance at the maternal-fetal interface.

Table 1 - Clinical characteristics of normal pregnant women and patients with uRM.

Table 2 - List of primers used for RT-qPCR.
Changes in Vegetation and Geomorphological Condition 10 Years after Riparian Restoration

Riparian restoration is an important objective for landscape managers seeking to redress the widespread degradation of riparian areas and the ecosystem services they provide. This study investigated the long-term outcomes of 'one-off' restoration activities undertaken in the Upper Murrumbidgee Catchment, NSW, Australia. The objective of the restoration was to protect and enhance riparian vegetation and control erosion, and consequently reduce sediment and nutrient delivery into the Murrumbidgee River. To evaluate the outcomes 10 years after restoration, rapid riparian vegetation and geomorphological assessments were undertaken at 29 sites spanning the four different restoration methods used (at least five replicates per treatment), as well as at nine comparable untreated sites. We also trialed the use of aerial imagery to compare the width of riparian canopy vegetation and projective foliage cover prior to restoration with that observed after 10 years. Aerial imagery demonstrated that the width of riparian canopy vegetation and projective foliage cover increased at all restored sites, especially those with native plantings. The rapid assessment process indicated that, 10 years after riparian restoration, the riparian vegetation was in better condition at treated sites than at untreated sites. Width of riparian canopy vegetation, native mid-storey cover, native canopy cover and seedling recruitment were significantly greater at treated sites compared to untreated sites. Geomorphological condition of treated sites was significantly better than that of untreated sites, demonstrating the importance of livestock exclusion in improving bank and channel condition. Our findings illustrate the value of 'one-off' restoration activities in achieving long-term benefits for riparian health. We have demonstrated that rapid assessments of vegetation and geomorphological condition can be undertaken post-hoc to determine long-term outcomes, especially when supported with analysis of historical aerial imagery.

Introduction
Riparian zones encompass the interface between aquatic and terrestrial ecosystems [1]. The vegetation in riparian zones is critical for river health, as it traps sediments, slows water movement and increases water infiltration and nutrient cycling [1,2]. Riparian vegetation health in turn influences in-stream hydraulic processes [3], and larger-scale fluvial and morphological river processes [4]. Globally, widespread modification of riparian vegetation has degraded many riparian zones [5], affecting geomorphological processes [6] and reducing their functional efficiency [7,8]. In many parts of Australia, agricultural land use has led to extensive modification and degradation of riparian vegetation [9]. Livestock are one of the main contributors to this degradation, along with vegetation clearing. The presence of livestock can significantly degrade riparian vegetation [10,11]. The direct effects of livestock in riparian zones include the erosion and compaction of river banks [10,11].

Riparian Restoration Program
Between 2000 and 2004, a large-scale publicly funded riparian restoration project (the Bidgee Banks Restoration Project: hereafter BBRP) was undertaken across the Upper Murrumbidgee River Catchment (UMRC). The BBRP was a collaborative on-ground restoration partnership between the NSW Government, Greening Australia (a conservation Non-Government Organisation), and private landholders.
The aim of the BBRP was to reverse vegetation loss and deteriorating water quality in the UMRC by protecting and rehabilitating degraded riparian areas [33]. An initial assessment identified 104 sites across the UMRC as potential sources of high sediment and nutrient loads due to poor riparian vegetation and active erosion, and thus these sites became the priorities for restoration in the BBRP [34]. Implementation of the BBRP involved a single ('one-off') restoration event using one of four restoration methods: (1) fence only; (2) fence and direct seed; (3) fence and tubestock (planting seedlings); and (4) fence, direct seed and tubestock (Table 1). The 104 sites were assigned one of the four restoration methods based on an initial site evaluation. Sites devoid of vegetation were fenced and planted and/or direct seeded, while sites with remnant vegetation were often only fenced. Thus, the restoration methods (i.e., treatments) applied during the BBRP were not randomly applied. While initial assessments considered the BBRP to be successful, based on the strong community-government partnership and high seedling survival in spite of severe drought conditions during and following project implementation [29,35], further evaluations were needed to determine the longer-term outcomes.

Evaluation 10 Years Post Restoration
In April/May 2014, 10 years post restoration, 29 of the 104 BBRP-restored sites were revisited to determine the current vegetation and geomorphological condition. We could only sample a subset of the original sites because permission to access some sites could not be obtained. Priority was given to ensuring that a representative sample of the four restoration methods (Table 1) and the original site distribution (Figure 1) was encompassed during the selection of the 29 sites. Whilst this resulted in some clustering of sites, they were at least 1000 m apart. As the BBRP did not include baseline data, it was impossible to ensure the sites were representative of the range of initial conditions. Thus, the fact that we sampled a pre-existing study and have incomplete information about the original site conditions (see above) needs to be considered when interpreting our evaluation of the outcomes. Nine untreated sites from within the original BBRP area were added to the sampling in 2014 because untreated reference sites were not part of the BBRP. These untreated sites were selected to provide a proxy benchmark for degraded sites.
Advice from the Greening Australia (Capital region) BBRP manager was used to guide the selection of untreated sites towards those of a similar character to sites which would have been targeted for restoration as part of the BBRP. These untreated sites were selected based on the following criteria: (a) representative of the vegetation condition and type used in the BBRP, with no evidence of prior restoration; (b) livestock could readily access the riparian zone; (c) the type of livestock encompassed the two main species farmed in the region (cattle (n = 4 sites) and sheep (n = 5 sites)); (d) there was evidence of active bank and gully erosion; and (e) the presence/absence of remnant vegetation (Table 1). While the authors attempted to select representative untreated sites, it is acknowledged that the condition of these sites may differ from the original condition of the treated sites prior to restoration.

Assessing Vegetation Condition
The condition of the riparian vegetation was evaluated using a combination of field assessments and analysis of aerial photographs. The riparian vegetation condition was assessed using the Rapid Appraisal of Riparian Condition method (RARC: Jansen et al. [36]), which was initially developed as a tool to determine the impacts of grazing management practices on riparian condition in NSW [37] and has since been used to determine riparian vegetation condition [9,38]. The RARC method is a field assessment which uses the following indicators: (1) longitudinal continuity of riparian canopy vegetation; (2) proximity to the nearest patch of intact native vegetation; (3) width of riparian canopy vegetation; (4) groundcover; (5) mid-storey cover; (6) canopy cover; (7) native groundcover; (8) native mid-storey cover; (9) native canopy cover; (10) leaf litter; (11) native leaf litter; (12) standing dead trees; (13) hollow-bearing trees; (14) coarse woody debris; (15) mid-storey species recruitment (i.e., seedlings <1 m tall); (16) canopy species recruitment (i.e., seedlings <1 m tall); and (17) abundance of native tussock grasses and reeds. These indicators provide an overall appraisal of the vegetation condition at a site, which collectively provides a RARC score (out of 50), with a healthy site having a score of 43 [37]. Data for the RARC indicators longitudinal continuity and proximity were given single values for the whole site, while all other indicators were assessed along four transects positioned perpendicular to the channel and evenly spaced across the site, following the methods established by Jansen et al. [36].

We used historical aerial imagery to retrospectively assess the sites to determine the baseline condition. Aerial photographs have previously been used to assess changes in riparian vegetation before and after a management action within riparian areas [39]. We obtained two series of digital aerial photographs from the NSW Land and Property Information Department (2014) for each site (derived from GPS coordinates of the sites). The first series of images was taken prior to restoration (between 1996 and 2000), and the second corresponded to the 2014 field evaluations. Each site was located on the aerial photographs using distinguishing features like streamlines, trees and fence lines, supported by field observations. On each of these two sets of aerial site images (i.e., (1) prior to restoration (baseline 1996-2000) and (2) post restoration (2014)), two RARC indicators were recorded: (a) canopy cover, for 26 sites encompassing the following restoration methods: 6 untreated, 4 fence only, 5 fence and direct seed, 5 fence and tubestock, and 6 fence, direct seed and tubestock; and (b) width of canopy vegetation, for 16 sites encompassing the following restoration methods: 4 untreated, 2 fence only, 3 fence and direct seed, 4 fence and tubestock, and 3 fence, direct seed and tubestock. The 16 sites outlined in (b) were a sub-set of the 26 sites in (a). Measurements for only these two RARC indicators could be readily extracted from the aerial images. Unfortunately, some sites could not be assessed because of the quality of the images or missing site coverage. Width of canopy vegetation was calculated using a digital ruler to measure transect lengths. Canopy cover was calculated from the total projective foliage cover (using defined site boundaries) based on the tones of woody vegetation in each image, using the image recognition software WinDIAS 3.2, an image analysis program that measures leaf area. Through this process, we could determine the visible vegetation changes that occurred at each site. Aerial imagery has previously been used to measure canopy cover and width of riparian canopy vegetation [40].
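WinDIAS is a commercial package, and the paper does not describe its internal classification routine. Purely as an illustration of what a tone-based projective foliage cover estimate involves, the sketch below thresholds darker woody-vegetation tones in a grayscale aerial image; the file name and threshold value are assumptions for illustration and would need calibration against field data.

```python
import numpy as np
from PIL import Image

# Load one aerial site image and convert it to grayscale ("L" mode).
img = np.asarray(Image.open("site_2014.png").convert("L"))  # hypothetical file

# Woody canopy tends to appear as darker tones; pixels at or below the
# cutoff are classified as canopy. The cutoff here is a placeholder value.
THRESHOLD = 90
canopy = img <= THRESHOLD

# Projective foliage cover = canopy pixels as a fraction of the site image.
cover_pct = 100.0 * canopy.mean()
print(f"Projective foliage cover: {cover_pct:.1f}%")
```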
Assessing Geomorphological Condition
Stream geomorphological condition was assessed using a modified version of the Ephemeral Stream Assessment method (ESA: Machiori et al. [41]), a field assessment method used to estimate bank stability and attributes of erosion. Seven indicators were used: (1) vegetation on the drainage-line floor; (2) vegetation on the drainage-line walls; (3) particle size of materials available for erosion; (4) longitudinal morphology of the drainage-line; (5) nature of drainage-line materials; (6) shape and aspect ratio of the drainage-line cross-section; and (7) lateral flow regulation, which together provide an overall geomorphological condition assessment. The original ESA method incorporates an eighth indicator (the shape of the stream bordering flat land and/or slopes), which was not used here as restoration actions do not affect this indicator. Data for each of the seven indicators were collected every 25 m along the drainage line of each site using transects, following the methods established by Machiori et al. [41]. The maximum achievable ESA score is 1. The image quality of the aerial photos was not of sufficient resolution to assess the geomorphological condition, and thus comparisons between 2004 and 2014 could not be undertaken.

Data Analysis
Normally distributed data were tested using a factorial analysis of variance (ANOVA). Where possible, non-normally distributed data were transformed to meet the assumptions of an ANOVA. Significant results were tested with a Tukey-Kramer multiple comparison test to identify the source of significance at <0.05. A non-parametric Kruskal-Wallis analysis of variance was performed on the RARC indicators hollow-bearing trees, coarse woody debris, mid-storey recruitment, and canopy species recruitment, and on the seven ESA indicators, as these metrics have ordinal data. Error estimates represent one standard error (SE) from the mean.
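As a minimal sketch of the analysis pipeline described above (ANOVA, a Tukey post-hoc test, and Kruskal-Wallis for the ordinal indicators) in Python: the RARC scores below are hypothetical placeholders, and the factorial design of the study is simplified here to a one-way layout.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RARC scores (out of 50) for four groups of sites.
scores = {
    "untreated":       [11, 13, 12, 14, 12],
    "fence_only":      [20, 22, 19, 23, 21],
    "fence_seed":      [21, 24, 22, 20, 23],
    "fence_tubestock": [22, 21, 24, 23, 22],
}

# One-way ANOVA across groups for (approximately) normal data.
f, p = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Tukey post-hoc comparison to locate the significant pairs at alpha = 0.05.
values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Kruskal-Wallis for ordinal indicators (e.g., coarse woody debris classes).
h, p_kw = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
```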
Riparian Vegetation Assessment
The mean riparian vegetation condition (RARC) score across all restored sites and restoration methods was significantly greater at 21.3 ± 1.3SE 10 years after restoration, compared to 12.4 ± 1. for the untreated sites.

Before and after Restoration from Aerial Imagery
There was an overall increase in the width of the riparian vegetation 10 years after restoration across all restoration methods (Figure 4), which, on average, doubled following restoration (Figure 4a). The greatest increase in the width of the riparian vegetation occurred at sites where fencing and direct seeding were used, and the lowest increase was for fence, direct seed and tubestock (Figure 4b). The untreated sites showed no apparent change in the width of riparian vegetation over the same 10-year period (Figure 4b). The mean projected foliage cover almost doubled following restoration (Figure 5a). The increase was observed to a greater extent where active restoration (tubestock planting and direct seeding) methods were used (Figure 5b). Projected foliage cover did not change in untreated sites over the same timeframe (Figure 5b). There was a strong positive correlation between the width of canopy vegetation measured from the aerial photographs and the field assessments (R² = 0.73, F(1,10) = 27.66, p < 0.001). There was also a positive correlation between the projective foliage cover measured from aerial imagery and the field assessment of canopy cover (R² = 0.45, F(1,17) = 13.85, p = 0.002), which strengthened once field assessments of canopy cover and mid-storey cover were combined (R² = 0.49, F(1,17) = 15.99, p < 0.001).
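The aerial-versus-field correlations reported above are simple linear regressions; an equivalent computation is sketched below, with hypothetical paired width measurements standing in for the study's data.

```python
from scipy import stats

# Hypothetical paired widths (m): aerial-photo measurement vs field survey.
aerial = [8.5, 12.0, 15.5, 19.0, 22.5, 26.0, 30.5, 34.0]
field = [9.0, 11.5, 16.0, 18.5, 23.5, 25.0, 29.0, 35.0]

# Ordinary least-squares fit; R^2 is the squared correlation coefficient.
fit = stats.linregress(aerial, field)
print(f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f}")
```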
Geomorphological Assessment
The mean ephemeral stream assessment (ESA) scores across all restoration methods were significantly higher at 0.69 ± 0.01SE, compared to 0.52 ± 0.03 for untreated sites (F = 8.45, df = 4,33, p < 0.001: Figure 6). While there was no significant difference between restoration methods (Figure 6), differences were observed among the seven individual ESA indicators (Figure 7), five of which were significantly higher for the treated sites than for the untreated sites, including vegetation on the drainage-line floor (χ² = 6.18; p = 0.012: Figure 7a).

Discussion
Despite the restoration of riparian zones being a major component of river management [21], there have been limited long-term evaluations of the outcomes of such actions, especially involving multiple sites and comparisons across restoration methods [42]. While implementing appropriate monitoring for current restoration programs can address these issues in the future, understanding the legacy of past restoration programs is needed. The retrospective assessments of the BBRP that we undertook using rapid appraisal methods successfully illustrated the changes that occurred at these sites, especially when combined with (a) assessments from untreated sites, and (b) reconstruction and assessment of the baseline from aerial imagery of the restored sites prior to restoration.
This study shows that these approaches can successfully be used to retrospectively assess prior restoration programs. Sites restored as part of the BBRP had better riparian vegetation and geomorphological condition compared to the untreated sites. Analysis of aerial imagery before and 10 years after restoration demonstrated improvements in projective foliage cover and an increase in the width of riparian vegetation at restored (treated) sites, while no change occurred at untreated sites. The BBRP appears to be tracking towards meeting its objective of reversing vegetation loss. The width of the riparian area 10 years after restoration was largely defined by the width of the fenced area, with riparian vegetation truncated at the fence line. Naiman et al. [43] suggested that a seven metre riparian vegetation buffer strip is adequate to provide bank stability, and Wenger [44] showed that a 30 metre wide riparian zone is sufficient to trap sediment. The average riparian vegetation width observed in the BBRP was 19.2 metres, which is nearly double that prior to restoration but just short of the initial aim of a minimum 20 metre wide riparian buffer. While the width of the fenced area was, in part, determined by the amount of land the landholder was prepared to fence off, this highlights the value of establishing clear goals for riparian buffer width as part of the broader program. We were unable to evaluate the outcomes of the restoration for instream sediment and nutrient loads because site-scale water quality data were not available. It is likely, however, that the removal of livestock and the observed improvements in riparian vegetation condition have increased the ability of the riparian zone to filter and process nutrients (see [25,26]), and thus reduced sediment and nutrient loads in the rivers of the UMRC. The removal of livestock from the riparian zone in the BBRP was observed to improve vegetation condition, a result observed elsewhere; for example, Spooner and Briggs [45] found significantly more seedlings and shrub cover in fenced than unfenced areas and attributed this to the absence of herbivores. Seedling recruitment and mid-storey cover were much higher in restored (treated) sites (including fence-only sites) compared to the untreated sites in this study. The outcomes observed from combining active and passive restoration at degraded sites illustrate how targeting the method to the site can accelerate recovery [46]. The variability in outcomes of active restoration, however, may be attributed to the level of site degradation prior to restoration, as active restoration was applied to more degraded sites. The outcomes of direct seeding of natives were more variable than those of planting seedlings, and in many instances the combination of these two approaches resulted in a worse outcome than either approach individually. One possible explanation is that sites in very poor condition received the combined restoration method (i.e., both tubestock planting and direct seeding) based on prior evaluation [34], and despite such efforts these highly degraded sites may require additional investment in both time (i.e., more than a 'one-off' event) and resources (i.e., additional plantings). The variability in outcomes between sites highlights the need for monitoring. Funding for restoration activities in Australia is frequently in the form of 'one-off' investments enabling a single treatment.
This study clearly shows the value of a 'one-off' restoration treatment, but it is possible that greater benefit could be achieved if multiple restoration works were undertaken. However, it is not clear whether the marginal benefit would be worth the additional investment, which would come at the expense of treating another, untreated site. This would be an area for future research. While we showed that restored (treated) sites had a better riparian vegetation condition score 10 years after restoration compared to untreated sites, the scores recorded were still less than half that of healthy sites (i.e., a RARC score of 43 [37]). The 10-year timeframe appears sufficient for changes in indicators such as width of riparian canopy vegetation and canopy cover to occur, similar to the findings of Hale et al. [25] over a comparable timeframe. The timeframe, however, may be insufficient to produce measurable changes in leaf litter, hollow-bearing trees and coarse woody debris, which reflect the presence of mature vegetation [47]. Mature trees contribute litter, hollows and woody debris to riparian zones, and their replacement is important for ecological restoration; it has been suggested that indicators for litter, hollows and woody debris could take between 50 and 100 years to reach 'healthy' levels [10,48]. There was a significant difference in geomorphological condition between restored (treated) and untreated sites. The exclusion of livestock through fencing appears to be a major contributor to geomorphological condition (as observed elsewhere [6,26,49]), as sites in all restoration treatments demonstrated better geomorphological condition (regardless of the inclusion of tubestock or direct seed) than untreated sites. This was especially evident for the index representing the shape of the cross-section of the bank (i.e., a measure of bank stability). Reduced stock movement in accessing the stream has lessened the direct effects of livestock on the geomorphological condition (such as trampling and loosening of soil) [11], and the geomorphological condition appears to be improving. The presence of mature vegetation contributes to the geomorphological condition of the riparian zone by maintaining bank stability [50] and increasing inputs of organic matter and debris. Many of the BBRP sites restored with passive restoration (i.e., fence only) contained higher levels of remnant vegetation before restoration occurred (i.e., higher canopy cover and width of riparian canopy vegetation). This different starting point was still evident 10 years after restoration, with higher scores for the ESA metrics vegetation on the drainage-line wall, nature of drainage-line materials, and shape of the cross-section of the bank at sites treated with fence only compared to sites treated with active restoration. As discussed above, the absence of mature remnant vegetation may limit improvements in restoration outcomes in the short- to medium-term (i.e., until it can be re-established on site). Thus, highly degraded sites (i.e., with no or limited remnant vegetation) may experience substantial lags in achieving a healthy site assessment following restoration. The results from our untreated sites illustrate the current poor condition of both the riparian vegetation and geomorphology in the presence of livestock, a finding reported elsewhere [10,37]. Given the RARC and ESA scores observed, these untreated sites are unlikely to provide the 'normal' riparian ecosystem functions of sediment trapping, nutrient cycling and flood mitigation [2,8].
Moreover, the presence of such sites across the catchment shows that, despite restoration actions and the successes outlined here after 10 years, improving the riparian zones of the Murrumbidgee River and its catchment requires additional resources and effort.

Conclusions
One of the common challenges in evaluating long-term outcomes from restoration programs is a lack of pre-assessment data [42]. While such challenges can lead to inaction on long-term evaluations, our results show that alternatives can be found. Successful retrospective evaluations of vegetation using historical aerial imagery (especially when combined with image analysis software) can overcome such data shortfalls (including a lack of control sites). The changes in canopy cover and width of riparian vegetation that we observed were sufficient to aid management decisions and provide evaluations of programs in the absence of other assessments. In principle, aerial imagery could also be used to assess channel bank erosion using orthophotos, and the increasing availability of satellite imagery will provide better options for future evaluations, particularly as image and spectral data resolution improves. The improvements in riparian and geomorphological condition at sites restored as part of the BBRP are encouraging and are testament to the hard work and planning undertaken by the project managers, the ongoing maintenance by the landholders, and the investment of public funds. This study demonstrates the value of 'one-off' restoration actions, but also highlights the need for monitoring and project evaluation to identify sites where further work may be required. A 'healthy' riparian site may be an unrealistic 10-year target when restoring degraded sites, depending on the starting condition. However, our results show that 10 years after restoration, the restored sites are on an improving trajectory, and that successful riparian restoration is being achieved using a range of approaches tailored to site conditions.
Ganglionic Cyst of Periosteal Origin Mimicking Pes Anserine Bursitis – A Rare Entity and Literature Review

Introduction: A periosteal ganglion is a form of cystic swelling commonly seen around the long bones of the lower extremities. Case Report: A 55-year-old male presented to the outpatient clinic with a gradually increasing swelling over the anteromedial aspect of the right knee joint, with intermittent pain on prolonged standing and walking, for 8 months. Magnetic resonance imaging (MRI) was suggestive of a ganglionic cyst, which was later confirmed by histopathological examination (HPE). Conclusion: Ganglionic cyst of periosteal origin is a rare entity. Complete excision is the recommended treatment option; if not performed appropriately, the chances of recurrence are high.

Thereafter, MRI of the knee joint was performed, which showed a well-defined cystic lesion with internal septations arising from the medial aspect of the tibia, with soft-tissue extension, measuring around 4 × 3 cm (Fig. 2). Informed consent was taken before starting treatment. Preoperative examination showed a cystic swelling on the anteromedial aspect of the right knee, fixed to the underlying bone; there was no restriction of range of motion. Excisional biopsy was therefore planned. The patient was laid supine in the operation theater, neuraxial anesthesia was given, painting and draping were done, and the tourniquet was inflated to 320 mm Hg. A lazy-S skin incision was made over the anteromedial aspect of the knee, and subcutaneous dissection was carried out with complete en bloc excision, which revealed a cystic mass arising from the periosteum of the proximal tibia. Integrity of the medial collateral ligament was confirmed after the procedure (Fig. 3 and 4). A thorough wash was given, followed by closure in layers; range of motion and weight-bearing were allowed from the next day onwards. The excised tissue measured around 4.2 × 3 × 1 cm. It was sent for HPE, Gram staining, and fungal culture (Fig. 5). The diagnosis was confirmed by the HPE report, which showed gelatinous material within the cyst along with pseudo-synovial cells, and periosteum with fibrous tissue on the outer aspect of the cavity, suggestive of a ganglionic cyst of periosteal origin (Fig. 6). Knee joint range of motion was normal at the final follow-up. The patient was followed for 1 year postoperatively, and the period was uneventful.

Discussion
Around the 18th century, Poncet and Olliers coined the term "periostitis albuminosa" for ganglionic cyst of periosteal origin. Although rare, it occurs most commonly around the proximal tibia in the pes-anserinus region, although other locations have also been reported [6,7]. We searched PubMed and Google Scholar with the following key words: "periosteal ganglionic cyst" and "long bone". We included search results from 2012 to 2022 and considered all types of studies. From the search results, we found five relevant case reports. Of the five cases, four were male [7,8,9,10] and one female [11], aged 35-62 years (mean age 49.6 years). The disease duration ranged from 1 month to 10 years. Long bones were involved in all cases except one, in which iliac crest involvement was noted. The most commonly noted symptom was swelling, which was progressive over time. In all cases, surgery for cyst removal was successful, except in one case in which aspiration was performed. Complete details of onset, signs, symptoms, local examination, radiological findings, HPE findings, and follow-up results are shown in Table 1.
The studies [8,9,11] in the literature review showed involvement of some rare locations, such as the femoral condyle, iliac crest, and distal end of the radius. This entity shows a slight male predominance and is seen most commonly in the 4th-5th decades of life, as in the present case [12]. Although the etiology of this rare condition is still unknown, some suggest trauma and repetitive injury as probable causes [3]. In the present case and the literature review above, a prior history of trauma was noted in only one case [11]. Periosteal ganglion typically presents as a diffuse cystic swelling with tenderness, soft-tissue extension, cortical erosion, and periosteal reaction [4,12]. Although several treatment modalities for periosteal ganglion are available, complete excision along with the surrounding periosteum is the recommended option. In this rare entity, the chances of recurrence are high if myxomatous degeneration continues in the connective tissues around the surgical area or if the communication between a nearby joint and the cyst is not removed properly [5,13].

Conclusion
Ganglionic cyst of periosteal origin is a benign cystic lesion commonly affecting the long bones of the lower limbs, and it has a favorable prognosis. Radiological investigations such as digital radiographs and MRI, along with HPE, are of value for proper diagnosis.

Table 1: Review of literature for periosteal ganglionic cyst.