1225470
pes2o/s2orc
v3-fos-license
Mosquitoes established in Lhasa city, Tibet, China Background In 2009, residents of Lhasa city, Tibet Autonomous Region (TAR), China reported large numbers of mosquitoes and bites from these insects. It is unclear whether this was a new phenomenon, which species were involved, and whether these mosquitoes had established themselves in the local circumstances. Methods The present study was undertaken in six urban sites of Chengguan district Lhasa city, Tibet. Adult mosquitoes were collected by bed net trap, labor hour method and light trap in August 2009 and August 2012. The trapped adult mosquitoes were initially counted and identified according to morphological criteria, and a proportion of mosquitoes were examined more closely using a multiplex PCR assay. Results 907 mosquitoes of the Culex pipiens complex were collected in this study. Among them, 595 were females and 312 were males. There was no significant difference in mosquito density monitored by bed net trap and labor hour method in 2009 and 2012. Of 105 mosquitoes identified by multiplex PCR, 36 were pure mosquitoes (34.29%) while 69 were hybrids (65.71%). The same subspecies of Culex pipiens complex were observed by bed net trap, labor hour method and light trap in 2009 and 2012. Conclusion The local Culex pipiens complex comprises the subspecies Cx. pipiens pipiens, Cx. pipiens pallens, Cx. pipiens quinquefasciatus and its hybrids. Mosquitoes in the Cx. pipiens complex, known to be, potentially, vectors of periodic filariasis and encephalitis, are now present from one season to the next, and appear to be established in Lhasa City, TAR. Background Once established in high altitude regions, some mosquito species may threaten the health of humans and vertebrates due to their ability to transmit numerous diseases [1][2][3][4][5]. With a permanent resident population of 559, 423 at the 2010 Chinese Census, Lhasa city, which is the administrative capital of the Tibet Autonomous Region (TAR), China is situated on the northern bank of the Lhasa River, a tributary of the Yarlung Zangbo, in the mid-south of TAR. To the east and southeast of Lhasa are the regions of Nyingchi and Sharman; Nagqu neighbours Lhasa on the north and west; Xigaze lies on its southwest. Among a total of 29,518 square kilometers, the urban area of Lhasa is just 50 square kilometers. Standing on a plain over 3,650 meters (13,000 feet) above sea level and surrounded by towering mountains, Lhasa is known as the "city of the sun." With an annual average temperature of 7.5°C, its average temperature in January is 2.3°C and 15.4°C in July. The climate here is of the temperate plateau monsoon type. Lhasa has an annual precipitation of 426 millimeters with rain falling mainly in July, August and September. In China, Cx. pipiens complex consists of four subspecies [20], including Cx. pipiens pipiens, Cx. pipiens quinquefasciatus, Cx. pipiens pallens and Cx. pipiens molestus. Cx. pipiens quinquefasciatus cannot be considered as a separate species and Cx. pipiens pallens is not an intermediate form. Cx. pipiens molestus is present in the underground water system in Beijing and Shenyang, China [21]. The usual altitude of Cx. pipiens pipiens is lower than 3,000 m. In eastern Colorado, Cx. pipiens pipiens activity occurs primarily in the populated valleys at lower elevation, diminishing rapidly at higher levels (>3,000 m) [22]. Alvaro Diaz-Badillo et al. reported that Cx. pipiens pipiens, Cx. 
pipiens quinquefasciatus, and their hybrids were all present in Mexico City (2,200 m) [7]. In China, Cx. pipiens pipiens has been identified in Xinjiang Uygur Autonomous Region. Cx. pipiens quinquefasciatus occurs in areas south of 32°N. Xiaohong Sun et al. collected seventy-five Cx. pipiens quinquefasciatus in northeastern Yunnan Province (2,500-3,000 m) during 2005 and 2006 [23]. Cx. pipiens pallens is distributed in areas north of the Yangtse River [24]. The highest elevation at which Cx. pipiens pallens has been observed in China is 2,900 metres, in Mainling County, Nyingchi area, Tibet [25]. Identifying members of the Cx. pipiens complex and other sibling species by morphological methods is time-consuming and restricted to adult males [12,26]. Other techniques, such as allozyme analyses [17] and restriction fragment length polymorphism (RFLP) analysis of PCR products [27], only distinguish between the two major taxa of the complex: Cx. pipiens and Cx. quinquefasciatus. To solve this problem, Smith and Fonseca developed assays that use polymorphisms in the second intron of the acetylcholinesterase-2 (ace-2) locus to identify members of the Cx. pipiens complex and other sibling species. The same method may be used to detect introgression between Cx. pipiens and Cx. quinquefasciatus [12,27,28]. Extensive population-level examination of most of the species shows that they consistently generate unique fragments that may be easily resolved by electrophoresis on agarose gels. This method permits the rapid and reliable identification of local mosquitoes. In recent years, there have been numerous changes that might assist mosquitoes to reach Lhasa and become established there. These include: global warming [29][30][31][32][33][34][35], increasing international trade and tourism, population growth and mobility [36], transport improvements (such as completion of the Qinghai-Tibet Railway in 2006, the Qinghai-Tibet Highway, Sichuan-Tibet Highway and China-Nepal International Road, and the construction of the Gonggar Airport) [37], changing rainfall patterns [38], and developments in agriculture, urbanization and industrialization [39]. There are no official records to show whether mosquitoes existed in Lhasa city before 2009. In 2009, reports appeared in public media concerning the emergence of mosquitoes in Lhasa city. In addition, approximately 85.3 percent of local respondents said they were bitten by mosquitoes from the beginning of 2009 to the end of 2012, and almost one in 20 (4.5%) had to attend hospital for treatment of severe inflammation and local complications (Qiyong Liu et al., unpublished questionnaire survey in Lhasa in 2012). Therefore, this phenomenon is already perceived to be a serious public health problem. However, it is unclear which species of mosquitoes were involved, and whether these mosquitoes have indeed established themselves locally. This study was undertaken to test the media reports and to determine whether mosquitoes are now established in the city. The results provide the first scientific assessment of mosquitoes in Lhasa and provide a foundation for the development of measures to control mosquito-borne diseases in Lhasa in the future. Study sites The present study was undertaken in six urban sites of Chengguan district in August 2009 and August 2012. The sites were selected to be broadly representative of the geographic conditions and socio-economic characteristics of urban Lhasa.
They included Tibet Center for Disease Control and Prevention (Tibet CDC), Longwangtan Park, Tibet Post Hotel, Gamagongsang Community, Xiashasu Community and Jiacuo Community ( Figure 1). Tibet CDC lies to the northeast of the Potala Palace. The campus includes many family dormitory buildings and well-established trees (cypresses). Leaks from water pipes and the irrigation of lawns provide potential breeding sites for mosquitoes. Longwangtan Park lies to the northeast of the Potala Palace and features dense vegetation and a lake, with many fish and water birds. Tibet Post Hotel lies to the southeast of the Potala Palace, close to Longwangtan Park, it has many cypresses in its courtyard. Gamagongsang Community lies to the east of the Potala Palace with a population of 2,149 people within 837 households. There is limited infrastructure such as drainage systems and roads, and there are no parks or other urban green spaces. Xiashasu Community lies to the southeast of the Potala Palace with a population of 1,519 people within 907 households. Because of the famous "Dazhao Temple", this community is the most crowded in urban Lhasa. Residents tend to be less educated, with lower incomes, restricted living spaces and poor dwelling conditions. Jiacuo Community lies to the northwest of the Potala Palace and adjacent to a large park. Residents are relatively wealthy and mainly live in self-built single family houses with small yards. The bed net traps were applied between 19:00 and 24:00 (the peak time for mosquitoes), taking account of the time for sunset in Lhasa city (generally 20:00 in August). Bed net traps were placed close to potential breeding habitats, at intervals of 100 m. The distance from the bed net traps to the nearest resident's house was also about 100 m. The size of bed net traps was 1.5 m × 1.2 m × 1.5 m, with twentyfive centimeters between the floor and the bottom of the bed net traps. Some members of staff at Tibet CDC and China CDC were selected as human baits. These members (under double bed net traps to avoid mosquitoes bites) were also used repeatedly throughout the entire duration of the study [40]. Every hour, all mosquitoes inside the bed net traps were collected by an electrical aspirator for 15 minutes per hour throughout the 5 hour period. In 2009, the bed net traps were carried out on Aug.3rd -4th in Tibet CDC (Lawn) (total of 3 bed net traps) and Aug.3rd in Longwangtan Park (total of 3 bed net traps). In 2012, the bed net traps were carried out on Aug. 7th -8th in Tibet CDC (Lawn) (total of 4 bed net traps) and Aug. 10th -11th in Longwangtan Park (total of 4 bed net traps). Mosquito collection and initial morphological identification An electric aspirator was employed for 15 minutes to collect mosquitoes inside an outpatient building and the residential area of Tibet CDC. In 2009, the labor hour method was carried out on Aug.4th in Tibet CDC (outpatient building) (total of 1 person) and Aug.3rd in Tibet CDC (residential area) (total of 1 person). In 2012, the labor hour method was carried out on Aug.8th in Tibet CDC (outpatient building) (total of 1 person) and Aug.7th in Tibet CDC (residential area) (total of 1 person). Kung Fu Xiaoshuai miniature light traps (Photocatalytic Miewen Ying supply device; Wavelength: 2537Å; Power: 8W; Corporation: Wuhan Environmental Protection Technology Co., Ltd. Gemstar) were used to collect adult mosquitoes. 
The light traps were placed in the campus of Tibet Post Hotel, Tibet CDC and Gamagongsang Community, Xiashasu Community and Jiacuo Community. Traps were hung away from interference by light sources, 1.5 m above the floor. They were turned on 1 hour before sunset (20:00) and turned off 1 hour after sunrise (08:00). In 2012, the light traps were employed from Aug.5th -12th in Tibet Post Hotel (total of 18 light traps), on Aug.7th -9th in Tibet CDC (Lawn) (total of 9 light traps), on Aug.10th in Gamagongsang Community (total of 2 light traps), on Aug.12th in Xiashasu Community (total of 4 light traps) and on Aug.9th in Jiacuo Community (total of 4 light traps), respectively. Information on temperature (°C) and relative humidity (%) was obtained from http://www.weather.com.cn. During collections, ambient outdoor air temperature and relative humidity was recorded hourly using a WS-1 Thermo-Hygrometer device. Mosquito species identification Each morning, the trapped adult mosquitoes were initially counted and identified according to morphological criteria using the key developed by Lu BL [24]. All collected mosquitoes were put into 1.5 ml centrifuge tubes individually and then transported to the laboratory of the Department of Vector Biology and Control in China CDC for further molecular identification. Genomic DNA was extracted from individual mosquitoes. A Qia Amp DNA Mini Kit (Qiagen Inc., CA) was adopted and DNA was extracted from the thorax of mosquitoes according to the manufacturer's instructions. To reveal the species composition of mosquitoes in Lhasa city, a multiplex PCR protocol was adopted using polymorphisms in the second intron of the acetylcholinesterase-2 (ace-2) locus, developed by Smith, J. L. & Fonseca, D. M [12]. Three forward primers (ACEquin, ACEpall and ACEpip) and one backward primer (B1246s) were adopted simultaneously. Each of the three primers was used in conjunction with the reverse primer B1246s [12,26], ( Table 1). Because of limited distribution of Cx. pipiens molestus in China [21], the primer of Cx. pipiens molestus was not included in this study. Approximately 105 (14.4%) mosquitoes that were selected from four sites (two institutions and two communities) in 2012, were further identified to sub-species level. The PCR assay was optimized for 25 ul volumes. Reactions contained 10 × PCR buffer, 250 uM of each dNTP, one unit of Taq polymerase, and genomic DNA. The amplification program consisted of one cycle at 94°C for five minutes, followed by 35 cycles at 94°C for 30 seconds, 55°C for 30 seconds, 72°C for one minute, and one cycle at 72°C for five minutes. In addition, to further verify the subspecies of the Cx. pipiens complex, further sequence analysis of the Ace-2 gene for both some pure Cx. pipiens pipiens, Cx. pipiens quinquefasciatus, Cx. pipiens pallens and possible hybrids among them were conducted by Tsingke Company (Beijing, China) using the same mosquitoes as the multiplex PCR assay. Approximately three each of pure and possible hybrid mosquitoes were further sequenced in this study. Statistical analysis Information was recorded on the date of the collections, number of bed net traps, number of light traps, duration of mosquito catch (h), the presence and gender of Cx. pipiens complex mosquitoes. An independent-sample T test was adopted to compare the density of mosquitoes between 2009 and 2012 after a satisfactory check for normality of the distribution and homogeneity of variance of the data. 
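For readers who want to reproduce this kind of comparison outside SPSS, the sketch below shows the same test sequence described above (normality check, homogeneity of variance, independent-samples t-test) in Python with SciPy. It is a minimal illustration only: the density values are hypothetical placeholders, not the study's raw counts.

```python
# Illustrative re-creation of the density comparison described above (the authors
# used SPSS 17.0; this sketch uses SciPy instead). The density values below are
# hypothetical placeholders, not the study's data.
from scipy import stats

density_2009 = [62.1, 55.0, 48.3]   # mosquitoes per person-hour (hypothetical)
density_2012 = [8.5, 12.0, 6.1]     # mosquitoes per person-hour (hypothetical)

# Check the assumptions mentioned in the text: normality and homogeneity of variance.
print("Shapiro 2009:", stats.shapiro(density_2009))
print("Shapiro 2012:", stats.shapiro(density_2012))
print("Levene:", stats.levene(density_2009, density_2012))

# Independent-samples t-test (two-sided), as used to compare 2009 vs. 2012 densities.
t_stat, p_value = stats.ttest_ind(density_2009, density_2012, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```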
Numbers of species identified by multiplex PCR at different sites in different years were recorded and calculated. Analysis was conducted by SPSS (Statistical Package for the Social Sciences) statistical software (version 17.0). Ethics statement We obtained ethical approval from the Ethical Review Committee of Chinese Center for Disease Control and Prevention for this study (No. 201214). Permission was also obtained from the Government, the Municipal Health Bureau and Tibet CDC in the Tibet Autonomous Region. Morphological identification In this study, 907 mosquitoes in total were captured including 595 female and 312 male mosquitoes (Table 2). Preliminary morphological identification demonstrated that all these mosquitoes belonged to subspecies of the Cx. pipiens complex [24]. Mosquitoes collected by bed net traps in different years Using bed net traps, 132 mosquitoes (132 females) were collected in Tibet CDC (Lawn) and 5 mosquitoes ( Mosquitoes collected by labor hour method in different years Using the labor hour method, 34 mosquitoes (34 females) were collected in Tibet CDC (Outpatient building) and 7 mosquitoes (4 females, 3 males) were collected in Tibet CDC (Residential area) in 2009. In 2012, 26 mosquitoes (17 females, 9 males) were collected in Tibet CDC (Residential area) ( Table 2). The mean mosquito density was 62.10 (mosquitoes per hour per person) and 8.54 (mosquitoes per hour per person) in 2009 and 2012, respectively (Table 3). There was no significant difference of mosquito density monitored by labor hour method in 2009 and 2012 (t = 1.291, df = 2, P = 0.326>0.05). Mosquitoes collected by light traps in 2012 In 2012, light traps collected 83 female and 58 male mosquitoes in Tibet Post Hotel and 12 females and 13 males in Tibet CDC (Lawn). 2 females were collected in Gamagongsang. 124 females and 53 males were collected in Xiashasu. 135 females and 171 males were collected in Jiacuo ( Table 2). The mean mosquito density was 17.59 (mosquitoes per trap per night) in 2012 (Table 3). Discussion This is the first investigation to verify media reports of mosquitoes in Lhasa. We observed subspecies of Cx. pipiens complex and its hybrids on two occasions, three years apart. Our findings were based on entomological investigations in the field and multiplex PCR methods in the laboratory. In this study, there was no significant difference of mosquito density monitored by bed net trap and labor hour method in 2009 and 2012. In urban Lhasa, we observed that the ecological and geographical factors did not change significantly three years later. However, it seemed that the mean mosquito density, both using bed net traps and labor hour method, were relatively higher in 2009 than in 2012 though no statistical significance was observed. We note that the summer of 2012 did not match the high temperatures of three years earlier: the maximum temperature in 2012 was 29.0°C compared with 30.4°C in 2009 (www.tianqi.com). In addition, the public health campaign may also play a major role in the relatively lower density in 2012. In recent years, Lhasa accelerated the process of establishing of the National Sanitary City in China. As a key indicator, mosquito density was controlled by local health authorities and related agencies using some insecticides and similar products. This campaign might have exerted some adverse impact on the density of mosquitoes in 2012. 
Furthermore, with the huge development of the economy and culture in Lhasa, local citizens focus more on their health status than ever before, and have adopted a variety of insecticides to protect themselves from mosquito bites. (Table note: "+" indicates the presence of mosquito populations of the Culex pipiens complex in the different collections during the mosquito season in Lhasa city, Tibet; the outpatient building, residential area and lawn lie in the courtyard of Tibet CDC.) All of these Culex pipiens complex mosquitoes were collected during the season of peak activity. Previous studies showed that the northern limit of Cx. pipiens pipiens is about 45°N in the New World and the southern limit is about 39°S, and the usual altitude of this subspecies is lower than 3,000 m [41,42]. In China, Cx. pipiens pipiens has been recorded only in Xinjiang Uygur Autonomous Region (northwestern China). However, previous data on the distribution of Cx. pipiens pipiens are limited: reliable identification depends on the collection of males, which was not always the case. Furthermore, species and subspecies classification has been difficult because there were no populations of Cx. pipiens pipiens in Chinese laboratories. In the current study, pure Cx. pipiens pipiens (a subspecies of the Cx. pipiens complex) was definitively identified in urban Lhasa (an area of elevation higher than 3,600 m). This finding significantly extends present knowledge of the distribution of Cx. pipiens pipiens in China, and has important implications for the control of mosquitoes and mosquito-borne diseases in Lhasa city. In Eastern Asia, Cx. pipiens pallens transmits lymphatic filariasis and canine heartworm and may act as a vector for West Nile virus [43][44][45][46]. Cx. p. pallens differs from hybrids of Cx. p. pipiens and Cx. quinquefasciatus [14]. In China, Cx. pipiens pallens has been found north of the Yangtse River [24], but not previously at an altitude of greater than 2,900 m [25]. This study has uncovered possibly extensive hybridization among subspecies of the Cx. pipiens complex in Lhasa city. Natural hybridization is defined as "successful matings in nature between individuals from two populations, or groups of populations, that are distinguishable on the basis of one or more heritable characters" [47]. Combinations of this kind enhance rapid evolution and may lead to speciation [48]. According to the existing literature, recurring hybridization occurs in the Cx. pipiens complex mostly between the two most widespread species, Culex (Culex) pipiens and Cx. (Cx.) quinquefasciatus [14]. Hybrids have been reported in North America [17,42], Argentina [19], as well as in Madagascar [49]. (Table 3: The density of the Culex pipiens complex in different years during the mosquito season in Lhasa city, Tibet.) A multilocus genotype analysis revealed current hybridization between Cx. p. pallens and Cx. quinquefasciatus in southern Japan, Republic of Korea, and China [12,50]. In the present study, primers specifically designed for East Asia by Smith & Fonseca were adopted. These primers proved suitable for the identification of mosquitoes in Lhasa city. In this study, positive controls were also included, namely Cx. pipiens pipiens from Urumchi, Xinjiang, Cx. pipiens quinquefasciatus from Dali, Yunnan, and Cx. pipiens pallens from Beijing. The study of the indigenous populations of mosquitoes using molecular markers allowed us to confirm the occurrence of the Cx. pipiens complex in Lhasa city (southwest China).
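As an illustration of how calls of pure subspecies versus hybrids can be derived from such a multiplex ace-2 assay, the sketch below maps the set of forward primers that yielded a product for a specimen to a taxon label: a single product suggests a pure individual, multiple products suggest a hybrid. The mapping logic and the example inputs are assumptions for illustration, not the published scoring protocol or fragment sizes.

```python
# Minimal sketch of deriving a pure-vs-hybrid call from a multiplex ace-2 PCR result.
# The assay pairs one of three forward primers (ACEpip, ACEpall, ACEquin) with the
# reverse primer B1246s; which primer(s) yield a product indicates the subspecies.
# The mapping and example inputs are illustrative assumptions, not measured data.
PRIMER_TO_TAXON = {
    "ACEpip": "Cx. pipiens pipiens",
    "ACEpall": "Cx. pipiens pallens",
    "ACEquin": "Cx. pipiens quinquefasciatus",
}

def call_specimen(detected_products: set[str]) -> str:
    """Return a taxon call from the set of forward primers that produced a band."""
    taxa = sorted(PRIMER_TO_TAXON[p] for p in detected_products if p in PRIMER_TO_TAXON)
    if not taxa:
        return "no amplification / undetermined"
    if len(taxa) == 1:
        return f"pure {taxa[0]}"
    return "hybrid: " + " x ".join(taxa)

print(call_specimen({"ACEpip"}))              # pure Cx. pipiens pipiens
print(call_specimen({"ACEpall", "ACEquin"}))  # hybrid call
```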
Of some interest is the discovery of hybrid populations including Cx. pipiens pipiens, Cx. pipiens quinquefasciatus, Cx. pipiens pallens and their hybrids (65.71%) in Lhasa city. Climate change may have played a part in the arrival of mosquitoes in Lhasa. Average temperatures increased over the Tibetan plateau from 1955 to 1996 by about 0.16°C/decade [51], much more than in China generally. From 1961 to 2000, the greatest increase in daily mean temperatures in summer (June to August) in Tibet occurred in Lhasa city [52,53]. In 2009, Tibet experienced unusually warm conditions and the maximum temperature in Lhasa reached 30.4°C, higher than the previously reported record (29.9°C in 1971). In other words, the first public reports of mosquitoes coincided with the warmest summer in Lhasa since records were first kept. It is possible that mosquitoes were introduced earlier, but numbers multiplied during the particularly hot summer of 2009. In the future, further warming is expected, and further economic development in Tibet will lead to even greater movement of freight and people. These conditions raise the risk of outbreak of mosquito-borne diseases in a population with no prior exposure to such infections [54][55][56]. Therefore, it is urgent to strengthen the detection and monitoring of mosquito-borne diseases in the region. Other factors, such as demographic and environmental factors, may also play a more important role in establishing the mosquito population in Lhasa. In the last 30 years, China has undergone enormous economic growth, largely due to greatly increased international trade. This burgeoning trade has triggered environmental threats from an expanding list of biological invaders and has already caused damage to China's environment and economy. Huge construction projects, such as the Qinghai-Tibet Railway [57], could further spread invasive mosquitoes to Lhasa city [37,58,59]. As to urbanization, Tibet entered a stage of accelerated urbanization after 1995. The large floating population from outside Tibet has become the driving force for urban expansion and the rising urbanization rate. Tibet's urbanization rate will be up to 43% by 2020 based on a website (http://chinatibet. people.com.cn/6828539.html). At present, Lhasa is claimed to be a modern city on the "roof of the world" with a forest of new buildings and luxury hotels, restaurants and stores. Previous research revealed that urbanization serves in the formation of appropriate habitat of culicines. In Macau, recent urbanization has provided optimal habitat for the population increase in culicines [60]. In Tanzania, urbanization resulted in some changes in mosquito populations [61]. Now, Lhasa, which has a large number of tourism resources, such as the Potala Palace, Jokhang Temple, Sara Monastery, and Barkhor Street, is a popular destination for both domestic and international travelers. By April, 2008, there were over 1,600 licensed tour guides in Lhasa according to The Chinese National Tourism Administration. Tourist aircraft or trains may carry mosquitoes to urban Lhasa and subsequently threaten the health and lives of local citizens. It was reported that labor flow and travelers are significant factors contributing to the spread of dengue virus infection and chikungunya fever [62]. In summary, our investigation provides insight into the new distribution of subspecies of Cx. pipiens complex and its hybrids in Lhasa, Tibet. 
The findings mentioned above have a significant implication in public health areas, both at policy making and practical levels. The multiplex PCR assay adopted in this study will be helpful to researchers and will aid vector control programs by facilitating the rapid and reliable identification of local Cx. pipiens complex and its hybrids. The future focus of the control and prevention of mosquito-borne diseases in Lhasa is West Nile virus, St. Louis encephalitis viruses, avian malaria, and filarial worms. Strengthened community health education and engagement should be conducted to better guarantee the health and life safety of local citizens. The results could provide a reference for development of varieties of strategies and measures to control mosquitoes and mosquito-borne diseases at high elevation regions in the world in future. This study has limitations since it was planned and implemented, initially, in response to public concerns, and includes information from only two time points. However, the results indicate that mosquitoes are established in a high altitude urban setting in Tibet. Further studies are needed to confirm the continuing presence of mosquitoes, to clarify the patterns of hybridization, and to shed further light on likely origins and factors influencing their distribution and establishment in Lhasa city [63]. Conclusion In summary, the results revealed subspecies of Cx. pipiens complex and its hybrids on two occasions, three years apart in urban Lhasa. There was no significant difference in mosquito density monitored by bed net trap and labor hour method in 2009 and 2012. Mosquitoes in the Cx. pipiens complex appear to be established in Lhasa City, TAR. Strengthened community health education and engagement should be conducted to better guarantee the health and life safety of local citizens. Competing interests The authors declare that they have no competing interests. Authors' contributions QL, XL, AW and CC planned the project and wrote the paper. QL, XL, C, P, FW, B, LB, YG, D, GL, JW, SS, D and X conducted the field survey. XL, LL, L and HW carried out the multiplex PCR assay in the lab. XL, QL, CC, and AW contributed to data analysis. All authors read and approved the final manuscript.
2017-04-13T10:21:36.666Z
2013-08-06T00:00:00.000
{ "year": 2013, "sha1": "3e6888d5f93feb60dbb3cb439f29a870468cb73b", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-6-224", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e6da0e28863fac69655f9638a698d436c947389", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234871195
pes2o/s2orc
v3-fos-license
The host phenotype and microbiome varies with infection status, parasite origin and parasite microbiome composition

Megan Hahn (Stony Brook University, https://orcid.org/0000-0001-9266-8232), Agnes Piecyk (Max Planck Institute for Evolutionary Biology: Max-Planck-Institut für Evolutionsbiologie), Fatima Jorge (University of Otago), Robert Cerrato (Stony Brook University), Martin Kalbe (Max Planck Institute for Evolutionary Biology: Max-Planck-Institut für Evolutionsbiologie), Nolwenn M. Dheilly (nolwenn.dheilly@stonybrook.edu, Stony Brook University, https://orcid.org/0000-0002-3675-5013)

[…] determining the host immune response [46,47]. In particular, anti-inflammatory interleukins, foxp3 and tgf-β appear to be at the heart of the interplay between bacteria and the immune system [48,49]. For example, the microbiome modulates the innate immune response of mice exposed to influenza [50]. In this study, a microbiota-induced expression of IL-1β and IL-18 was found to be associated with better outcome, and a distal inoculation of LPS to the colon was sufficient to restore the immune response to influenza virus in the lung. Thus, to understand the impact a parasite has on its host holobiont, it is essential to consider both immune function and microbiome composition, and to investigate how they interact with each other.

Parasites can also host microbes and, for some parasites, the role of bacterial symbionts in virulence and pathogenesis has been thoroughly investigated. Many nematodes depend on Wolbachia for normal development and fertility, and the bacteria also contributes to inflammation and adverse reaction to anti-filarial drugs [51]. Similarly, the bacteria Neorickettsia has high prevalence among digenean trematodes and is often transmitted to parasitized hosts causing […]

(Figure caption fragment: … Tail), and GPS (Blue Tail) separated by a mesh divider from 5 Wolf fish, 5 GPS fish, and 4 Walby fish that had been exposed to parasites (right). (D) Upon dissection, the success of infection was assessed, and exposed individuals were classified as exposed non-infected (ENI) or exposed successfully infected (ESI). All successfully infected fish were processed, and corresponding ENI and control non-infected fish from the same tanks and fish origin were processed as controls.)

[…] on plerocercoid surface confirmed that the parasite harbors an endomicrobiome (Figure S6).

Cross infection experiment. Following experimental infections with hosts and parasites of different origin (Figure 1), we quantified and sequenced the 16S genes of a total of 42 control non-infected sticklebacks (CNI), 35 exposed but non-infected sticklebacks (ENI), and 71 exposed and successfully infected sticklebacks (ESI), and corresponding S. solidus (Ss). Our results confirmed that S. solidus and G. aculeatus harbor a distinct microbiome, and that exposure and infection alters the host microbiome (Figure S7-8).

The microbiome of threespine sticklebacks varied with exposure, infection, host origin, and parasite origin (Figure S9, Figure 2). Comparisons of the microbiome composition of CNI fish revealed constitutive differences between Alaskan and European sticklebacks (Figure 2A). Exposure to S. solidus was associated with small changes in beta diversity (Figure 2B), but resulted in an increase in differences in microbiome diversity metrics among fish of all three origins (Figure 2C). Parasite origin played a less profound role and limited differences in diversity were found between fish exposed to Walby and SKO parasites (Figure 2C). Successful infection with S. solidus was associated with an increase in bacterial load that varied with parasite origin (Figure 2B and 2D). The microbiome of infected fish was dominated by more taxa than non-infected fish (Figure 2B). Finally, parasite origin, but not host origin, was associated with differences in microbiome composition among infected sticklebacks (Figure 2D, Unifrac Axis 3, p=0.041). Differences were found between alpha diversity of fish exposed to Walby […] their corresponding fish host revealed an absence of relationship (Figure 3A, Figure S8). In total, 93% of the ASVs, and 9.4% of the families present in ESI sticklebacks were never found in S. solidus. […] We used DESeq2 to identify differentially abundant bacteria phylotypes (Figure S11). […] Cyanobacteria. Host genotype and parasite genotype both contributed to differences in relative abundance of bacterial families (Figure S11). In CNI, host genotype was associated with variation in […] origin, but not host origin, was associated with significant differences in the strength of the correlation between immune gene expression and microbiome composition (Figure 4B). These correlations appear to be driven by a subset of bacterial families, among which some were positively correlated with gene expression whereas others were negatively correlated (Figure 4C). The most significant correlations involved Treg-inducing genes stat4, stat6 and il16, Treg associated gene foxp3, complement factor cfb, anti-microbial innate regulatory genes cd97 and marco, and the regulator of inflammation tnfr1 (Figure 4C, Figure S12). More specifically, the […]

Our results provide the first set of evidence of an endomicrobiome in the cestode S. solidus. We collected S. solidus plerocercoids from the body cavity of G. aculeatus, so that the parasite was no longer in contact with the host gut microbiota, limiting the potential for contamination [76]. We did not culture any bacteria after spreading freshly sampled plerocercoids on agar, suggesting the absence of a surface microbiome. […] impact the maintenance of these microbial communities.

Both exposure and infection influence the host microbiome composition. We observed an impact of both exposure and infection by S. solidus on the fish gut […]

For dissection, an incision was made along the lateral line of the fish body, around the bony pelvis. The cut extended from the pectoral fins to just anterior of the anus to avoid cutting the intestine. The sex of the fish was assessed by visual inspection of gonads at the time of dissection and then confirmed using PCR with sex specific primers as described in [100]. The presence of S. solidus […] that any bacteria found in S. solidus were indeed part of an endomicrobiome and not contamination from the fish body cavity or surface of the parasite. All samples were stored at -80°C until use.

In Spring of 2016, threespine sticklebacks were caught, as described above, from Wolf lake […] albidus copepods from a laboratory stock were singly infected with S. solidus procercoids as previously described [102]. Fish were starved for 24 hours before being fed either one singly infected copepod or one non-exposed copepod (sham control). After 2 days, fish were transferred into 16L aquaria. Fish exposed to a given parasite family were held together in the same tank. Each tank held five exposed fish from Wolf, five exposed fish from GPS and four exposed fish from Walby, in addition to one control fish per fish population (17 fish/tank). The common garden […] also calculated if the fish was infected [107].

Head kidney RNA was extracted with a NucleoSpin® 96 kit (Macherey-Nagel) following the manufacturer's protocol. RNA concentration and purity were determined spectrophotometrically (NanoDrop1000; Thermo Scientific). We used the Omniscript RT kit (Qiagen) according to the manual but used 0.2 µl of a 4-unit RNase inhibitor (Qiagen) per reaction. […] individuals) that were either (i) sham-exposed (42 individuals), (ii) exposed but non-infected by S. solidus […] MiSeq following the manufacturer's guidelines. Sequence data were initially processed to join forward and reverse reads and remove barcodes. […] bacterial load) as the dependent variable for a given model [120]. We began by testing the impact of fish population in control non-exposed fish (CNI) (Figure 2A). We conducted model selection […] To investigate the impact of infection status on these indices, we used infection status, which included the levels CNI (control non-infected), ENI (exposed non-infected), and ESI (exposed and successfully infected), sex, and their interaction as potential fixed factors, and random factors were fish population, parasite population, tank, and round. Model selection was performed as described above and the best-fit model was chosen to calculate p-values (Figure 2). Following this we tested the impact of fish and parasite population in exposed non-infected fish (ENI), in successfully infected fish (ESI), and parasites (Ss). After conducting model selection as described above, we tested for the role of fish population and parasite population separately (Figure 2). When testing fish population effects, we used fish population, sex, and their interaction as fixed factors, and random factors were parasite population, tank, and round.
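The analysis described above relied on mixed-model selection (and DESeq2 for differential abundance). The sketch below is not that pipeline, but a much simplified Python illustration of one step it implies: computing Shannon diversity per sample from an ASV count table and comparing it across the CNI/ENI/ESI groups with a rank-based test. All counts and group labels are hypothetical.

```python
# Minimal sketch (not the authors' pipeline, which used mixed models and DESeq2):
# compute Shannon diversity per sample from an ASV count table and compare it
# across infection-status groups. Counts and group labels are hypothetical.
import numpy as np
from scipy import stats

# rows = samples, columns = ASVs (hypothetical counts)
counts = np.array([
    [120, 30, 0, 5],    # CNI
    [80, 60, 10, 2],    # CNI
    [10, 5, 200, 40],   # ENI
    [15, 0, 180, 60],   # ENI
    [300, 2, 20, 90],   # ESI
    [250, 1, 35, 70],   # ESI
])
groups = ["CNI", "CNI", "ENI", "ENI", "ESI", "ESI"]

def shannon(row: np.ndarray) -> float:
    p = row / row.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

diversity = np.array([shannon(r) for r in counts])

# Kruskal-Wallis across the three infection-status groups (a simplification of the
# model-selection procedure described above).
cni = diversity[[g == "CNI" for g in groups]]
eni = diversity[[g == "ENI" for g in groups]]
esi = diversity[[g == "ESI" for g in groups]]
print(stats.kruskal(cni, eni, esi))
```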
2021-05-21T16:56:57.742Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "51ad8d856f1c06646a22b61b3af57492e7b663bf", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-323107/v1.pdf?c=1631877170000", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "29ac79f8a38f55869696b1b152416ae77814c474", "s2fieldsofstudy": [ "Biology", "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
251195416
pes2o/s2orc
v3-fos-license
Parameters Influence on the Dynamic Properties of Polymer-Matrix Composites Reinforced by Fibres, Particles, and Hybrids In this paper, we present an extensive experimental study on the dynamic mechanical properties of composites with polymer matrices, as well as a quantification of the parameters that influence these properties. Polymer-composite matrices make it possible to form any reinforcement arrangement of fibres, particles, and layers, which makes it possible to form composite materials with certain dominant mechanical properties according to the internal arrangement for the application. In this study, we focused on the dynamic properties (i.e., damping parameters, such as the loss factor (tan d), logarithmic decrement (λ), storage modulus (E′), and loss modulus (E″)) of composites with polymer matrices, including parameters such as the fibre material, fabric weaving, fibre orientation, temperature, frequency, particle size, volume of short fibres, and epoxy resin type. If other articles focus on one type of composite and 1–2 parameters, then the benefit of this article lies in our analysis of 8 mentioned parameters in the experimental analysis of 27 different types of composites with polymer matrices. The tested fibre materials were glass, aramid, and carbon; the tested woven fabrics were twill, plain, unidirectional, and satin; the temperature range was from −50 to +230 °C; the frequency was 1 Hz and 10 Hz; the particle size was 0.1–16 mm; the volume percentages of the short fibres were 3, 6, and 12 vol.% of the hybrid polymer composites and the type of polymer matrix. We used the free-damped-vibration method with vibration dynamic signal analysis and the forced-damped vibration of dynamic mechanical thermal analysis for testing. We ranked the parameters that influence the dynamic vibration properties according to the effects. Among sets of results provided in the paper, considering the storage modulus, loss modulus, and loss factor, the best results of the fibre composites were for aramid-fibre-reinforced polymers, regardless of the weave type, with an advantage for unidirectional fabric. The best results of the particle composites were for those with fine filler sizes that incorporated the short fibres. Introduction The source of the macroscopic mechanical behaviour of a material is its internal arrangement. For multilayered laminates, we determine the internal arrangement by reinforcement fabric patterns woven with uni-, bi-, and multidirectional weaving fibres, as well as the particular dimensional characteristics of the fibres and layup of the laminate. For particular composites, the internal architecture can be variable mainly for reinforcement dimensions and shapes. In the classical approach, the materials are supposed to be homogeneous; however, each material has its own meso-, micro-, or nanostructure. The material homogeneity represents the idealisation of the material continuum. For example, in steel, we distinguish the components ferrite, martensite, perlite, and austenite, which may also include alloys. We can arrange the components differently, with different grain sizes and different mechanical microproperties, which fundamentally affect the macroscopic mechanical properties and behaviour of steel, which is the same for other anorganic materials and biological tissues (wood, bones), which are all of a hierarchical structure with the internal arrangement, configuration or architecture of the material [1][2][3]. 
In the current publications, researchers focus mainly on the strength and stiffness of composite materials, the method of the analytical prediction of these quantities, and multilevel simulations and experiments for individual types of composites. In the numerical and experimental studies, researchers not only analyse the static properties, but also the impact or combined ones [4][5][6][7][8][9]. However, in industrial applications, the composites are not only subjected to static loads, but also to the dynamic components of the loads, which are substantial, along with vibrations as natural parts of the mechanical-system operation. The utilisation of vibration dynamic characteristics [6,10] for applications is an important scope of the current research. Computations of composite materials and hybrid-structure materials require advanced models and computation (i.e., multilevel models, representative volume element models, micromechanical models, etc.) that use finite element analysis, boundary element methods, or meshless methods that focus on linear and nonlinear static and dynamic analyses and optimisation. The numerical simulations represent a demanding and complex process of modelling complex composite structures of multilayer laminates, as well as particle composites [11][12][13][14]. Recently, researchers have investigated the dynamic mechanical properties, as well as their utilisation and control. Researchers tested high-damping sandwiches, and they found that the mechanical and damping properties deeply depend on the fibre orientation of the lamina, the property of the rubber sheet (involved in the sandwich laminate), and its layer number and sequence [15]. In the development of the field of polymer composites, researchers have focused on testing different volume fractions of individual composite constituent phases, filler sizes, and their optimum compositions (more in [16][17][18]). Researchers found that polymer composites (both layered and particulate) had a significant effect on the damping for applications of high-frequency vibrations, and they measured a reduction in the resonant peak up to 30%, mainly by using polymer matrices (more in [19,20]). Researchers have investigated the field of applications that use polymer concrete for a machine base frame or to fill existing machine designs to improve the dynamic stiffness in advanced applications of production-machine components for precision-tool machines [16,21], a grinding machine [22], and a machine-tool worktable [23]. Polymermatrix composites can be both machined [24] and a component part of machine-tool frames. The vibration dynamic characteristics affect the noise emissions [25], which is an important factor in traffic and production factories that influences everyday life. In the last years, carbon nanofibers (CNFs) have shown wide applications in the fields of materials science, nanotechnology, energy storage, environmental science, biomedicine due to their unique structures and functions, and a lot of achievements on the synthesis and application of CNF-based nanomaterials have been obtained [26]. CNFs as reinforcing phase of composites can form various CNFs with porous, stackedup, helical, and tubular structures [26] and, thus, allow for the creation of even more complex internal material structure. 
Moreover, due to the rich intertube contacts and various ways to dissipate energy, carbon nano-tubes assemblies have shown great advantages for developing novel high-damping materials, which can also show high strength or modulus for engineering applications [27]. A general comprehensive view of the parameters that influence the dynamic properties, and a quantification of the contributions of the individual parameters, are lacking in the previous work. We can state that, in general, for fibre and particle composites, the internal structure and arrangement contribute to the influence and control over the mechanical properties. Moreover, frequency and temperature can influence the dynamic mechanical properties of materials. The contribution that we make in this study is the focus on the determination and ranking of the highest and lowest effects and contributions of each of the following parameters to the dynamic mechanical properties (i.e., the storage modulus (E ), loss modulus (E ), and damping measures as the loss factor (tan d) and logarithmic decrement (λ)): • For the fibre composites, the effects of the fibre material, fabric weaving, fibre orientation, temperature (large range: −70 to 200 • C), and frequency (1 Hz and 10 Hz); • For the particle composites, the effects of the particle size, short-fibre-volume percentage, and type of polymer matrix. Furthermore, we used two experimental methods for the estimation of the mentioned dynamic properties, depending on the composite type: dynamic mechanical thermal analysis and vibrodiagnostics. Researchers use dynamic mechanical analysis as a general evaluation method to research the viscoelastic behaviour of polymers. Vibrodiagnostic methods allow for the monitoring and analysis of the dynamic signal to research the vibration-damping characteristics. Materials and Methods According to the shapes and dimensions of the samples of the two types of composite materials, we chose two experimental measurement methods, which we describe in the following sections, to determine the dynamic properties. Dynamic Mechanical Thermal Analysis: Forced Vibration Test Dynamic mechanical thermal analysis (DMTA) is one of the thermal analysis methods that is designated specifically for polymers because of the substantial impact of temperature on mechanical properties. For most polymer materials and their composites, the glass transition temperature is important because of the change in the structure and stiffness. The DMTA test is based on forced oscillations and the measurement of the force change during the three-point-bending deflection of the sample, which, in the present study, we assessed over a wide range of temperatures. The equipment provides the possibility of different states of deformation (free bending, dual cantilever, tension, and compression). Sensors read the tested material response during the deformation of the sample and measure the force and time of the response between the dynamic force of the pushrod and the sample. We present the schema of the measurement elements of DMTA and the sample holder that we used in the experiment in Figure 1. The major parameters that define the viscoelastic nature of polymer composites are the storage modulus (E ), loss modulus (E ), phase angle (δ), and loss factor (tan δ) (notated as tan d in the rest of the paper), which we can obtain during DMTA investigations. 
We schematically show the relations among the mentioned quantities in Figure 2, which are as follows:

$$\tan\delta = \frac{E''}{E'} \qquad (1)$$

and

$$E^{*} = E' + iE'' \qquad (2)$$

where $E^{*}$ denotes the complex dynamic modulus. Vibration Diagnostics and Bump Test: Free-Vibration Test We performed the bump test with dynamic vibrational signal analysis at room temperature, as the shapes of Samples A-I and the capabilities of the DMTA analyser did not allow us to determine their damping parameters. The tested material sample represents an SDOF (single degree of freedom) mass-damper-spring vibration system. The bump force excites the damped-free-vibration response (more in [28,29]). The samples freely vibrate at their natural frequency(ies) and are damped by the internal friction of the material. The equation of the underdamped motion (damping ratio: ζ < 1) for the free-damped vibrations of an SDOF mechanical system is

$$x(t) = X_0\, e^{-\zeta\omega_n t}\sin(\omega_d t + \phi) \qquad (3)$$

where x is the underdamped displacement (amplitude, distance from equilibrium) of the mass for the free-vibration effect of the mechanical system, $X_0$ is the amplitude, $\omega_n$ and $\omega_d$ are the undamped and damped natural frequencies, respectively, and $\phi$ is the phase angle. According to (3), we can write the amplitudes of two successive vibration cycles ($x_1$ and $x_2$) as

$$\frac{x_1}{x_2} = \frac{X_0\, e^{-\zeta\omega_n t_1}}{X_0\, e^{-\zeta\omega_n t_2}} = e^{\zeta\omega_n T_d} \qquad (4)$$

where $t_2 = t_1 + T_d$, and $T_d$ is the damping period. We can define the logarithmic decrement (λ), which represents the rate at which the amplitude of a free damped vibration decreases, over a number of cycles (n):

$$\lambda = \frac{1}{n}\ln\frac{x_1}{x_{1+n}} \qquad (5)$$

In Figure 3b, we present the sample and vibrometer during measurement. The red dot is the contact point of the laser beam (red line). The time-domain-response functions of the sample were the output signal. To estimate the values of the logarithmic decrements, we made two time records, i.e., each time record for one bump excitation. The average values are provided in the results. Samples and Materials The testing samples were polymer-matrix-composite materials. In the following section, we provide a description of the fibre and particle composite samples that we used in the experimental study. We incorporated continuous glass, aramid, and carbon fibres into the polymer matrix in the form of fabrics in Samples 1-12, as they are the most common fibre reinforcements. Samples A-I consisted of spherical particles (preferably silica) and spherical particles with discontinuous fibres (carbon), which we distributed in the polymer matrix. Individual samples differ in the various materials, shapes, sizes, arrangements, orientations, volume percentages of reinforcement, etc. Samples 1-12 (Figure 1) consisted of six layers of fabrics in the polymer matrix. Each layer had the same orientation, characterised by the warp fibres situated in the longitudinal dimension of the samples and corresponding to 0°. The dimensions of the samples were 10 × 50 mm, with a thickness range of 0.2-1.6 mm. The polymer matrix was epoxy resin (LG285) with an HG285 curing agent at a mixing ratio of 100:40. Resin LG285 is a high-tech laminating resin designed for hand lamination without heat postcuring. The mentioned resin is mainly used in the aviation industry. The producer in [5,30] describes the mechanical properties of the epoxy resin (LG 285) with HG286. In Figure 4, we present three of each type of sample that were tested. Moreover, we provide the basic characteristics of the 2D fabrics in the figure (i.e., the letters G, A, and C are for glass, aramid, and carbon, respectively). The mass per unit area of the fabrics was in the range of 80-173 g/m², and the types of woven fabrics were twill, plain, unidirectional, and satin.
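A minimal sketch of how the logarithmic decrement defined in Eq. (5) can be estimated from a recorded free-vibration time trace, by picking successive positive peaks and applying λ = (1/n) ln(x₁/x₁₊ₙ). The signal below is synthetic (an assumed damping ratio and natural frequency), not one of the measured bump-test records.

```python
# Sketch of estimating the logarithmic decrement from a free-vibration record,
# using lambda = (1/n) * ln(x_1 / x_(1+n)) over successive positive peaks.
# The signal below is synthetic (assumed zeta = 0.05, f_n = 40 Hz), not measured data.
import numpy as np
from scipy.signal import find_peaks

fs = 10_000                      # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
zeta, f_n = 0.05, 40.0
w_n = 2 * np.pi * f_n
w_d = w_n * np.sqrt(1 - zeta**2)
x = np.exp(-zeta * w_n * t) * np.sin(w_d * t)   # underdamped response

peaks, _ = find_peaks(x)          # indices of successive positive peaks
amps = x[peaks]
n = len(amps) - 1
lam = np.log(amps[0] / amps[-1]) / n
print(f"estimated lambda   = {lam:.4f}")
print(f"theoretical lambda = {2 * np.pi * zeta / np.sqrt(1 - zeta**2):.4f}")
```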
Particle-Reinforced Composites We uniformly distributed the particle reinforcement (filler) with a random orientation, and of a spherical shape. The particles were in the forms of silica gravel, sand, and dust of various fractions in the range of 0.1-16 mm. We can recognise them as spherical; however, the particle shapes are not ideal spheres and, even at the macroscopic level, we could identify roundness, sphericity, and regularity. Samples A-I are of a cube shape, with dimensions of 100 × 100 × 100 mm. We made the first set, Samples A-C (Table 1), to test the dimensions of the filler dependent on dynamic quantity (i.e., damping measured by logarithmic decrement). In the second set, Samples D-F (Table 2), we focused on the influence of the volume percentage of chopped short carbon fibres in the range of 3-12 vol% on the damping, and we used the last sample set, Samples G-I (Table 3), to test the influence of various resins and fixatives on the damping. We present the samples in Figure 6. Table 1), (D-F) with different short fibre volume and same fillers (more in Table 2), and (G-I) with different epoxy resins and constant fillers mixture (more in Table 3). Results of DMTA Tests of Multilayered Laminates The presented results are for the samples that we describe in more detail in Section 2.3.1. We analysed the variations in the storage and loss modules and the loss factor along the temperature at a constant frequency. The temperature while testing was in ranges from −50 to 200 • C and from −50 to 230 • C, depending on the thermal conditions, with a heating rate of 2 K/min. The results that we provide in Figure 7 are for a dynamic load force of 6 N, a frequency of 10 Hz, and an oscillation amplitude of 120 µm. The nature of the fibre reinforced polymer plots in Figure 7 is typical for native polymers without reinforcement. It indicates very strong influence of the polymer matrix on the mechanical nature of the polymer matrix composite behaviour. However, the individual laminate composites show different values for the loss factor and storage modulus, which significantly depend on temperature. In the range of the standard operation conditions (i.e., 20-50 • C), the storage modulus is that which differs the most when comparing the individual samples. At a temperature of about 70 • C, the storage modulus begins to decrease and the loss factor increases. The unidirectionally oriented fibres improved the stiffness the most, as well as the damping capacity in the glass transition (Tg) temperature. We processed the results in Figure 8 as the average values in the temperature range from −50 to 60 • C for the storage modulus (E ), loss factor (tan d), and loss modulus (E ), which we compared. By making a comparison according to the material of the fibre (Figure 8a), regardless of the weave type and weight per m 2 , the GFRP samples had the lowest values of the storage modulus (E , and the AFRP and CFRP samples had average values in the mentioned temperature range that were 1.35-times and 3.07-times higher, respectively. The ability to store energy elastically (i.e., the ratio of the elastic (in phase) stress to strain) was the largest for the CFRP. Another parameter for comparison is the loss factor (tan d), which indicates the damping of the material (i.e., degree of energy dissipation) (Figure 8b). The lowest loss factor was for the CFRP, and the GFRP and AFRP samples had 2.52-times and 3.48-times larger loss factors in the range of −50 to 60 • C, respectively. 
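The comparisons above are averages of the DMTA curves over the −50 to 60 °C window. The following sketch shows how such window averages and between-sample ratios could be computed from exported DMTA data; the table contents and column names are hypothetical placeholders, not the measured curves.

```python
# Sketch of the temperature-window averaging used in the comparisons above:
# average E', E'' and tan d over -50..60 degC per sample and form ratios between
# samples. The DataFrame contents and column names are hypothetical placeholders.
import pandas as pd

dmta = pd.DataFrame({
    "sample":      ["GFRP"] * 3 + ["CFRP"] * 3,
    "temperature": [-50, 0, 60, -50, 0, 60],               # degC
    "E_storage":   [18e9, 17e9, 15e9, 55e9, 53e9, 50e9],   # Pa
    "tan_d":       [0.020, 0.022, 0.030, 0.007, 0.008, 0.010],
})
dmta["E_loss"] = dmta["E_storage"] * dmta["tan_d"]          # E'' = E' * tan d

window = dmta[(dmta["temperature"] >= -50) & (dmta["temperature"] <= 60)]
means = window.groupby("sample")[["E_storage", "E_loss", "tan_d"]].mean()
print(means)
print("E' ratio CFRP/GFRP:", means.loc["CFRP", "E_storage"] / means.loc["GFRP", "E_storage"])
```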
Moreover, we can evaluate the loss modulus (E ) (Figure 8c) according to (1). The loss modulus reflects the energy-dissipation capacity and the material's ability to dissipate stress through heat. This means that the ratio of the viscous (out of phase) component to stress was the largest for the AFRP. The GFRP and CFRP samples had average values that were 0.6-times and 0.73-times lower in the evaluated temperature range from −50 to 60 • C. The influence of the weaving type (Figure 8d), regardless of the material, was obvious after evaluating the storage modulus (E ). The orientations of the warp fibres were the same in all the layers, and so the orientation towards the direction of the loading force was the same. The unidirectional-fibre-orientation samples (i.e., 9, 11, and 12) had the highest values. The twill, plain, and satin weavings had similar values, which were slightly better for the plain weave. CFRPs are characterised by a special relationship among the measured quantities. CFRPs have a low loss factor, even though the loss modulus (E ), which represents the ability to dissipate energy, is high and comparable to AFRPs. In our case, the large storage moduli of the CFRPs of Samples 11 and 12 were amplified because of the unidirectional fibres. The source of the low loss factor of the CFRP is its large storage modulus (E ), as a measure of the material stiffness and the ability to store energy during one load cycle. Thus, the carbon and aramid fibres are a good combination for hybrid carbon-aramid-fibre-reinforced composites (more in [32]), which are characterised by both high storage and high loss moduli, as well as a high loss factor. Bidirectional and multidirectional fabrics are intended for the multiaxial stress that is applied for structural components. However, the composites of unidirectionally oriented fibres are characterised by fibre-direction-dominant properties that are larger than those of bi-and multidirectional fabrics. The unidirectional woven fabric, according to the unidirectional (dominant) stress in the final component subjected to a unidirectional load, allows us to maximise its mechanical properties. The dynamic properties of materials depend on the frequency of excitation force (Figure 9). After we evaluated the loss factor (tan d) at the glass transition (Tg) temperature for 1 Hz and 10 Hz for Samples 1-12, the increase in the loss factor (tan d) was in the range of 0-22.2% (Table 4). Each woven fabric had warp and weft fibres that we could orient differently with respect to the load. We present the significant effect of this parameter in Figure 10a,b. The nature of the curves in Figure 10a,b is the same as for those samples in the whole range of −50 • C to 60 • C. The storage modulus (E ) is minimal for 45 • orientation and loss modulus (tan d) is the maximal for that orientation. In our case, difference between storage modulus E and loss factor tan d of 0 • (sample 13) and 54 • (sample 16) is significant, i.e., by 85%. We present the DMTA plots of the storage modulus (E ) (green) and the loss factor (tan d) (blue) of Sample 14 in the range from −70 • C to 170 • C and for frequencies of 1 Hz (continuous) and 10 Hz (dashed) in Figure 10c as representatives of all the plots made for evaluation. Results of Bump Tests of Particle-Reinforced Composites Samples A-I differ in the filler size and shape ( Figure 11). The smallest fractions of filler for A and B provided the better logarithmic decrements. In terms of damping, smaller fractions are more suitable. 
Thus, the larger filler-matrix contact area (i.e., interfacial surface) and the larger number of fillers substantially contributed to the larger dissipation of the vibration energy. Sample C had the largest fraction. However, its damping was larger than those of the others. The positive source of the damping in Sample C was silica sand (0.3-1 mm), which had 30 vol% of filler (Table 1). We incorporated chopped carbon fibres of 3, 6, and 12 vol% of the medium fraction filler into Samples D-E (more in Table 2). The higher volume percentage of the chopped fibres improved the damping. By comparing a 3 vol% and 12 vol%, the damping was increased by 8%. We used Samples G-I to compare the logarithmic decrements of the three different epoxy resin matrixes with the same filler (Table 3), but with different rates of solidification during processing. The maximal difference is 3%. The type of epoxy resin matrix is the least influential parameter for damping. The Nature of Time-Domain-Response Curves The time-domain responses of the multilayered laminates and particle composites of same boundary conditions were substantially different in terms of the number of vibration cycles and amplitudes. We compared the curves in Figure 12 over the same time (i.e., 0-1.2 s). In the case of multilayered laminate samples (Figure 12a), the response curve is characterised by the number of cycles in the mentioned time period. The number of cycles (n) is in the order of tens to hundreds. However, we can see from the character of the time-domain curves of the particle samples in Figure 12b that only three-six cycles are necessary to damp the bump force, although the first amplitude is 20 times larger. The time required to damp the free vibrations is very short. Such a composite material is characterised by a substantial damping time and measures, which we cannot achieve with the traditional construction materials. Microfield Distribution and Sources of the Composite Material Damping The mechanism of composite materials damping is still object of research. To understand the source of the vibration-energy dissipation and obtain the mechanical microfield distribution, we analysed the unit cells in static numerical analyses by using the finite element method. The numerically simulated unit cells consisted of a single fibre/particle surrounded by a matrix. We obtained the axial-stress distributions for the short cylindrical fibre and spherical particle by using the symmetry boundary conditions and prescribed displacement in the fibre/particle axis. The ratio of the Young's modulus of the matrix and fibre (E m /E f ) was 1:100. The aspect ratio between the fibre diameter and its length (D/L) was 10. The particle was of a spherical diameter (D). Regardless of the volume fraction of the fibre/particle, the character of the axial-stress distribution was the similar. (Figure 13, grey section is axial-stress distribution in half of fibre/particle, distribution is symmetrical in the second half). Figure 13. Axial stress in half of (a) fibre/(b) particle: L/2 is half of fibre length, D/2 is half of particle diameter, ∆ is gap between fibres/particles. Comparing the characters of the axial-stress distribution in the fibre and in the particle axes, they are different. For the spherical-particle composites, the gradient of the axial stress, as well as its maximum value, is the largest in the matrix in the vicinity of the particle/matrix interface. 
Under the dynamic periodic force, the maximum of that stress is damped by the material of the matrix itself, which is one of the sources of damping. In contrast, for short-fibre-reinforced composites, the maximal axial stress is in the fibre, which has lower material damping than the matrix. However, a source of damping is the maximum shear stress at the surface of the fibre ends, and it is the initiator of slip caused by decohesion and recohesion that increases the damping capacity. During the slip process, energy is dissipated through heat caused by the frictional sliding at the interface [33]. The fibre/matrix interfaces are subjected to elastic as well as plastic deformation under vibration conditions [34]. We recognise the frictional damping, mainly due to slip in the unbonded regions at the fibre/matrix interface, as damping due to damage [35]. The fibre/matrix interface is a stress-transfer medium, and it determines the composite performance in failures that are initiated by the accumulation of interfacial cracks [36]. Conclusions The polymer matrix secured the position of the reinforcement or filler; however, the nature of that material directly contributed to the dynamic properties of the tested composites. In this paper, we offer the results of a broad experimental analysis of polymer matrix composites based on free- and forced-damped vibration tests, in which we mainly focused on factors that affect the dynamic properties, such as the fibre material, fabric weave, fibre orientation, temperature, frequency, particle size, short-fibre-volume percentage, and type of epoxy resin matrix. We tested 27 samples of composites. We highlight the following conclusions from the experimental study: For the multilayered laminates, the fibre orientation had the greatest effect, followed by the fibre material; The fibre orientation had a greater effect than the type of weaving; The type of weaving affected the storage modulus (E ) more than the tan d; In the range of temperatures from −50 to 60 • C, the loss factor (tan d) was about 10-times lower than at the Tg temperature; The increase of the loss factor (tan d) at the Tg temperature was in the range of 0-22.2% when we compared loading frequencies of 1 and 10 Hz for the multilayered laminate composite samples; The particle size had the greatest effect on the damping of the particle composites. The time-domain responses of the multilayered laminate and particle composites were very different, as shown by the number of cycles and amplitudes, as well as by other resulting quantities, such as the damping time. For particle composites, the number of vibration cycles was one to two orders of magnitude lower. Considering the storage modulus (E ), loss modulus (E ), and loss factor (tan d), the best results among the 12 presented fibre-reinforced laminate-composite samples are for multilayered AFRPs, regardless of the fabric weave type, with an advantage for unidirectional fabric. Involving carbon fibres has the advantage of increasing the stiffness. The largest loss factor of the six CFRPs of bidirectional fabrics was for a 45 • orientation of the warp fibres towards the direction of the acting load. The best results of the nine particle and hybrid composites were for those with fine filler sizes that incorporated the short fibres. In applications of dynamically loaded composite components, the material damping is a significant factor compared to steel as the standard material in mechanical engineering.
Thus, the passive damping of composite materials is an important and notable aspect of design, and designers can control and maximize it by adjusting the parameters evaluated in the presented paper.
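As a closing illustration of the bump-test evaluation described earlier, here is a minimal sketch (not the authors' code; the decaying signal and its parameters are synthetic) of how the logarithmic decrement and the corresponding damping ratio can be extracted from a free-vibration time-domain response.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic free-vibration response: damped sine with invented parameters.
fs = 10_000                    # sampling rate [Hz]
t = np.arange(0, 1.2, 1 / fs)  # same 0-1.2 s window as compared in the text
f_n, zeta_true = 120.0, 0.05   # natural frequency [Hz] and damping ratio (placeholders)
x = np.exp(-zeta_true * 2 * np.pi * f_n * t) * np.sin(
    2 * np.pi * f_n * np.sqrt(1 - zeta_true**2) * t)

# Successive positive peaks of the decay trace.
peaks, _ = find_peaks(x)
amplitudes = x[peaks]

# Logarithmic decrement over n cycles and the corresponding damping ratio.
n = min(5, len(amplitudes) - 1)
delta = np.log(amplitudes[0] / amplitudes[n]) / n
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"logarithmic decrement = {delta:.4f}, damping ratio = {zeta:.4f}")
```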
2022-07-31T15:13:10.210Z
2022-07-28T00:00:00.000
{ "year": 2022, "sha1": "40265ff58eeca783e2ab24abefdc8af44f03545b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/14/15/3060/pdf?version=1659340682", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79cd7650db27552cfe0464711ac14f3e2cecb968", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
79812009
pes2o/s2orc
v3-fos-license
Medicinal plants used against typhoid fever and toothache in Pir-Panchal Range of the Shopian district of Kashmir Himalaya INTRODUCTION From ancient days, plants have been used for habitat, food, and medicine. The use of plants for their medicinal value is as old as our culture. The first known record of remedial plants was the Sumerian herbal of about 2200 BC. In the 5th century BC, the Greek doctor Hippocrates listed some 400 herbs in common use. In AD 77, the Greek surgeon Dioscorides published "De Materia Medica", on plants used particularly for their medicinal value. This descriptive medical book on medicinal plant treatment contained data on how and when each plant was collected, and whether it was poisonous or edible. He highlighted the economic efficacy of plants. For long periods, herbs have been used for several purposes, such as healing the sick and ailing. Most of the herbs keep the body in harmony with nature and maintain a proper balance. Humans have always been conscious of the effects of plants on the body, mind, and feelings. For example, fragrant plants were used to cure the body and give a sense of prosperity. The most precious flowers are accorded to the Gods, and the use of aromatic odours has been recorded from ancient times. Humans have undoubtedly always been concerned with the question of health and survival and have sought, within the framework of their knowledge, solutions to the problems of illness.
The WHO has recognized the role of traditional system of medicine and considered them a part of strategy to provide health care to the masses.The need for medicinal plant-based raw material is increasing annually worldwide.International market size for herbal and medicinal plants is estimated at US $60 billion and is featured to reach US 5 $ Trillion by 2050 (WHO, 2002).About 75-80% of the total exports of raw drugs came from India (Malik et al, 2011).India is the home of about 17,000 species of plants, out of which 7500 are known for their therapeutic uses.Ayurveda has reported about 2000 medicinal plant species, followed by Siddha and Unani.The "Charak Samhita," and ancient written document with rich literature regarding herbal therapy, describes the production of around 340 herbal drugs and their native use for treating various problems and diseases. ABSTRACT The current study was undertaken with a view to explore the possibilities of utilizing the plant resources of the district Shopian.A total of plants/specimens along with detailed information and their uses would serve as a valuable record for future reference and study.Most of these plants are wild and some plants are cultivated.The current study reveals that 22 medicinal plants belonging to 14 families are being used for treating typhoid fever and toothache in the Shopian district of Kashmir Himalaya.Of these 22 medicinal plants, 5 plant species are used both for treating fever and toothache.These medicinal plants have been arranged alphabetically.Despite the extensive use of medicinal plants by the people of this region, extensive work has not been done yet on ethnomedicinal and other aspects.The current study is an effort to promote a realm among the people regarding the possibility of natural alternatives in preventing typhoid fever and tooth diseases in the study area. Location and Study Area District Shopian is situated on the latitude of 33°, 44 N and Longitude of 74, 50 E.It lies on the southwest of Kashmir.It is at one time called as "Shen-e-vann" meaning "Forest of Snow."Shopian is commonly known as the apple bowl of Kashmir.The district is at a distance of 50 km from the state capital Srinagar.Beset with considerable topographic, altitudinal climatic variation, it depicts a great habitat diversity and harbors a rich flora.The district is mainly agrarian, and most of the plants grow luxuriantly as weeds in waste lands, fallow lands, cultivated fields, etc (Raza et al, 1978). 
Methods Frequent field trips and ethnomedicinal surveys of the selected areas of Shopian, Keller, Zainapora tehsils of Shopian district of Jammu and Kashmir were undertaken during 2015-2016 as per the guidelines suggested by Schultes (1962), Jain (1967).The information about the use of plants as medicine and folklore was recorded by personal interviews with tribals (Gujjars and Bakarwals), Paharis, shepherds (chopans), and old experienced villagers under study.An inventory of plants and plant products used by the people of rural and tribal areas in their day-to-day life was prepared.Almost all the plants were collected in different seasons with the help of tribal and rural people.Parts of the plants used in the treatment of various problems and other related information were recorded.The information of plants was written in the field book.The data obtained from different localities, pertaining to local medicinally important plants, were carefully recorded.The information collected from the local people was further verified and checked by some knowledgeable person of the study area.Every such plant was studied for its identification.The chemical constituents written for each species of plant in the enumeration have been taken from the Glossary of Indian medicinal plants (Chopra et al, 1956). RESULTS The present study reported that 22 ethnomedicinal plant species belonging to 14 families are being used in the treatment of typhoid fever and toothache.The botanical name, family, local name, chemical constituents, parts used, and ethnomedicinal uses of these medicinal plants have been compiled and shown in Table 1. DISCUSSION Specifically, the Shopian district of Kashmir Himalaya harbors a good proportion of endemic as well as nonendemic flora; based on its endemicity and unique geography, it has attracted the attention of explorers and botanists from the time when journey was most tedious and quite unsafe.The purpose of the current investigation is to explore the flora of this floristically rich area with a special emphasis on gathering information from the tribals and rural people living in the forest areas pertaining to ethnobotanical uses of plants, which are so bountiful in their ambience.Especially the rural people and the tribals of this selected area depend on the surrounding forests for almost everything.They prefer to use their folk medicines practiced by their elderly persons who enriched their knowledge by long experience.According to informants, they are capable of healing and curing various diseases with home medicines. CONCLUSION The study area is very rich both floristically and ethnomedicinally.Further research is required on the phytochemistry of such plants which are effective in the treatment of a particular disease.Moreover, the study has brought to some useful medicinal plants which are subjected to pharmacological and clinical trials on experimental animals, and if found efficacious, they can be recommended for human use.For this purpose, it would be better if the active ingredient or active principle is isolated by further researches so that more effective use of these plants may be made.
2018-12-05T04:10:24.715Z
2017-03-24T00:00:00.000
{ "year": 2017, "sha1": "275fc085d9294d5e08996fc11f4c010591a79523", "oa_license": "CCBY", "oa_url": "https://updatepublishing.com/journal/index.php/jp/article/download/3171/3067", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "275fc085d9294d5e08996fc11f4c010591a79523", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Geography" ] }
248965211
pes2o/s2orc
v3-fos-license
On the X-ray pulsar HD 49798: a contracting white dwarf with debris disk? HD49798/RX J0648.0-4418 is a peculiar binary including a hot subdwarf of O spectral type and a compact companion in a 1.55 day orbit. According to the steady spin period derivative $\dot{P}=(-2.17\pm0.01)\times10^{-15} ~\rm s\,s^{-1}$ , the compact object was thought to be a contracting young white dwarf (WD). However, the X-ray luminosity producing by the wind accretion of massive WD is one order of magnitude smaller than the observed value. In this work, we propose an alternative model to account for the observed X-ray luminosity. If the WD was surrounded by a debris disk, the accretion from the debris disk can produce the observed X-ray luminosity and X-ray pulses. Based on the time-varying accretion rate model, the current mass of the debris disk is constrained to be $3.9\times10^{-6}~\rm M_{\odot}$. Comparing with the contraction of the WD, the accretion torque exerting by such a debris disk can only influence the spin evolution of the WD in the early stage. According to the accretion theory, the magnetic field of the WD is constrained to be $\sim (0.7-7)\times10^{4}$ G. The calculated conventional polar cap radius of the WD is larger than the observed emitting-zone radius, which probably originate from the existence of strong and small-scale local magnetic field in the polar cap surface. We expect that further multiband observations on this source can help us to confirm or rule out the existence of a debris disk. Introduction HD 49798/RX J0648.0−4418 is a peculiar binary including a hot subdwarf of O spectral type and a compact companion in an orbit with an orbital period P orb = 1.55 day (Thackeray 1970;Kudritzki & Simon 1978). When this source was discovered, it was the brightest hot subdwarf detected (Jaschek & Jaschek 1963), and is still one of the brightest hot subdwarf so far (Mereghetti et al. 2011). Bisscheroux et al. (1997) suggested that an intermediate-mass star that entered into a common envelope while on the early AGB stage is the most likely progenitor of HD 49798. Israel et al. (1995Israel et al. ( , 1997 had detected a 13.2 s period X-ray pulse, which probably originated from the spin period (P) of a magnetic compact object accreting from the weak wind of subdwarf, in which the wind loss rate is about 3 × 10 −9 M ⊙ yr −1 (Hamann 2010). XMM-Newton data from 2002 to 2014 derived a relatively low X-ray luminosity L X ≈ (1.3 ± 0.3) × 10 32 (d/520pc) 2 erg s −1 (d is the distance of the source, Mereghetti et al. 2016). Comparing the observed X-ray luminosity with the accretion luminosity estimated by the wind capture rate of the compact object, Israel et al. (1996) proposed that the X-ray pulsator should be a neutron star (NS) rather than a white dwarf (WD). However, a very soft blackbody of temperature (kT ∼ 30 eV), hard power-law tail, and large emitting area radius (R BB ∼ 32(d/520pc) km) derived from the blackbody spectral fit tended to a WD compact object (Mereghetti et al. 2009(Mereghetti et al. , 2011. Most recently, a relatively precise parallax obtained with Gaia EDR3 measured the distance of this source to be 521 ± 14 pc (Brown et al. 2020). Based on the data from XMM-Newton satellite, Mereghetti et al. (2009) obtained an X-ray mass function, and an orbital plane inclination angle (79 • to 84 • ) by detecting an eclipse in the X-ray light curve, and constrained the mass of X-ray pulsator to be 1.28 ± 0.05 M ⊙ , the mass of the hot subdwarf to be 1.50 ± 0.05 M ⊙ . 
Adopting the optically thick wind assumption, Wang & Han (2010) proposed that HD 49798/RX J0648.0C4418 could produce a type Ia supernova by the accretion of CO WD in the future. Recently, Liu et al. (2015) argued that the Xray pulsar companion of HD 49798 is a CO WD rather than a ONe WD by the binary population synthesis simulation. If HD 49798 accompanied by a NS, this source will appear as an ultraluminous X-ray source by the mass transfer triggered by Roche lobe overflow in the future, and eventually evolve into a wide intermediate-mass binary pulsar (Brooks et al. 2017). Wu & Wang (2019) found that the WD would experience an off center carbon burning and form a neutron star via Fe core collapse supernova if the compact companion of HD 49798 is a CO WD. However, this source is unlikely to form a neutron star from an accretion-induced-collapse process if the compact object is a ONe WD (Liu et al. 2018). It is still controversial that the X-ray pulsator companion of HD 49798 is NS or WD. Mereghetti et al. (2016) performed a phase-connected timing analysis for XMM-Newton, Swift, and ROSAT data spanning more than 20 yr, and obtained the spin-period derivative of the X-ray pulsator to bė P = (−2.15 ± 0.05) × 10 −15 s s −1 . Recently, the compact companion was reported to be still spinning up at a steady rate oḟ P = (−2.17 ± 0.01) × 10 −15 s s −1 according to the new XMM-Newton data (Mereghetti et al. 2021). In principle, an accretion process of compact object can result in a steady spin-up rate. The orbital separation of HD 49798 is about 8.0 R ⊙ (Mereghetti et al. 2009), the effective Roche-lobe radius of the donor star can be estimated to be ∼ 3.1 R ⊙ (Eggleton 1983). The radius of the subdwarf is 1.05 ± 0.06 R ⊙ (Krticka et al. 2019), hence it is impossible to transfer material by the Roche-lobe overflow. However, this source is a member of the few hot subdwarfs offering obvious evidence for a stellar wind (Hamann et al. 1981;Hamann 2010). The observed spin-up rate favors a NS accreting from the wind of the hot subdwarf. However, if the compact companion of HD 49798 is a NS, there still exist three puzzles in the NS model: first, the inferred low magnetic field (∼ 10 10 G) is unusual for a NS without millisecond period (Mereghetti et al. 2016); second, it is impossible to obtain such a large emitting area (R BB ∼ 32 km) fitted by the blackbody spectral; third, such a steady spin-up rate is difficult to interpret for a NS accreting from the stellar winds, in which variations of wind accretion rate captured by the NS would cause the changes of accretion torque, and influence its spin period (Mereghetti et al. 2021). If the compact companion of HD 49798 is a WD, it is very difficult to produce the observed spin-up rate by the stellar wind accretion, unless the disk accretion occurs (Mereghetti et al. 2016). Population synthesis simulations on hot subdwarf binaries also shown that the number of the systems hosting WDs much more than those with NSs (Yungelson & Tutukov 2005;Wu et al. 2018). Recently, Popov et al. (2018) provided a novel model, in which the contraction of a young WD with an age of ∼ 2 Myr can successfully explain the observed spin-up rate. However, the wind accretion model predicated an X-ray luminosity of L X = 1.3 × 10 31 erg s −1 (Krticka et al. 2019) 1 , which is one order of magnitude smaller than the observed value L X,obs ≈ (1.3 ± 0.3) × 10 32 erg s −1 (Mereghetti et al. 2016). Therefore, It still remains a puzzle for this peculiar X-ray source. 
In this work, we propose an alternative model invoking a debris disk to account for the observed X-ray luminosity. Debris disks have been reported around several isolated WDs (e.g., Li et al. 2017). Recently, the WD G29-38 was reported to be currently accreting planetary material from a debris disk according to X-ray observations with the Chandra X-ray Observatory (Cunningham et al. 2022). Although the compact companion of HD 49798 is in a binary system, we assume that it was surrounded by a debris disk similar to these isolated WDs. Debris disk accretion model The magnetic accretion process of the WD from a debris disk is tightly related to the magnetospheric (Alfvén) radius $r_{\rm m}$, at which the ram pressure of the infalling material is balanced by the magnetic pressure. The magnetospheric radius can be written as (Davidson & Ostriker 1973) $r_{\rm m} = \xi\left(\frac{\mu^{4}}{2GM\dot{M}^{2}}\right)^{1/7}$, where G is the gravitational constant, ξ a dimensionless parameter of the order of unity, M the WD mass, $\dot{M}$ the mass inflow rate at $r_{\rm m}$ in the debris disk, and $\mu = B_{\rm p}R^{3}/2$ ($B_{\rm p}$ is the surface dipole magnetic field) the dipolar magnetic momentum of the WD. Taking ξ = 0.52 for the disk accretion case (Ghosh & Lamb 1979) and inserting some typical parameters, we obtain $r_{\rm m} = 1.2\times10^{9}\,\dot{M}_{14}^{-2/7}M_{1.28}^{-1/7}\mu_{30}^{4/7}$ cm, where $\dot{M}_{14}$ is in units of $10^{14}$ g s$^{-1}$, $M_{1.28}$ in units of 1.28 $M_{\odot}$, and $\mu_{30}$ in units of $10^{30}$ G cm$^{3}$. [Footnote 1: It is worth emphasizing that the wind accretion rate strongly depends on the wind velocity in the vicinity of the accreting WD (Krticka et al. 2019). Actually, the effect of ionization by the X-ray flux might decrease the wind velocity, thus increasing the accretion rate (Sander et al. 2018; Krticka et al. 2018).] The observed X-ray luminosity of HD 49798 is $L_{\rm X} \approx (1.3\pm0.3)\times10^{32}\,(d/520\,{\rm pc})^{2}$ erg s$^{-1}$ (Mereghetti et al. 2016). Ignoring the X-ray luminosity produced by stellar wind accretion, the accretion rate (i.e. the mass inflow rate at the inner edge of the debris disk) of the accreting WD in HD 49798 can be estimated to be $\dot{M} \simeq L_{\rm X}R/(GM)$, where R is the WD radius. In this work, we take R = 3000 km. Could a debris disk around the WD provide such an accretion rate? After a debris disk forms, the accretion rate should decrease self-similarly in accordance with $\dot{M} \propto t^{-\alpha}$ due to the influence of viscous processes (Cannizzo et al. 1990). In our debris disk model, an evolutionary law of the accretion rate, the same as in Chatterjee et al. (2000), is adopted: $\dot{M}(t) = \dot{M}_{0}$ for $t < T$ and $\dot{M}(t) = \dot{M}_{0}\,(t/T)^{-\alpha}$ for $t \ge T$, where T is of the order of the dynamical timescale in the inner regions of the debris disk, and $\dot{M}_{0}$ is a constant accretion rate. The initial mass of the disk can be written as $M_{\rm d,i} = \int_{0}^{\infty}\dot{M}\,{\rm d}t = \frac{\alpha}{\alpha-1}\dot{M}_{0}T$. Therefore, we have $\dot{M}_{0} = \frac{\alpha-1}{\alpha}\,\frac{M_{\rm d,i}}{T}$ (Chatterjee et al. 2000). In the following calculations, we take T = 1 s, and α = 19/16, appropriate when the opacity is dominated by electron scattering (Cannizzo et al. 1990). To account for the observed spin-up rate, Popov et al. (2018) proposed that the compact companion of HD 49798 is a contracting WD with a cooling age of ∼ 2 Myr. Similar to G29-38, we also assume that RX J0648.0−4418 has experienced accretion for 10% of the cooling age (Jura 2003b), i.e. the age of the debris disk is $t_{0} = 2\times10^{5}$ yr. To explain the observed X-ray luminosity, the accretion rate from the debris disk should be $\dot{M} = 2.3\times10^{14}$ g s$^{-1}$ when $t = t_{0} = 2\times10^{5}$ yr; hence the evolution of the accretion rate for $t \ge T$ satisfies $\dot{M} = 2.3\times10^{14}\,\left(\frac{t}{2\times10^{5}\,{\rm yr}}\right)^{-19/16}$ g s$^{-1}$.
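A minimal back-of-the-envelope check (not from the original paper) of the two estimates just quoted, using only the values given in the text: the accretion rate required to power the observed X-ray luminosity, and the magnetospheric radius for the reference normalisations.

```python
# Back-of-envelope check of the accretion-rate and magnetospheric-radius estimates (cgs units).
G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33         # solar mass [g]
M = 1.28 * Msun         # WD mass quoted in the text
R = 3.0e8               # WD radius, 3000 km
L_X = 1.3e32            # observed X-ray luminosity [erg/s]

# Accretion rate needed to power L_X via Mdot = L_X * R / (G * M).
Mdot = L_X * R / (G * M)
print(f"required Mdot ~ {Mdot:.2e} g/s   (text: 2.3e14 g/s)")

# Magnetospheric radius r_m = xi * (mu^4 / (2 G M Mdot^2))**(1/7), evaluated at the
# reference values Mdot = 1e14 g/s and mu = 1e30 G cm^3 so the prefactor can be compared.
xi, mu, Mdot_ref = 0.52, 1.0e30, 1.0e14
r_m = xi * (mu**4 / (2 * G * M * Mdot_ref**2)) ** (1 / 7)
print(f"r_m(Mdot_14 = mu_30 = 1) ~ {r_m:.2e} cm   (text: 1.2e9 cm)")
```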
It yields $\dot{M}_{0} = 3.6\times10^{29}$ g s$^{-1}$ from the relations above, and the initial mass of the debris disk is estimated to be $M_{\rm d,i} \approx 0.001\,M_{\odot}$. The current mass of the debris disk is derived to be $M_{\rm d} = \int_{t_{0}}^{\infty}\dot{M}\,{\rm d}t = \frac{\dot{M}(t_{0})\,t_{0}}{\alpha-1} \approx 3.9\times10^{-6}\,M_{\odot}$. If the gas-to-dust ratio is 100 (Jura 2003a), the current dust mass of the debris disk is $M_{\rm dust} \approx 3.9\times10^{-8}\,M_{\odot}$. For reference, we also calculate the debris-disk mass of G29-38 using this model. The current accretion rate of G29-38 is about $1.63\times10^{9}$ g s$^{-1}$ (Cunningham et al. 2022), and it has been actively accreting for $4\times10^{7}$ yr, i.e. $t_{0} \approx 40$ Myr (Jura 2003b). Therefore, the accretion rate from the debris disk is $\dot{M} = 1.63\times10^{9}\,(t/40\,{\rm Myr})^{-19/16}$ g s$^{-1}$. Following the same estimate, the current debris-disk mass can be estimated to be $5.5\times10^{-9}\,M_{\odot}$, which is not in contradiction with the minimum disk mass of $\sim10^{-10}\,M_{\odot}$ estimated by Jura (2003b). Based on three-dimensional radiation-hydrodynamics stellar-atmosphere models, Cunningham et al. (2021) roughly estimated the debris-disk lifetimes around WDs to be log(t/yr) = 6.1 ± 1.4. Therefore, the debris-disk age of G29-38 is probably overestimated. If so, the current disk mass predicted by our model will correspondingly decline. Taking $M = 1.28\,M_{\odot}$ and R = 3000 km, the X-ray luminosity of the WD accreting from the debris disk can be derived from $L_{\rm X} = GM\dot{M}/R$ and the accretion-rate law above. Figure 1 plots the evolution of the X-ray luminosity (we ignore the radius change of the WD in the contraction stage). It is clear that a WD accreting from a nascent debris disk can appear as a luminous X-ray source ($\sim10^{38}$ erg s$^{-1}$) with a lifetime of up to 2 yr, which is similar to a black hole accreting from a fallback disk as an ultraluminous X-ray source (Li 2003). However, its maximum X-ray luminosity can hardly exceed $\sim10^{39}$ erg s$^{-1}$ due to the Eddington luminosity $L_{\rm Edd} = 1.9\times10^{38}$ erg s$^{-1}$ (assuming the accreted material is hydrogen; King et al. 2019). With the decline of the mass inflow rate, the magnetospheric radius will first exceed the corotation radius, and the accreting WD will transition to a low X-ray state that lacks X-ray pulsations (Campana et al. 2016). Subsequently, the magnetospheric radius will exceed the light cylinder radius ($R_{\rm lc} = cP/2\pi$), and the system may produce radio emission (Illarionov & Sunyaev 1975; Campana et al. 1998). According to the critical luminosity at which WDs transition from the accretion to the propeller regime, determined by Campana et al. (2018), the accreting WD of HD 49798 will enter the propeller phase if the accretion luminosity declines to the limiting luminosity $L_{\rm lim} = 0.7^{+0.4}_{-0.3}\times10^{32}$ erg s$^{-1}$, which depends on the dipolar magnetic momentum of the accreting WD (Footnote 2). Similar to neutron stars, the spin evolution of the WD depends on the interaction between magnetic field lines and disk plasma, which can give rise to a continuous exchange of angular momentum between the WD and the disk. If the magnetospheric radius is smaller than the corotation radius (at which the Keplerian angular velocity equals the spin angular velocity of the WD), $r_{\rm co} = \left(\frac{GMP^{2}}{4\pi^{2}}\right)^{1/3} = 9.1\times10^{8}\,M_{1.28}^{1/3}$ cm, the WD accretes the specific angular momentum of material at $r_{\rm m}$. The maximum accretion torque received by the WD is $T_{\rm acc} = \dot{M}\sqrt{GMr_{\rm co}}$. Therefore, the maximum spin-up rate of the WD due to accretion from a debris disk can be expressed as $\dot{P} = -\frac{P^{2}\,T_{\rm acc}}{2\pi I} \approx -1.1\times10^{-17}\,P_{13.2}^{2}\,\dot{M}_{14}\,M_{1.28}^{2/3}\,I_{50}^{-1}$ s s$^{-1}$, where $I_{50}$ is the moment of inertia of the WD in units of $10^{50}$ g cm$^{2}$ and $P_{13.2} = P/13.2$ s (a numerical check of these estimates is sketched below).
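Continuing in the same back-of-the-envelope spirit (again not the authors' code, just the arithmetic implied by the quantities quoted above), the sketch below reproduces the disk-mass estimates, the corotation radius, and the maximum disk-driven spin-up rate.

```python
import math

# cgs constants and the values quoted in the text.
G, Msun = 6.674e-8, 1.989e33
M, R, P = 1.28 * Msun, 3.0e8, 13.2          # WD mass, radius, spin period [s]
alpha, T = 19.0 / 16.0, 1.0                 # decay index and inner dynamical timescale [s]
yr = 3.156e7
t0 = 2.0e5 * yr                              # assumed debris-disk age
Mdot_t0 = 2.3e14                             # accretion rate required at t = t0 [g/s]

# Mdot_0 from the power-law decay, and the initial / current disk masses.
Mdot_0 = Mdot_t0 * (t0 / T) ** alpha
M_disk_init = alpha / (alpha - 1.0) * Mdot_0 * T
M_disk_now = Mdot_t0 * t0 / (alpha - 1.0)    # integral of Mdot from t0 to infinity
print(f"Mdot_0      ~ {Mdot_0:.1e} g/s       (text: 3.6e29)")
print(f"M_disk,init ~ {M_disk_init / Msun:.1e} Msun (text: ~0.001)")
print(f"M_disk,now  ~ {M_disk_now / Msun:.1e} Msun (text: 3.9e-6)")

# Corotation radius and maximum spin-up rate for I = 1e50 g cm^2 at Mdot = 2.3e14 g/s;
# the result sits roughly two orders of magnitude below the observed -2.17e-15 s/s.
I = 1.0e50
r_co = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
Pdot_max = -P**2 * Mdot_t0 * math.sqrt(G * M * r_co) / (2.0 * math.pi * I)
print(f"r_co        ~ {r_co:.2e} cm          (text: 9.1e8)")
print(f"Pdot_max    ~ {Pdot_max:.1e} s/s")
```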
For some typical parameters, the maximum spin-up rate produced by accretion from the debris disk is 1-2 orders of magnitude lower than the observed value. [Footnote 2: We take $\mu_{30} = 0.8$; see also equation (7) of Campana et al. (2018).] [Fig. 1 caption: X-ray luminosity produced by the accretion from a debris disk as a function of the debris-disk age t, for $M = 1.28\,M_{\odot}$ and R = 3000 km; the horizontal dashed line represents the observed X-ray luminosity range $L_{\rm X} = (1.0-1.6)\times10^{32}$ erg s$^{-1}$.] Therefore, the accretion from a debris disk can only account for the observed X-ray luminosity of HD 49798/RX J0648.0−4418. Figure 2 shows the evolution of the spin-period derivative produced by the accretion from the debris disk. According to our assumption, the debris disk should exist when the WD age is in the range of 1.8 Myr to 2.0 Myr. Comparing with Figure 2 in Popov et al. (2018), the $\dot{P}$ produced by the debris disk is smaller than that resulting from the WD contraction for debris-disk ages $t = 5000$ to $2\times10^{5}$ yr. However, a young debris disk (with an age of less than 5000 yr) plays an important role in influencing the spin evolution of the WD. To support steady accretion, the inner radius of the debris disk (i.e. the magnetospheric radius $r_{\rm m}$) should satisfy $R < r_{\rm m} \le r_{\rm co}$. Therefore, the surface dipolar magnetic field of the WD is in the range of $(0.7-7)\times10^{4}$ G. For a magnetic WD, the accretion flow along the magnetic field lines would form an accretion column inside the polar cap (Shapiro & Teukolsky 1983). The polar cap opening angle of the last open field line is (Ruderman & Sutherland 1975) $\sin\theta_{\rm dp} = \sqrt{R/R_{\rm LC}}$, where $R_{\rm LC} = cP/2\pi$ is the radius of the light cylinder. So we can estimate the polar cap radius of the WD in HD 49798 to be $R_{\rm dp} = R\sin\theta_{\rm dp} \approx 200$ km. This radius is six times as large as the observed radius of the emitting area, $R_{\rm BB} \approx 32\,(d/520\,{\rm pc})$ km, at a distance of 520 pc (Mereghetti et al. 2016; Brown et al. 2020). Although the estimated polar cap radius is larger than the radius of the emitting zone derived from the blackbody spectral fit, it has already been noted in the neutron-star field that the radius $R_{\rm dp}$ of the conventional polar cap can be ten times larger than that of the actual radiation area (Hermsen et al. 2013; Szary et al. 2017; Geppert 2017). Strong, small-scale local magnetic field structures at the polar cap surface are thought to be responsible for the small actual radius $R_{\rm pc}$ of the polar cap (Szary et al. 2015; Sznajder & Geppert 2020). According to the magnetic flux conservation law, if the magnetic field at the actual polar cap of the WD is $B_{\rm s} \simeq 36B_{\rm p} = (2.5-25)\times10^{5}$ G, the small radius of the emitting zone can be easily understood. Summary and discussion Stellar wind accretion from the hot subdwarf is insufficient to produce the observed X-ray luminosity of HD 49798 (Krticka et al. 2019). In this work, we propose an alternative model to account for the observed X-ray luminosity of HD 49798. If the compact companion of HD 49798 is a WD surrounded by a debris disk, then, through the interaction between the magnetic field and the debris disk, the accretion flow along the magnetic field lines would produce an accretion column on the polar cap of the WD, thereby naturally resulting in the observed X-ray pulses. Based on the model of time-varying accretion from a debris disk given by Chatterjee et al. (2000) and the observed X-ray luminosity, the initial mass and the current mass of the debris disk are constrained to be $\sim0.001\,M_{\odot}$ and $3.9\times10^{-6}\,M_{\odot}$, respectively.
Based on the accretion theory, the surface magnetic field of the WD is constrained to be $B_{\rm p} = (0.7-7)\times10^{4}$ G, while the small polar cap zone requires a relatively strong local magnetic field ($B_{\rm s} = (2.5-25)\times10^{5}$ G) to account for the small emitting area. Compared with the contraction of the WD, the accretion torque exerted by the proposed debris disk can only influence the spin evolution of the WD when the debris-disk age is less than 5000 yr. Therefore, the debris disk cannot spin the accreting WD up to the observed rate in the current stage, which should instead arise from a change of the moment of inertia of the WD at the contraction stage (Popov et al. 2018). The blackbody spectral fit for HD 49798 inferred a radius of the emitting area of $R_{\rm BB} \approx 32\,(d/520\,{\rm pc})$ km (Mereghetti et al. 2016). This emitting area should be the real polar cap zone resulting from the accretion column on the surface of the WD. However, our calculated polar cap radius is 200 km. The difference between the real polar cap zone and the theoretical value probably originates from strong and small-scale local magnetic field structures at the polar cap surface (Szary et al. 2015; Sznajder & Geppert 2020). The debris disks around some isolated WDs probably originate from the tidal disruption of either comets (Debes & Sigurdsson 2002) or asteroids (Jura 2003b). Our scenario predicts a heavy debris disk with a mass of $\sim10^{-6}\,M_{\odot}$, which is four orders of magnitude higher than that in the WD G29-38 (Jura 2003b). This mass discrepancy should arise from the different origins of the debris disks. Since HD 49798 may have experienced a common envelope evolutionary phase (Bisscheroux et al. 1997), the debris disk around the WD may originate from the engulfment of the progenitor envelope of the hot subdwarf. For example, the engulfment of a low mass companion star of HD 233517 when it evolved into a red giant resulted in a heavy debris disk of $\sim0.01\,M_{\odot}$ (Jura 2003a). However, it is challenging to confirm the debris disk by detecting an infrared excess from RX J0648.0−4418, as was done for G29-38. First, the distance of RX J0648.0−4418 is larger than that of G29-38 by a factor of 40; second, the detectable radiation flux from the debris disk should be small due to the large orbital plane inclination angle (79 • to 84 • , Mereghetti et al. 2009). On the other hand, the existence of a debris disk would also be confirmed if a low X-ray state or a spin-down rate of RX J0648.0−4418 is detected in the future. We expect that further multiband observations on this source can help us to confirm or rule out the existence of a debris disk.
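To close the loop on the magnetic-field and polar-cap estimates summarised above, here is a small sketch (not from the paper; it simply inverts the expressions quoted in the text, so small offsets from the quoted range reflect rounding) that recovers the dipole-field range implied by $R < r_{\rm m} \le r_{\rm co}$ and the conventional polar-cap radius.

```python
import math

# cgs values quoted in the text.
G, Msun, c = 6.674e-8, 1.989e33, 3.0e10
M, R, P = 1.28 * Msun, 3.0e8, 13.2
Mdot, xi = 2.3e14, 0.52
r_co = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

def dipole_field_for_rm(r_m):
    """Invert r_m = xi * (mu^4 / (2 G M Mdot^2))^(1/7) and use mu = B_p R^3 / 2."""
    mu = ((r_m / xi) ** 7 * 2.0 * G * M * Mdot**2) ** 0.25
    return 2.0 * mu / R**3

# Steady accretion requires R < r_m <= r_co, which brackets the surface dipole field.
B_low, B_high = dipole_field_for_rm(R), dipole_field_for_rm(r_co)
print(f"B_p range ~ {B_low:.1e} - {B_high:.1e} G   (text: (0.7-7)e4 G)")

# Conventional polar-cap radius R_dp = R * sqrt(R / R_LC), with R_LC = c P / (2 pi).
R_LC = c * P / (2.0 * math.pi)
R_dp = R * math.sqrt(R / R_LC)
print(f"polar-cap radius ~ {R_dp / 1e5:.0f} km      (text: ~200 km vs. R_BB ~ 32 km)")
```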
2022-05-23T01:15:42.294Z
2022-05-20T00:00:00.000
{ "year": 2022, "sha1": "0d699ab840ef294ac127f088f349801bf8b1e8ad", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0d699ab840ef294ac127f088f349801bf8b1e8ad", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
51877626
pes2o/s2orc
v3-fos-license
MEETING BOT: Reinforcement Learning for Dialogue Based Meeting Scheduling In this paper we present Meeting Bot, a reinforcement learning based conversational system that interacts with multiple users to schedule meetings. The system is able to interpret user utterences and map them to preferred time slots, which are then fed to a reinforcement learning (RL) system with the goal of converging on an agreeable time slot. The RL system is able to adapt to user preferences and environmental changes in meeting arrival rate while still scheduling effectively. Learning is performed via policy gradient with exploration, by utilizing an MLP as an approximator of the policy function. Results demonstrate that the system outperforms standard scheduling algorithms in terms of overall scheduling efficiency. Additionally, the system is able to adapt its strategy to situations when users consistently reject or accept meetings in certain slots (such as Friday afternoon versus Thursday morning), or when the meeting is called by members who are at a more senior designation. Introduction One of the most frequently performed tasks in an organization is the scheduling of meetings between employees across different designations and timezones. This scheduling of meetings is frequently performed over email, or by human assistants, often involving several back and forth negotiations over the actual time slot of the meeting. In this paper we present a learning system whereby a user is able to converse with a virtual assistant to convey his/her desire to initiate a meeting with a set of participants at some preferred range of timeslots. The virtual assistant then interacts with the meeting participants (including the initiator if necessary) via dialogue to converge on an agreeable timeslot. The system attempts to schedule the meeting while trying to optimize two different objectives; the first objective being to schedule all the meetings in the system efficiently i.e. maximizing the number of meetings scheduled, and the second being to minimize the number of interactions with the meeting participants, to avoid annoying them or wasting their time. The meeting is considered scheduled when all participants agree on a slot(s). There are several natural language and learning components to such a scheduling system. Initially the virtual as-Copyright c 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. sistant has to extract the initiators desired timeslots from his/her utterances, in addition to the names of the meeting participants. While the names of the participants can be extracted using a standard Named Entity Recognition (NER) engine like Stanford NER (Finkel, Grenager, and Manning 2005), the extraction of the correct timeslots can be extremely challenging as the initiator may express the desired time via vague natural language utterances such as 'please schedule a meeting for Thursday morning' or 'Friday or Monday is preferred but avoid scheduling in the morning'. We propose a multi-label learning approach to map initiator utterances to multiple possible time slots. The approach utilizes a Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber 1997) based multi-label learning (Tsoumakas and Katakis 2006) model that utilizes independent loss functions over the output units and successfully learns to map initiator utterances to correct timeslots with high accuracy. 
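A minimal sketch of the kind of multi-label slot classifier just described, assuming a Keras-style implementation: the vocabulary size and sequence length are placeholders, 40 slots correspond to the 5 days x 8 slots used later in the paper, and binary cross-entropy is one natural choice for independent sigmoid outputs (the paper specifies independent losses and the 0.5 threshold but not its exact hyperparameters).

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, NUM_SLOTS = 500, 20, 40   # placeholder sizes; 40 = 5 days x 8 slots

# One-hot word vectors are fed directly to the LSTM; each of the 40 output units is an
# independent sigmoid, so the per-slot losses are independent (multi-label setting).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(MAX_LEN, VOCAB_SIZE)),
    tf.keras.layers.Dense(NUM_SLOTS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])

# After training on templated (utterance, slot-vector) pairs:
#   probs = model.predict(one_hot_sequences)      # shape (batch, 40)
#   predicted_slots = (probs > 0.5).astype(int)   # threshold at 0.5, as in the text
```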
Once the desired set of timeslots is determined, the virtual assistant employs policy gradient based reinforcement learning (Sutton 1998) to decide on the correct slots to schedule the meeting. The reinforcement learning agent is aware of all the meetings waiting to be scheduled in the system and the current availability of different users in the system for all the slots over the coming week. Based on the meeting traffic and the prior experience of scheduling meetings with different users in the system, the agent attempts to choose slots in which it estimates the maximum probability of the participants agreeing. The state space for the reinforcement learner comprises the information about 1) the current occupancy of the slots, 2) the meetings waiting in the queue, 3) the initiator IDs of all the meetings, and 4) the designations of all the participants of the meeting. The environment automatically decides upon a meeting and a slot based on a bin packing scheduling heuristic. At each step, the agent has to decide between two possible actions i.e. whether or not to attempt to schedule the selected meeting at the chosen slot by issuing a request to the meeting participants. If the agent decides to schedule, a dialogue is initiated wherein the agent must request the participants to attend the meeting at the chosen slot. The reply is processed to determine whether the user agreed and the resulting experiences are rewarded with both immediate and delayed rewards to balance the tradeoff between scheduling effectively avoiding unnecessary meeting requests. Experiments are performed to demonstrate the scheduling efficiency of the system and the ability of the system to modify its policy based on differing meeting arrival rates. Additionally, we examine the adaptability of the system to user preferences for certain slots, and to the designations of the meeting initiators. Results are promising and demonstrate that the system learns a robust adaptive policy for scheduling meetings via dialogue while minimizing the number of meeting requests to users. The primary contributions of this paper are as follows: 1. A recurrent multi label classification model to map natural language user utterances to desired time slots. 2. A policy gradient based reinforcement learning framework with global reward to learn a versatile adaptive policy for scheduling meetings. 3. The use of both immediate and delayed rewards to learn user based preferences for time slots and initiator designations. 4. To the best of our knowledge, this is the first attempt at utilizing reinforcement learning to schedule meetings based on user preferences. Related Work This paper draws upon several ideas from work in planning and scheduling, deep learning for natural language understanding, and reinforcement learning. Reinforcement Learning for Scheduling Reinforcement learning (RL) approaches involve an agent in an environment learning a policy for choosing appropriate actions in different states so as to maximize the agents environmental reward. More formally, given a state s t , the policy determines which action a t the agent should perform to transition to state s t+1 and receive reward r t from the environment. The policy is defined as a probability distribution over the actions π(s, a) and it is hard to learn the correct action for each state by experience simply because the state space for most real world problems is exponentially large. 
Hence, function approximators such as deep neural networks have been used for approximating the policy function, allowing the learnt policy to generalize across states. Thus, the problem reduces to finding the correct parameters θ for the chosen policy function approximator (in this case a deep feedforward neural network). Despite the recent promise shown by deep reinforcement learners in complex gaming environments like Go (Silver et al. 2016) and for increasing the energy efficiency of cooling systems (Evans and Gao 2017), the use of RL for scheduling problems has been limited (Zhang and Dietterich 1995; Mao et al. 2016). We contend that utilizing reinforcement learning has advantages over traditional heuristic based solutions to such problems because: 1. RL based learners can utilize knowledge from past schedules and learn patterns that lead to good solutions; 2. RL based learners can adapt to changing conditions in the scheduling environment, such as user preferences and novel constraints. In this paper we utilize the method of policy gradients to learn a policy function that maximizes future reward. The idea is that we attempt to move the policy towards state-actions in proportion to the expected cumulative reward obtained by following the policy thereafter. Assuming that future rewards are discounted by a discount factor γ, the net future discounted reward from time t is given by $\sum_{k=0}^{\infty}\gamma^{k} r_{t+k}$. The policy objective is to maximize the expected cumulative discounted reward. To achieve this, policy gradient methods evaluate the gradient of the policy objective, given by (Williams 1992; Sutton 1998): $\nabla_{\theta} J(\theta) = \mathbb{E}_{\pi_{\theta}}\left[\nabla_{\theta}\log\pi_{\theta}(s,a)\,Q^{\pi_{\theta}}(s,a)\right]$ (1). In order to evaluate the gradient, a popular approach is to sample the reward obtained from several trajectories or episodes where the agent follows the given policy, and use the empirical cumulative discounted reward $r_t$ as an unbiased estimator for the value function Q. Gradient descent then yields the updated policy parameters: $\theta \leftarrow \theta + \eta\,\nabla_{\theta}\log\pi_{\theta}(s_t, a_t)\,r_t$, where η is the learning rate. System Architecture There are two dialogue based interactions that take place in our system: • A dialogue with the initiator to obtain the details about the meeting, such as the names/ids of the participants and the proposed meeting time. • An RL based dialogue with the different participants to request their availability for the time slot in question. Dialogue for Meeting Initiation Here the sentence entered by the user requesting a meeting is fed into an LSTM model that is trained to map sentences to appropriate slots. For instance the sentence "Please schedule a meeting with Gautam for Wednesday afternoon" is mapped to the slots corresponding to Wednesday afternoon. Since one sentence can refer to multiple slots, the problem becomes a multi-label classification problem (Tsoumakas and Katakis 2006). The training data is generated by using several template sentences based on regular expressions, with different times of the day and with many linguistic variations, along with the target slot mappings. Each word of the sequence is represented as a one hot encoding and the sequence of one hot word vectors is fed directly to the LSTM. The network utilized for solving the multi-label classification problem uses LSTM units and an independent sigmoid output over each output timeslot. With the sigmoid activation function, any value greater than 0.5 is considered as 1 and any value less than 0.5 is considered as 0. The dataset is created with different possible template phrases for conveying time information in English before using these to train the model.
A dataset consisting of 1056 samples. This was divided into training, validation and testing with a ratio 0.6:0.2:0.2 1 . Some sample phrases used to generate the data: early morning, morning, late morning, afternoon, early afternoon, late afternoon, after lunch, before lunch, evening, early evening, late evening Reinforcement Learning for Scheduling We design a policy gradient based reinforcement learning system that interacts with the different users to determine an agreeable slot for all the participants. The state space of the RL system encodes 1) the participants designation as a one hot encoding, 2) the current occupancy of the slots, 3) a waiting queue of the next seven meetings and the duration (number of slots) of each meeting waiting in the queue. The environment continuously introduces 1,2,4 and 6 slot meetings into the backlog and as meetings get scheduled meetings are popped from the backlog into the waiting queue at every timestep. Formulation As shown in figure 3, the meeting scheduling system consists of 40 slots which can be mapped to a week (5 days x 8 slots), two row matrices to indicate the slot to schedule a meeting and current day of the week. Initially, the meetings are sent into a backlog queue as they arrive and then they are let into waiting queue. The entire system except the backlog queue is given as input state to the RL agent. Initially, the waiting queue will be filled with meeting requests. The environment selects a slot so that the meeting at the first entry of the waiting queue is schedulable and the slot indicator is placed at that position. Then the RL-agent makes a binary decision about whether or not to schedule the meeting at that position. Accordingly, the meeting will either be requested for all the participants or will be pushed back to backlog. Once the decision is made on all the meetings in the waiting queue, the current day indicator is moved by 8 slots in a circular fashion and the 8 slots are freed up for any meetings still waiting to get scheduled. Basically, the system schedules meetings for the next 5 days of the week and clears the slots of the past day. The fresh meetings then arrive into the waiting queue and scheduling them is treated as one step for the learning environment. Algorithm 1: Training the RL-Agent for each episode do for each timestep do The dataset is available at https://github.com/ vishwa15/timephrase_data.git. Persistent Exploration As with most reinforcement learning approaches, a balance has to be maintained between exploration of new trajectories and exploitation of the knowledge of the state action space learned so far. In our formulation we allow for exploration by allowing the agent to choose a random action with a probability of 0.1 at every step. In general after training an RL-agent, the policy is usually fixed and the agent is then deployed with the learnt policy. However, in our application, the agent is allowed to continuously learn and explore with a constant probability. The reason for this is that the environment may change in unforseeable ways and the agent must be able to adapt continuously. The motivations for this will become clearer in the experiments section. Experiments and Results The performance of the RL-agent is evaluated based on the number of meetings scheduled per episode, as compared to the optimal policy (which in this case corresponds to shortest job first), a first come first serve policy, and a random policy which schedules meetings in a random order. 
A meeting can request for 1, 2, 4, or 6 slots and these are generated with a probability (0.4, 0.2, 0.2, 0.2) respectively (this is not necessary but these probabilities allow us to see some interesting behaviours of the RL agent). The meeting requests are let into the waiting queue from backlog based on the current arrival rate. The state space is fed as input to a neural network which has two hidden layers of length 128x32. The degree of fullness of the backlog queue is represented with a backlog vector of size 5. We utilize binary crossentropy as our loss function, Adam as our optimizer and a softmax output activation function. Note that learning is taking place continuously as mentioned in the previous Section. Also, the RL agent with same neural architecture can optimise itself to perform different tasks for a given reward function and that the state space remains the same while achieving different objectives. Objective 1: Scheduling maximum no. of meetings with delayed reward A benchmark is calculated at the begining of each timestep which calculates the ratio of free slots to average slots per meeting. The RL-agent makes a decision on all the meetings in the waiting queue and the (state, action) pair is recorded. The number of meetings scheduled by the RL-agent in that timestep is noted. If this number is greater than or equal to the benchmark calculated then all the (state, action) pairs are given a +1 reward else the pairs receive a -1 reward. The successful experiences/timesteps which were rewarded positively are stored in a replay buffer of size 20 and used for training the RL-agent at the end of each episode. The old experiences which are more than 20 episodes old are popped out of the replay buffer to make sure the RL-agent is learning new experiences. The number of meetings scheduled by the RL agent over multiple timesteps is recorded and the average is calculated. With the same conditions, the average number of meetings scheduled using different policies are calculated for comparision. As shown in the figure 5 initially the RL-agent schedules the meetings as they come and the number of meetings scheduled is less than the benchmark calculated. So the agent receives a -ve reward and modifies its action to learn a better policy to accommodate more meetings in the available slots. Its clear from the figure that the RL-agent tries to learn the optimal policy in order to get the maximum positive rewards. When the available slots are fewer than those required, the RL-agent pushes the heavy (4 and 6 slot) meetings into the backlog and all the heavy meetings end up getting scheduled at the end. The episode is run until all the meetings are scheduled but for measuring the performance of the model, the timesteps which have new meetings are considered. The same experiment is conducted with different meeting arrival loads. As shown in the Figure 7, when the load is just 30 to 70% of the scheduling capacity, the number of meetings that get rejected are very low. When the load is increased to 140-160% there is a sharp increase in the number of 4-slot and 6-slot meetings getting rejected and this The Reinforcement learning formulation and reward function is kept constant but the meeting arrival rate is suddenly changed. As shown in the Figure 8, the load of meeting arrival was kept at 190 -210% of the scheduling capacity at the beginning and at episode-1000 the arrival rate was decreased to 30 -70 %. 
Soon within a couple of episodes, the RL-agent learns to accept all types of meeting requests since the load is lighter. Once again at episode-2000, the arrival rate was increased and the RL-agent takes a couple of more episodes to learn a better policy to suit the environment. Objective 3: Avoid uncomfortable slots and adapt to changing preference with immediate reward In the experiments so far, the RL-agent picks a vacant slot where a meeting can be scheduled and the participants will be requested for that slot. However, all the participants need not necessarily agree on a meeting slot if they are busy or the slot is otherwise inconvenient (Monday mornings for example may be busy). In this experiment, we will program the environment to make a few slots uncomfortable for the users and when the RL-agent requests a meeting in those slots, the participants refuse. In this case, an immediate negative reward (-1) is given to the agent for asking to schedule a meeting at an unsuitable slot. Conversely, when a participant accepts the meeting request, the agent receives an immediate positive reward (+1). Initially slots -6, 15, 27, 36 were made uncomfortable and the RL-agent was given an immediate reward whenever these slots were requested. As shown in Figure 10, the no. of requests that were made on those uncomfortable slots reduces drastically as the agent adapts and learns to avoid them. Within the same experimental setup, we change the uncomfortable slots from slots 6, 15, 27, 36 to slots 3, 9. We can see in Figure 11, the RL-agent can adapt to this changing environment Figure 10: Average no. of asks for all slots when slots 5, 14, 26, 35 was considered uncomfortable by people Objective 4: Adapt to changing preference when a senior designated person asks for a meeting Participants may have their own preference for slots but when a person with a senior designation asks for a meeting, participants usually agree regardless. This behavior was implemented in the environment and the RL-agent is able to adapt to this behavior. Meeting requests consist of three fields: participants, initiator ID and slot type. The RL-agent will pick up a signal from the initiator ID and when a senior designated person requests for a meeting slot, the agent goes ahead and requests all the participants for the slot since the environment is programmed for them to agree to that slot Figure 11: Average no. of asks for all slots when uncomfortability was changed from slots 5, 14, 26, 35 to slots 2, 9 if the initiator has a senior designation. As shown in Figure 12,slots 5,14,26,35 were made uncomfortable for the participants and yet the RL-agent utilizes these slots when a senior designated person requests a meeting. Figure 12: Average no. of asks for all slots when a person with higher designation requests for a meeting Conclusion In this paper, we have designed the architecture of a meeting bot which can schedule meetings through dialogue. A multi-label classification model to convert english phrases which have time information into slots was employed with seperate output loss functions for each time slot. Due to time constraints, scheduling the maximum number of meetings is important and a model using reinforcement learning is trained to schedule them efficiently. The model can adapt to new situations with varying meeting arrival rates and the performance of the model is compared with standard schedulers. 
We have also shown that the RL-agent can adapt to user preferences and schedule meetings accordingly and can also change its policy when the meeting is called by members who are at a more senior designation. This adaptive behavior cannot be replicated via a fixed scheduling policy like first come first serve or shortest job first.
Investigation of Sewage and Drinking Water in Major Healthcare Centres for Bacterial and Viral Pathogens
Water is a major source of microbes, including pathogens that can cause critical pathological conditions and outbreaks of epidemics. Due to the lack of a proper medical waste-management system in Peshawar, most of the waste is disposed of near sewage lines which run parallel to the drinking water supply, increasing the chances of water contamination. This study was undertaken to examine bacterial and viral pathogens in fresh and waste water in major health care units. Conventional culturing techniques were used to identify bacterial pathogens, followed by biochemical analysis, whereas viral pathogens were detected by Polymerase Chain Reaction (PCR). Analysis of the sewage and drinking water supply in major health care facilities of Peshawar city indicated that Klebsiella pneumoniae and Staphylococcus aureus were present in all water samples, whereas bacteria posing serious health risks, including Mycobacterium tuberculosis, were also detected in some regions. Two viral pathogens, Hepatitis C virus (HCV) and Hepatitis B virus (HBV), were found in open sewage water of Khyber Teaching Hospital and Dabgari Garden (DG). The presence of these pathogens in water is a serious threat to public health and the environment and calls for immediate action to enforce proper medical waste management to eliminate the risks to human health.
Introduction Water pollution is one of the most pervasive problems afflicting people throughout the world. Waterborne illness and the multiple epidemics related to the consumption of contaminated or inadequately treated water are a global public health concern. As a developing country, Pakistan has a poor water treatment system and is ranked 80th among 122 nations in terms of providing good quality drinking water [1]. However, the scenario is even worse in many cities of Pakistan, where drinking water is unsafe for direct human consumption and severely contaminated with bacterial and viral pathogens. In Pakistan, waterborne diseases and parasitic infections due to contaminated water account for nearly 60% and 80% of children's deaths, respectively. Every year approximately 250,000 children die due to water-borne diarrhoea alone, and 1.2 million people are affected by waterborne pathogens in Pakistan [2]. Furthermore, scientific data and evidence concerning the role of waterborne pathogens in the epidemiology of hospital-acquired infections are insufficient [3]. Recent reports on the pathological conditions caused by identified waterborne pathogens have provided novel insights into the understanding of the pathology and effects of these diseases [4][5][6]; such pathogens persist in numerous aquatic systems owing to their resistance to various environmental factors [7]. Health facilities, mainly health care centres, hospitals, clinics and laboratories, pose a higher risk of water contamination since these are more likely to be the sources of viral and bacterial pathogens [8][9][10][11]. Although numerous studies have addressed the detection and origin of pathogens in both drinking water and wastewater [12,13], few studies have explicitly traced the occurrence of pathogens in water sources near healthcare facilities [14,15].
Our current study was conducted to detect the presence of pathogenic microbes in both drinking and waste water samples collected from Khyber Teaching Hospital (KTH), Hayatabad Medical Complex (HMC) and Dabgari Garden (DBG/DG), the major healthcare facilities of Peshawar, the capital city of the province Khyber Pakhtunkhwa (KP), Pakistan. Every day these hospitals provide health facilities to thousands of local people, patients coming from far-flung areas of KP, and from Afghanistan as well. During personal visits to these hospitals for sample collection, improper sewage systems allowing stagnant water retention for several days and masses of untreated disposed materials were observed, which may increase the risk of contamination of drinking water. In addition, the disposal of waste materials from diagnostic laboratories and pharmaceutical centres poses a significant threat to public health. Inadequate information is available about sewage and drinking water quality near major health care units of Peshawar, KP, Pakistan, and no investigation for pathogens has been carried out that particularly considers the water sources of the healthcare centres. Additionally, no pre-defined rules and laws presented by the WHO are set and applied by healthcare management and higher authorities for such investigation and for providing good quality treated water. Therefore, the focus of the current study is the determination of viral and bacterial pathogens in drinking and sewage water of major health care units of Peshawar, to highlight the critical role contaminated water plays in waterborne diseases. Culture techniques and Polymerase Chain Reaction (PCR), as the most commonly used methods for monitoring and detection of bacterial and viral pathogens [16][17][18][19], are applied in this research.
Study site description and sampling Khyber Pakhtunkhwa (KP) province -- with a population of 26.9 million and an area of 74,521 square kilometres -- is located in the north-western region of Pakistan and, by the size of its population, is the 3rd biggest province of Pakistan. Peshawar (33° 99ʹ 16ʺ N, 71° 51ʹ 36ʺ E) is its provincial capital and largest city and a hub of hospitals where patients come not only from all around KP but also from the neighbouring country Afghanistan. It is crucial that hospitals providing health facilities to thousands of patients every day have access to pathogen-free drinking water. In this study, a total of 252 drinking and sewage water samples were examined over a period of one year, from January 2013 to December 2013; samples were collected 3 times in 2013, in January, June and October respectively. The first session of sampling was completed during the last two weeks of January, the second in the first two weeks of June, and the third during the last two weeks of October. In total, 126 drinking and 126 sewage water samples were collected from the three major hospitals of Peshawar, i.e., KTH, HMC and DG, at intervals of 4 months, except on rainy days. All samples were collected in sterile bottles from the premises of these health care sites (42 different sites in total; 6 samples were collected from each site).
Filtration and DNA/RNA isolation Water samples were taken to the laboratory in ice containers soon after collection and filtered through sterile 0.22 µm filter membranes (Science Laboratory, Islamabad, Pakistan) to concentrate the samples for the investigation of bacterial pathogens. Whole water samples were processed with a DNA/RNA isolation kit (Norgen Biotek, Canada) for the detection of viral pathogens.
Screening and selective media preparation for incubation Screening was done for the collected samples (nutrient media), and positive samples were further tested on selective media. Selective media (Merck, Rawalpindi, Pakistan) were prepared (according to the prescription provided by Merck, Rawalpindi, PK) for the culturing of bacterial colonies (selective media shown in Table 1). Sterilized media were poured into Petri plates, followed by spreading of the concentrated water samples with the help of a sterilized loop. Petri plates were incubated at 35°C for 24 h (also 48 h to obtain the correct density), and for 5 weeks for MTB, followed by sub-culturing of colonies on fresh selective media at 34°C and 36°C [14]. This step of colony sub-culturing was repeated 3 times for confirmation of the resultant colonies. Colonies which showed consistent growth were noted, and non-consistently growing bacterial pathogens were neglected to avoid false positive results.
Biochemical analysis Biochemical tests conducted for identification of the bacterial species grown previously on selective media were Catalase, Oxidase, Tube coagulase, Alkaline phosphatase, Motility, Arginine, Pyruvate, Mannitol, Sucrose and Ornithine, Esculin, fermentation of Sucrose, and Lysine decarboxylase (Table 1). A sterile loop was used to pick bacterial colonies from the selective media for the biochemical tests. The result was noted and the process was repeated three times. Only those bacterial pathogens were noted which gave the same result every time. The instructions provided by Merck Science Lab Rawalpindi were followed to obtain the results. For the Lactose test, a color change after broth culture was noted as positive. The Indole test was noted as positive by the appearance of a pink-red layer. Red color formation after the addition of alpha-naphthol + sodium hydroxide, while shaking the tube for 10 minutes, was an indication of a positive result for the Voges-Proskauer test. A color change from green to blue confirmed a positive result for the Citrate test. For the Nitrate test the color changed to dark red within 5-10 minutes; this test was carried out with the addition of N,N-dimethyl-1-naphthylamine and sulphuric acid. The Oxidase test gave a positive result by the appearance of a purple color after applying 1% tetramethyl-p-phenylenediamine dihydrochloride to filter paper. For the Catalase test, oxygen bubbles demonstrated a positive result. Black precipitates affirmed a positive test for H2S. Appearance of a reddish color during the Methyl Red test confirmed positive results for the presence of E. coli. For the urease test the yellow color turned to red, indicating a positive result. Visualized under the microscope, formation of a hazy zone (irregular movement) confirmed a positive result for motile bacteria, and formation of a single line of growth indicated the presence of non-motile bacteria. Regaining of the purple color from yellow after 48 hours' incubation confirmed a positive result for the Ornithine test. The Maltose test showed a positive result after conversion of the red color to yellow; here phenol red was used as the pH indicator. In the case of the Mannose test, the normal red color (phenol red indicator) turned to yellow or pink, an indication of a positive result. Similarly, the Inositol test was noted as positive by a color transformation from red (phenol red indicator) to yellow or pink.
For the Trehalose test the transformation of red color to yellow affirmed positive result. For sucrose test the color change from red to yellow was observed as an indication of positive result. For acetate test the clear zone formation was an indication of the acetic acid producing bacteria so it was considered positive for Acetobacter. For Triple Sugar Iron and Lysine Iron Agar tests the color change, butt and gas production was noted to the slants and compared the information available in the list provided by science lab Rawalpindi Pakistan. Gelatine hydrolysis test was performed and the starch hydrolization by making clear zones in surrounding was noted for positive results after addition of iodine. Lysine decarboxylase test was noted as positive by the color change to purple. A small amount of oil was added to prevent oxygen from moving out. In Coagulase test the clot formation indicated positive result. For Pyruvate test the change of blue green color to yellow was taken as confirmation of positive result. In Arginine test the purple color was changed to yellow, which is acquired as an indication of positive result. Tellurite test was confirmed as positive by the appearance of grey color on the growing colonies. In mannitol test the red color change to yellow confirmed the positive result. Furazolidone test was performed and the resistance or sensitivity was observed and compared to the list of information provided by science lab Rawalpindi Pakistan. RT-PCR and gel electrophoresis HCV RNA isolation from concentrated water samples were carried out using the Water RNA/DNA purification kit (NORGEN Biotek, Canada), according to manufacturer's instructions. The extracted RNA was reverse transcribed into cDNA. The amplified cDNA/DNA was subjected to PCR amplification. The PCR product was run on gel electrophoresis followed by observation of the obtained bands through gel documentation. Identification of bacterial pathogens Determination of pathogenic bacteria and viruses by conventional culturing and molecular techniques, respectively, is a reliable approach for assessment of water quality. Only those pathogens were included in final results which were found throughout the year in collected samples to make sure the presence of microbes regardless of physiological effect and environmental changes. Therefore, it is concluded that these pathogens made the studied sites their permanent habitat. Out of all the samples analysed, 42% pathogens were identified in drinking water and 58% pathogens in sewage water samples. Considering overall results (drinking and sewage water samples), KTH samples were highly contaminated (40%), followed by DG (31%) and HMC (29%) with little difference in results. Common bacterial pathogens traced in drinking water samples collected from all sites indicated that KTH water being highly contaminated had 10 different pathogenic species, HMC had 6, whereas, 4 different pathogenic species were detected in DG water. However, in case of sewage water, high species diversity was observed in DG samples that was contaminated with 14 different pathogenic species, as compared to 13 and 11 different pathogenic species investigated from HMC and KTH respectively. Besides common bacterial pathogens, some other important but seldom bacterial (Mycobacterium tuberculosis) pathogens were also identified in sewage water samples. Among the identified pathogens, Klebsiella pneumoniae and S. 
aureus were detected frequently, as compared to Proteus mirabilis, Psudomonas aeruginosa and Enterococcus faecalis, which were least common observed pathogens in all samples. Paradoxically, fresh water samples collected from DG had shown presence of Proteus vulgaris, and M. tuberculosis in sewage water that was present in almost 80% of all the samples collected from different locations of DG. The largest number of pathogenic bacterial species in fresh water systems was found in KTH samples, while the lowest number of pathogenic bacteria species in fresh water sources was found at HMC. However, in sewage water systems the largest numbers of bacterial species were observed at DG and the lowest numbers of bacterial pathogens were detected at HMC. A detailed list of bacterial pathogens identified in each sampling site is given in Table 2. Identification of viral pathogens Water samples collected from multiple sites of DG, KTH and HMC was further investigated for the presence of viral pathogens i.e., HCV and HBV. Sewage water samples collected from KTH and DG determined presence of HBV, whereas, HCV was only detected in the sewage water samples collected from KTH. However, no viral pathogens were detected in fresh water samples collected from any studied area (Table 2; Figure 1). Comparative analysis for pathogens identified in healthcare centres Based on type of species, comparatively more pathogens were detected in sewage water, that is, total 17 different types of pathogens were ascertained in sewage water and 11 in fresh water systems ( Figure 2). DG sewage water contains the most diverse species of pathogens while its fresh water sources contain the least pathogenic species, as shown in Figure 3. Most frequently observed pathogens in either fresh water or sewage water samples from all sample collecting sites were klebsiella and Staphylococcus epidermidis, whilst the least common pathogens were Proteus vulgaris, Providencia, Enterobacter faecalis and Mycobacterium tuberculosis (Figure 4). The overall result of both fresh and sewage water sources confirmed that KTH samples were comparatively more contaminated than DG/ DBG and HMC ( Figure 5). The least number of the bacterial species in DBG makes it safer than others and it might be due the privatized sector is taking better care to dispose the materials. Although there was no cleaning and burning systems but the lower bacterial burdens in water samples indicated the better treatment of wastes comparatively the other two sectors. KTH water indicated the most risk posing among all the investigated healthcare centers. It shows that lesser attention is provided to the treatment. Considering samples (both fresh and waste water) collected from each sample collecting site/health care units, the highest numbers of pathogens were observed in sewage water i.e., 11.77 a ± 0.57. Besides in sewage water the maximum numbers of pathogens were found in DG i.e., 13.33 ± 0.66 as compared to HMC i.e., 10.33 ± 0.66 where the lowest numbers of pathogens were detected. However, in case of fresh water, the maximum numbers of pathogens were found in KTH, whilst the minimum numbers of pathogens were identified in DG (Table 2). From overall result, it is evident that KTH water is highly contaminated and inadequate for consumption having highest number of pathogenic species i.e., 10.66 a ± 0.61, whilst the lowest number of pathogens identified in HMC i.e., 8.00 b ± 1.09 is also not safe to use. 
Discussion For all living organisms water is the most vital and important factor of survival. Inadequate access to clean water, inappropriate water treatment and bad sanitation systems is one of the most pervasive issues distressing people throughout the globe, causing waterborne infectious diseases, cause approximately 10 million deaths per year [3,20,21]. Human health is prone to microbial risks caused by enteric viruses and bacteria [22]. Studies have shown that contaminated drinking water has been source of several critical diseases, for instance, diarrhea, nausea, Cholera, typhoid, dysentery, abdominal pain and food poisoning. Situation is even worst at health care centres, where drinking water is source of pathogens transmission showing negligence of managerial authority towards supplying properly treated water. Variant pathogens are observed in ground and surface water, flood and dam water [23][24][25]. Furthermore, presence of bacterial pathogens is associated with physiochemical characters and location of drinking water sites [26]. To the best of our knowledge, the present study is the first systematic analysis on water sources of healthcare centres of Peshawar, KPK, Pakistan highlighting the presence of multiple substantial bacterial pathogens in hospital's drinking and sewage water [27]. List of variant infectious bacterial and viral pathogens identified in water samples that were present consistently throughout the year at KTH, DB and HMC are given in Table 1. The abundancy of these pathogens in water sources calls out for appropriate initiatives to be taken to curb outbreak of waterborne epidemics associated with contaminated water consumption [28]. In different sites the variation is characterized by physiochemical differences of the water sources [26]. Presence of Hepatitis B and C viruses in open water sources causes death of 60% of the affected people if persists for a longer time and proliferate continuously [29,30], are associated with serious public health issues [29,31]. Most frequently reported pathogenic species considering all water samples are E. coli, S. auerus, K. pneumonia, S. typhi and P. aeruoginosa [2,26,32], on the basis of current study it is suggested that consumption of such water is threat to public health. In our analysis the pathogens investigated can cause severe health problems in humans [9,33]. Most of the bacterial pathogens detected have been reported previously to be present in common water sources or home based drinking water sources [23,26] but their presence in the water sources of healthcare centers was not considered to be investigated. Furthermore, DG fresh water sources were contaminated with one third of pathogens number to that of KTH. We suggested that this high number of pathogens might be because of the improper water supply sources where sewage water can get entered into drinking water sources because of leakages in pipelines. Interestingly P. Vulgaris was only found in fresh water of DG but it was not detected in sewage water sources or other fresh water sources. Analysis of sewage water allowed us to detect diverse numbers of pathogens, the highest number in DG. The presence of K. pneumonia and S. auerus in all sites regardless of the water type is an indication that these are the permanent species as these were also reported in daily used water sources in surrounding regions [26]. P. mirabilis, P. aeruginosa and E. 
faecalis were unexpectedly found in the least sites as these are generally found in water sources of diverse locations [23,27]. In addition P. stuartii was interestingly found in almost all water sources, which was considered to be present at most in sewage water sources only. A. sobria presence was detected in fresh water of KTH too which is an unexpected result. Other bacterial pathogens (except M. tuberculosis) were found in diverse sites as they are generally considered to be found in water sources [26,28]. Surprisingly the viral species were detected in KTH and DG sewage water which is a threat to treatment seekers and patients care takers. We investigated that some highly pathogenic bacteria including M. tuberculosis were present persistently throughout the study period. Furthermore, the viral pathogens were also detection continuously throughout the year, which indicates that no proper treatment is carried to the water sources. To our knowledge, in hospitals, fluids from diagnostic tests and laboratories are improperly disposed of allowing pathogenic bacteria and viruses to contaminate water that runs off to the tap water and sewage systems, subsequently contaminating drinking water. The investigated pathogens, however, may be present in fresh water due to the lack of management interest in providing properly treated water wiped off from the pathogenic bacterial species; and treatment of hospital wastes accurately before disposing it. At the same time the laboratories owners are not admonished to throw the wastes in open places. Other sources of bacterial contamination of fresh water are surface runoff through hospitals and urban areas, pastures and agricultural lands, leakage of sewage disposal systems and septic tanks, overloaded sewage treatment plants, disposal systems and raw sewage deep well injection [2,9,34]. Similarly, we propose that contamination of drinking water observed during the present study involves factors like cross-connections, broken or leaking pipes, back-siphonage (backflow of polluted or contaminated water, from a plumbing fixture or cross-connection into a water supply line, due to a lowering of the pressure in the line) and intermittent water supply [9,34,35], and these pathogens have made the studied sites as their permanent habitats. Our approach offers an unbiased identification of those bacterial and viral pathogens which can lead to serious human health problems. The overall investigated pathogens in hospital's water samples are similar to the investigations of other water sources either drinking water, dam water, flood water or sewage water in KPK [26,27], somehow, our results differ in a way that through our investigation few uncommon bacterial pathogens like M. tuberculosis, and exceptional viral species are also identified. The reason might be the investigation site (hospitals) and consideration of only those pathogens in the results which were found in all sample collection sites and present throughout the year. Conclusions We came up with a conclusion that if current condition continued water borne illnesses will pose serious threat to public health. Addressing existence of disease causing pathogens in water sources for instance, E. coli, S. aureus, P. stuartii, K. pneumonia, H. influenzae, and P. sobira, calls out for a tremendous amount of research to be conducted to identify robust new water purifying techniques at lower The highest bacterial burdens were found KTH which is one of the biggest hospitals of the city. 
It had 40% of the total bacterial species burden whereas HMC had 31% and was ranked second. The safer among all was DBG where the total bacterial burden was 29%. cost, with minimal use of chemicals. These pathogens can enter into water pipelines through back-siphon age, cross-connections, broken or leaking rusted pipelines, thus intermittent water supply results in contamination of the distribution system. Hospital's waste and patient's fluid should be disposed of properly. It is encouraged to drink boiled water and have drinking utensils autoclaved, since most bacterial and viral pathogens cannot survive in boiled water.
George William Scott Blair -- the pioneer of fractional calculus in rheology
The article shows the pioneering role of the British scientist, Professor G.W. Scott Blair, in the creation of the application of fractional modelling in rheology. A discussion of his results is presented. His approach is highly recognized by the rheological society and has been adopted and generalized by his successors. The further development of this branch of science is also briefly described in this article.
Foreword to the second extended version. After the appearance of the first version of the paper in arXiv, it attracted interest from both the mathematical and the rheological communities. In March 2014 one of the authors (SR) gave a talk at a research seminar at Aberystwyth University. The content of this seminar was later described by Prof. Simon Cox in a note published in the Rheological Bulletin [12]. For readers' convenience we present this note in the Appendix to this extended version of the arXiv paper. We also add some additional references justifying the role of Scott Blair's results ([23], [24], [30]). In this respect it is also interesting to mention the "Bingham Lecture" [32]. We further refer readers to our recent book [21] devoted to the description of the properties of the Mittag-Leffler function, the Queen Function of Fractional Calculus.
The interest in applications of fractional calculus in the modelling of different phenomena in Physics, Chemistry and Biology has been rapidly increasing over the recent three decades. First of all we have to point out the constitutive modelling of non-Newtonian fluids. The main reason is that fractional models make it possible to describe the complex behaviour of a viscoelastic material in a simple way. In the pioneering (mainly experimental) works of the 1940s-1950s it was discovered, for instance, that the relaxation processes in some materials exhibit an algebraic decay, which cannot be described in the framework of the Maxwell model based on exponential behaviour of the relaxation moduli. In order to see the perspective in the development of fractional models it is important to understand how such models appeared, and what was really done by the pioneers. Among the works which form the core of the first period of fractional modelling one can single out the series of articles and monographs by G.W. Scott Blair. His role is well recognized by the rheological society (see, e.g., [5], [13]), but, even so, some details of his work are still of great importance. We propose here an analysis of the results of G.W. Scott Blair along with their influence on the modern development of fractional modelling in rheology.
Short biography of G.W. Scott Blair Dr. George William Scott Blair (1902-1987) was born on 23 July 1902, of Scottish ancestry, in Weybridge in Surrey, England. After attending the famous public school at Charterhouse he went to Trinity College, Oxford, in 1920, where he studied Chemistry, with Prof. Sir Cyril Hinshelwood as his tutor. He carried out a one-year research project in colloid chemistry to complete his master's thesis with an honours degree. After graduating, Scott Blair was employed as a colloid chemist with the Manchester firm of Henry Simon, working there on the viscometry of flour suspensions and publishing his first rheology paper in 1927. In 1926 he was offered a post in the Physics Department of the Rothamsted Experimental Station, where he worked on the flow properties of soils and clays until 1937.
It was there, where he made with his colleagues the first quantitative study of so called sigma-phenomenon, which was originally described by Bingham and Green in 1919. Schofield and Scott Blair (see [49]) studied this phenomenon from 1930 at Rothamsted for soil and clay pastes and named it "sigma effect". These studies were probably unknown to Fȧhraeus and Lidquist, who first discovered the sigma effect for blood, referred to as the "Fȧhraeus-Lindquist phenomenon". At this period some preliminary experiments were provided by Scott Blair which led him later to the necessity to consider anomalous relationship between stress, strain and time (see, e.g., [50] After returning to Rothamsted, Scott Blair made rheological research on honey and flour doughs. He also studied together with the well-known psychologist David Katz, psychophysical problems in bread making. His interest in psychology led him, together with F.M.V. Coppen, to initiate a new field, for which he coined the word "psychorheology". It is considered as one of the fields of biorheology. In 1936 he submitted his PhD thesis to the University of London, and it was examined by Prof. Freundlich. The same institution later awarded him a D.Sc. for his labours in rheology (probably the first ever rheology D.Sc.) In 1937 he joined National Institute for Research in Dairying, University of Reading as a head of Chemistry but soon took over the newly formed Physics Department and remained in that position until his retirement thirty years later. In 1940 the British Rheological Society was founded. Scott Blair played a prominent role and took active part in the development of rheology. He was a Founder-Member and, later, president of the British Society of Rheology (1949)(1950)(1951) [7]. For many years Scott Blair was the Chairman of the British Standard Institute Committee on Rheological Nomenclature. During almost a half of a century, George W. Scott Blair was one of the leading rheologists. Beginning from 1957 Scott Blair devoted his experimental and theoretical work entirely to hemorheology. Since the foundation of the International Society of Hemorheology in Reykjavik, Iceland in 1966, he was a member of its Council and acted as Chairman of its Committee on Standards and Terminology. After he retired he worked on the flow and coagulation of blood at the Oxford Haemophilia Centre. Scott Blair was very active in publication and editorial work. He was a co-founder of the Journal "Biorheology" and its Co-Editor-in-Chief from its inception in November 1959 to December 1978 (see [11]). The books and research papers of Scott Blair were donated to the British Society of Rheology and later deposited in the Library of Aberystwyth University in early 1980's. The collection has over 550 books and its aim is to develop this "into an up-to-date library of rheological literature available to all members of Society". Rheology Abstracts and the British Society of Rheology Bulletin are two journals published by/for the Society which form an important of the Collection. The books and journals catalogued online (access via http://primo.aber.ac.uk). Rheology and Psychophysics It was Professor Bingham who had chosen the name "Rheology" for this branch of the Science and gave the definition of it: "The Science of Deformation and Flow of Matter" (see [53]) motivated by Heraclitus' quote "παντ α ρει" ("everything flows"). 
Rheology is one of the very few disciplines having exact day of its birth, April 29, 1929, when the preliminary scope of the Society of Rheology was set up by a committee met at Columbus, Ohio. Anyway, the ancient Egyptian scientist Amenemhet (ca 1600 BC), who made the earliest application of the viscosity effect, can be considered as the first rheologist (see, e.g., [52]). The observables in rheology are deformations or strains, and the changes of strains in time. Changes of strains in time constitute a flow. Thus, these changes are generally associated with internal flow of certain kind. States of stress are inferred either from the comparative strain behaviour of complex and simple systems in interaction or from the behaviour of a known mass in the gravitational field. In physical testing, stresses (S), strains (σ) or their differentials with respect to time Ṡ ,σ are normally held constant, leaving either a length to be measured, or the time (t). There is a group of fluids which is characterized by a coefficient of viscosity for a specific temperature. These fluids, known as Newtonian fluids, were singled out by Newton who proposed the definition of resistance (or viscosity in modern language) of an ideal fluid. Pioneering work on the laws of motion for real (i.e. non-ideal) fluids with finite viscosities was carried out by Navier [36] and later by Stokes [66]. The Navier-Stokes equation enabled, among other things, prediction of velocity distributions and flow between rotating cylinders and cylindrical tubes (see [13]). Nowadays rheology generally accounts for the behaviour of non-Newtonian fluids, by characterizing the minimum number of functions that are needed to relate stresses with rate of change of strains or strain rates. This kind of fluids is called Newtonian since Newton's introduction the concept of viscosity. In practice, rheology is concerned with extending continuum mechanics to characterize flow of materials, that exhibits a combination of elastic, viscous and plastic behaviour by properly combining elasticity and (Newtonian) fluid mechanics. In [13] the main directions in the development of the rheology are described. First of all this is linear viscoelasticity. One of the most important early contribution in this area is the work by Maxwell [31]. In order to explain the behaviour of the materials which are neither truly elastic nor viscous he proposed a constant relaxation time (t r ) and justified implicitly the model of a dash-pot and spring in series. Anyway, he realized that for some materials the assumption of constancy of the relaxation time is oversimplification, in these cases t r has to be a function of stress. Meanwhile, the notion of Maxwell's units (i.e. pieces of a material having constant relaxation time) has been widely explored by rheologists. Later the conception of the "orientation times" τ has been developed (see, e.g., [2]). It is considered unit stress conditions, supposing that the strain is approaching to an equilibrium value. Thus, the immediate Hookean strain is first subtracted and τ is defined as the time taken for the remaining strain, resulting from the orientation of the chains. Another direction which was singled out in [13] is the study of generalized Newtonian materials. This type of fluid behaviour is associated with the work by Bingham [6] who proposed so called yield stress to describe the flow of paints. 
In [60], it has been pointed out the close similarity between the usual experimental Bingham curve and the curve of a high power-law. Thus, it shows possibility of existence of systems for which the Bingham plot gives a fairer and simpler account of the data. The study of non-linear viscoelasticity started at the begining of XXs century, when the area of rheology was most rapidly grown (see, e.g. [13]). Thus, Poynting [42] in his experiment discovered that loaded wires increased by a length that was proportional to the square of the twist, what did not correspond to the usual expectation of the linear viscoelasticity theory. Probably the first theoretical work on non-linear viscoelasticity was done by Zaremba [73], who extended linear theory to the non-linear regime by introducing corotational derivative in order to incorporate a frame of reference that was translating and rotating with the material. More extended description of the results in non-linear viscoelasticity can be found in [13] (see also [28], [69] and references therein). Not all properties of flowing matter can be interpreted in term of real rheological sense. In this case psychophysical approach with its psychophysical experiments can be helpful. Psychophysics is defined as the scientific study of the relation between stimulus and sensation (see, e.g. [20]). Psychophysicists usually employ experimental stimuli that can be objectively measured. Psychophysical experiments have traditionally used three methods for testing subjects' perception in stimulus detection and difference detection experiments: the method of limits, the method of constant stimuli and the method of adjustment. G.W. Scott Blair widely used psychophysical experiments in his research (see [54]). Therefore, it is interesting to recall how he described the role of psychophysics in rheology (see [53]): "The complex and commercially important rheological "properties" of many industrial materials are still assessed subjectively by handling in factory and are expressed in terms "body", "firmness", "spring", "deadness", shortness", "nerve", etc. -concepts which cannot be interpreted ... in terms of simple rheological properties at all. In view of this fact ... it is clearly advisable to know something of the accuracy with which these handling judgements can be made and, by bulking sufficiently large numbers of data together so that reproducibility is ensured, to attempt to correlate the entities so derived with manageable functions of S : σ : t. A start has been made in this direction and not only have a number of reproducible regularities been observed, but the information obtained has laid the foundations of a theory of "Quasi-properties" which it is hoped will facilitate the study of purely "physical" rheology of complex materials." This observation is a core of Scott Blair's method which he used along his career. Nutting's Law In 1921 Nutting reported ( [37]) about his observation that mechanical strains appeared at the deformation of the viscoelastic materials decreasing as powertype functions in time. From a series of experiments, which covered a range of materials from the elastic solid to the viscous fluid, Nutting suggested a general formula relating shear stress, shear strain and time, whenever shear stress remains constant: with constant order α ∈ (0, 1) which is close to 1/2 for many materials. This conclusion was in a strong contradiction to the standard exponential law. 
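The formula referred to in the paragraph above is missing from the extracted text. In the notation used later in this article (stress S, strain σ), the Nutting law is commonly quoted in a form equivalent to the following; this rendering is a reconstruction consistent with the relation t = ψ^{1/α} S^{-β/α} σ^{1/α} quoted in the next subsection, not a verbatim copy of Nutting's equation (1):

\[
\psi = \frac{S^{\beta}\, t^{\alpha}}{\sigma}, \qquad \text{equivalently} \qquad \sigma = \psi^{-1} S^{\beta} t^{\alpha}, \qquad 0 < \alpha < 1 .
\]

For β = 1 and a fixed stress this reduces to the statement in the text that the strain grows like a fractional power t^α of time.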
Later the Nutting's observation was justified by Gemant who studied the properties of viscoelastic materials under harmonic load. It was shown that the memory function η(t) can have power-type relaxation behaviour proportional to t −3/2 . In 1950 Gemant published a series of 16 articles entitled "Frictional Phenomena" in Journal of Applied Physics since 1941 to 1943, which were collected in a book of the same title [18]. In his eighth chapterpaper [17, p. 220], he referred to his previous articles [15], [16] for justifying the necessity of fractional differential operators to compute the shape of relaxation curves for some elasto-viscous fluids. Gemant has used half-differential, but is his later papers he says that fractional differential "only occurs as a useful mathematical symbol, whereas the underlying elementary process, whatever it may be, will probably contain differential quotient of an integral order". Scott Blair surely knew the attempts by Gemant (see, e.g. [63]) to generalize Maxwell's theory by changing various (but integer) powers in complex modulus of the Maxwell Fluid Model to fractional powers. In fact, Scott Blair (together with Coppen) also came to the form of Nutting equation, but from another consideration. They argued that the material properties are determined by various states between an elastic solid and a viscous fluid, rather than a combination of an elastic and a viscous element as proposed by Maxwell. In [57] it was pointed out that, since for Hookian solids, strain is proportional to stress and to unit power of time, for intermediate materials, it might be expected to be proportional to stress and to some fractional power of time with exponent α, 0 < α < 1 and described this relation in the form where proportionality coefficient ψ is a constant. Derived in this way the equation (2) looks entirely empirical, tough the fundamental significance of α (which is called the dissipation coefficient) is shown in psychophysical experiments described by Scott Blair and Coppen (see [58], [59], [61]). A comparison of the partially differentiated Nutting equation and Maxwell's equation may be written (see [51]), namely, for Nutting: 2 and for Maxwell: Since Nutting equation gives t = ψ 1/α S −β/α σ 1/α , it is apparent that the Nutting treatment postulates a single relaxation time proportional to a power of the stress. This is simplest possible way of implementing Maxwell's suggestion that relaxation time t r may be some function of stress. From the other side it justifies the believe that the use of fractional calculus in description of processes toward equilibrium is necessary if one has to keep the Newtonian time scale. Scott Blair's fractional model It was suggested in [61] that, in considered cases, comparative firmness is judged neither by σ, nor byσ, nor by any mixture of these two magnitude, but by some intermediate entity, namely by fractional derivative ∂ ν σ ∂t ν . 3 More exactly, he wrote: The general constitutive equation "... is applicable to integral values of n but a more general equation may be used even n is a fraction.The numerical coefficient is expressed as a quotient of Γ-functions and may be written The expression Γ(k + 1) is given by and, whatever the value of n, the Γ factor is a number independent of Ψ m and t c , so that the validity of the plot is unchallenged." In his work Scott Blair did not specify what kind of fractional derivative he used. 
From the way how he has calculated derivative of any power we can conclude that this is the standard Riemann-Liouville derivative. It is quite instructive to cite some words by Scott-Blair quoted by Stiassnie in their correspondence, see [65]: I was working on the assessing of firmness of various materials (e.g. cheese and clay by experts handling them) these systems are of course both elastic and viscous but I felt sure that judgements were made not on an addition of elastic and viscous parts but on something in between the two so I introduced fractional differentials of strain with respect to time. Later, in the same letter Scott-Blair added: I gave up the work eventually, mainly because I could not find a definition of a fractional differential that would satisfy the mathematicians. The above said Principle of Intermediacy was discussed in details by Scott Blair in [51] basing on purely physical grounds. The theory of fractional modelling in rheology is developed by Scott Blair, Veinoglou and Caffyn in [63]. In [53, p. 30] it is briefly summarized: "... times are normally defined as equal when "free" Newtonian bodies (or alternatively light) traverse equal (superposable) distances in them. This leads to a a definition of velocity as the first differential of length with respect to time which, because of this definition of time equality, is constant for Newtonian bodies; and to the second differential, called acceleration. When bodies are not influenced by other bodies, and their velocities change with time, a force is postulated and defined as rate of change of (velocity × mass). It is long been realized that the Newtonian time scale arbitrary (see [41, p. 80]) and in the case of a complex plastic being strained, the rheologically active units are certainly not independent Newtonian bodies. It should, therefore, be easy to choose a non-Newtonian time equality definition which would reduce the entities by which firmness is judged to simple whole-number 4 differential expression. The use of separate time scales for different materials is not convenient, however, so Newtonian time is used, but, as a result of this arbitrary procedure, the derived constants cannot be expected to be built up entirely from whole-number differentials. It is thus apparent that fractional differential is an essential feature of our whole mode of approach." In [56] are discussed the circumstances under which it is practicable to express the Nutting equation and its fractional derivatives in a simple dimensional form. Three main principles of a new proposal are formulated: (1) the fact that the treatment does not lead to any understanding of structure of the materials or of their molecular configurations; (2) the only entities are used whose dimensions depend of the nature of of the material; (3) fractional derivatives and corresponding coefficients are understood as something intermediate between zero and first derivatives and corresponding coefficients. Scott Blair highly supported (see, it e.g. [60]) the ideas by Nutting supposing that for that moment it describe a special but very frequently adequate cases. Anyway, he though that the phenomenon dealing with Nutting equation are related to the fundamental structure of materials. 
Fractional derivative of order µ, 0 < µ < 1, with respect to time t of the Nutting equation (in the form (2) gives (see [61], [63]) Another way to justify this relation is to introduce quasi-property χ 1 by the Principle of Intermediacy since the viscosity can be defined by the relation η = S ÷ dσ dt , and shear modulus as n = S ÷ σ. The relation (7) can be integrated to give Fractional models in rheology After the first applications of the fractional derivatives in the modelling of the processes in rheology several other fractional models were proposed to describe certain rheological phenomenon. We briefly outline here the most discussed models of such a type. 5 Gerasimov [19] used similar arguments as Scott Blair (in, e.g. [61], [63]), namely, interpolation between Hook and Newton's law, in order to introduce a rheological constitutive equation in terms of a precise notation of fractional derivative σ(t) = κ α D α 0+,t ε(t). This equation was used in [19] for description of the flow of the viscoelastic between two parallel plates. He obtained an exact solution by using operational method. He has started his consideration by appealing to the Boltzmann equation saying that from experiments follow the importance of a special case of the Boltzmann equation corresponding only to the hereditary part of the stress σ(t) or even the processes for which σ(t) has a memory on the velocity of all earlier deformations 5 Here and in what follows we will use modern notations for stress (σ) and strain (ε) that are not be confused with the corresponding notations used by Scott Blair, namely (S) and (σ). Furthermore we will write D α 0+ to denote the Riemann-Liouville fractional derivative implicitly adopted by Scott Blair. For the kernel in this integro-differential relation he claim that for certain materials this kernel (relaxation function) has the a form Hence, equation (11) can be written as 6 In particular, α = 1 gives us the Newton law, and α = 1 corresponds to Hookean law. Same approach is used in the above article for the study of the rotational viscoelastic flow between two concentric cylinders. Similar to (9) formulation of the fractional model was proposed by Slonimsky [64]). Rabotnov (see [43] and more extended description in his monograph [44]) presented a general theory of hereditary solid mechanics using integral equations (see also [25], where the use of integral equations for viscoelasticity was revisited and interjects fractional calculus into Rabotnov's theory by the introduction of the spring-pot was presented). Rabotnov introduced an hereditary elastic rheological model with constitutive equation in form of Volterra integral equation with weakly singular kernel of special type 7 where t α is the aging time, α ∈ (−1, 0], β = 0, and the kernel R is represented in the form of power series β n x n(α+1) Γ((n + 1)(α + 1)) . Rabotnov's kernel function R α (β, x) is related to the well-known Mittag-Leffler function E α,β (z) highly explored nowadays in the fractional calculus and its applications, namely Of course Scott-Blair did not know the Mittag-Leffler function and its asymptotic behaviours (stretched exponential for short times and power law for long times). Presumably that Scott-Blair had guessed the behaviour of the M-L function but he did not have the mathematical background being overall an experimentalist. 
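Since equations (6), (9) and the fractional Maxwell law referred to below are not reproduced in the extracted text, the following summary may help. The spring-pot law follows the notation used above, while the fractional Maxwell equation is given here in one commonly used single-order form and is a paraphrase rather than the article's own equation:

\[
\sigma(t) = \kappa_{\alpha}\, D^{\alpha}_{0+}\,\varepsilon(t), \qquad 0 < \alpha < 1,
\]

(the Scott Blair/Gerasimov "spring-pot", interpolating between Hooke's law at α = 0 and Newton's law at α = 1), and

\[
\sigma(t) + \lambda^{\alpha}\, D^{\alpha}_{0+}\,\sigma(t) = E\,\lambda^{\alpha}\, D^{\alpha}_{0+}\,\varepsilon(t),
\]

which for α = 1 reduces to the classical Maxwell equation σ + λ dσ/dt = η dε/dt with viscosity η = Eλ.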
Both Scott Blair's model (6) and Gerasimov's model (9) are naturally considered later as special cases of fractional Maxwell's model with rheological constitutive equation of the form where E is the shear modulus, and λ is the relaxation time. This equation generalizes celebrating Maxwell equation in which for the first time Newtonian law for viscous fluid and Hook's law for elastic solid are combined to describe the behaviour of visco-elastic media Partial case of fractional Maxwell's model is the so-called three-parametric generalized Maxwell's model with constitutive equation of the type Another popular fractional model with three parameters is the Kelvin-Voigt fractional model that presumably for the first time was introduced by Caputo [8] in 1967, It is a generalization of the classical Kelvin model having the following constitutive equation More general constitutive equation corresponds to the so called fractional Zener model: formerly introduced in 1971 by Caputo and Mainardi [9]. Theoretical background for this was done by Bagley and Torvik, see [3], [4]. It has to be pointed out that the above considered Rabotnov's model (13) is equivalent to the fractional Zener model, see [68]. Sometimes the Poynting-Thomson fractional model is discussed with rheological constitutive equation of the type More extended discussion of the fractional models in rheology can be found in [10], [28], [40], [48], [72]. This approach has been successfully applied to describe rheological behaviour of organic glasses, elastomers, polyurethane, polyisobutylene, monodisperse polybutadiene and solid amorphous polymers in a wide temperature range (see for example [1] and references therein). In [71] and [67] have been derived equations governing the time-dependent indentation response for axisymmetric indenters into a fractional viscoelastic half-space and have proposed an original method for the inverse analysis of fractional viscoelastic properties and applied to experimental indentation creep data of polystyrene. The method is based on fitting the time-dependent indentation data (in the Laplace domain) to the fractional viscoelastic model response. It is shown that the particular time-dependent response of polystyrene is best captured by a bulk-and-deviator fractional viscoelastic model of the Zener type. We shall dwell in details on fractional differential models of viscoelasticity and then consider a few standard hydrodynamic problems in the simplest model of this type. It is impossible to describe the modern state in the fractional rheology. We refer interested readers to the recent monographs [28], [69], and to the survey paper [70] for some additional comments on pioneering works in applications of fractional calculus. Scott Blair's work on fractional calculus, and his talk was based on a recent paper with Francesco Mainardi (Bologna) [45]. As readers may know, not only did Scott Blair help found the BSR, and serve as one of its first presidents, he also contributed to the founding of the (American) Society of Rheology twenty years earlier. His interest in fractional calculus was motivated by trying to explain his experimental results on food rheology, trying to quantify effects such as "firmness" and "taste" and the influence of material memory on rheological response [63]. Gareth McKinley spoke about the same topic in Chicheley Hall last year at the INNFM meeting [32] (see also [24]). 
Professor Rogosin explained how Scott Blair picked up on Nutting's theoretical work, and Gemant's experiments some time later, indicating that fractional exponents in constitutive relations offered a way to interpolate between Newtonian liquids and Hookean solids, and consequently introduced the idea of quasiproperties. The speaker made the link with Volterra integral equations, which had been developed earlier in the 20th century, providing a robust mathematical framework in which to explore these relationships (although Scott Blair himself probably did not recognise this). At the same time as Scott Blair was working in this area, both Gerasimov and Rabotnov were tackling this problem of interpolation in similar ways and publishing in the Russian literature (although Scott Blair spoke Russian, the 1940s were of course a difficult time to exchange scientific information!). In particular, Rabotnov recognised that using fractional exponents offered the possibility of predicting power-law decay in time, rather than exponential decay, allowing much better fitting of the rheological behaviour of complex fluids. He later acknowledged Scott Blair's contributions in a textbook published in the 1980s; indeed, Scott Blair's contributions were widely valued both by experts in fractional modelling [27] and by rheologists [13]. The seminar was followed by a discussion of nonlinear elasticity and integral equations, upon which I shall not report here. The Scott Blair Collection of books on rheology continues to grow thanks to a financial bequest to the BSR. Donations are also welcome. Further details are available at https://www.aber.ac.uk/en/is/collections/scottblair/ Simon Cox, Aberystwyth, March 2014
Calculation of force and energy parameters of fracture in the area of the front of the crack in the shell of the reactor of coke chambers This paper presents a three-dimensional elastic calculation of the stress-strain state of the reactor and the force and energy parameters fracture for an elliptical crack located in a cylindrical shell. It is found that for an elliptic crack, the force and energy parameters of the fracture change nonlinearly along the crack front. The qualitative and quantitative relations for the stress intensity factors (SIF and J-integral along the crack front taking into account its size, the shape of the front and the angle of inclination) are obtained. Reactor (coking chambers) is designed for accumulation of the mixed raw material supplied to the column through the furnace and further coking process and accumulation of the resulting coke in the delayed coking unit of a refinery. Reactor ( Figure 1) is an all-welded hollow cylindrical vessel with an internal diameter of 5500 mm, height 27225 mm, capacity 540 m 3 , with the upper hemispherical and lower conical bottoms with necks for entering the hydraulic cutting tool and unloading coke. The reactor is installed on 6 support racks located in the zone of the support belt. The heated raw material of coking enters the reactor through the fitting raw material inputs located in the lower part of the conical bottom. At the top and bottom neck of the reactor there are fitting for the output vapors of the coking and the output vapors of cooling the coke with water and a fitting for filing antispamimage additives and fitting for the output vapors during the heating of the chamber and for heating the at start-up. Defect such as crack in the housing shell are observed due to the cyclic nature of the operation the reactor, which a effects the stress-strain state and in general the reliability of the entire reactor. In this paper, the impact of the crack location in the shell area of the device on the force and energy parameters of fracture was assesses. To assess the danger of such cracks, a three-dimensional numerical experiment was performed, which consists of modeling and subsequent calculation by the finite element method. Stress intensity factor (SIF) K1, K2, K3 and energy integral J were calculated taking into account the shape, location on the outer surface of the shell, as well as taking into account the actual size and angle of the crack. The stress intensity factors K1, K2, K3 were calculated by the known method according to the energy release rate using the formula. The energy release rate was calculated by virtual crack propagation [3]. The calculation of the fracture force parameters was performed on a full 3D model of the reactor, the crack was set in the zone of the welded joint of the shell and the conical bottom. The loading was given as an internal pressure of 0.62 MPa. The weight of the device and the hydrostatic pressure of the liquid inside the device and the temperature of the environment were also taken into account. Formation of calculation models. The reactor was made of steel 12HN10T (С-0,12%, Cr<1%,Ni -10%,Ti<1%) besides; the temperature of the medium 475 0 C was taken into account in the calculations. The mechanical characteristics of the material used in the calculations given in table 1. The following dimensions were taken into account: the thickness of the reactor wall (T) was 21 mm, the diameter of the shell was 5500 mm, and the diameter of the nozzle was 300 mm. 
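The formula relating the stress intensity factors to the energy release rate, referred to above, is not reproduced in the text. For a linear elastic body the standard relation, which is presumably what is meant here, connects the J-integral, the energy release rate G obtained from virtual crack propagation, and K1, K2, K3:

\[
J = G = \frac{K_1^{2} + K_2^{2}}{E'} + \frac{K_3^{2}}{2\mu},
\qquad
E' =
\begin{cases}
E, & \text{plane stress},\\
E/(1-\nu^{2}), & \text{plane strain},
\end{cases}
\]

where E is Young's modulus, ν is Poisson's ratio and μ is the shear modulus; each K_i is then recovered from the energy released in the corresponding virtual crack-extension mode.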
The form of the crack in the area of the support wall is shown in Figure 2. The allowable stress for this material is 145.5 MPa at a temperature of 160 °C and 114.0 MPa at 475 °C. Mechanical properties in the temperature range from 20 °C to 500 °C were determined by linear approximation. All calculations were performed in a static formulation in the ANSYS/Workbench software package [2]. One of the variants of the finite element mesh for the full-size reactor model is shown in Figure 3(a, c). The crack in the welded joint of the reactor shell, which has the shape of an elliptical arc with semi-minor axis a, penetrated to no more than 0.4 of the sheet wall thickness. Design parameters of the finite element model. At the first stage, the elastic calculation of the stress-strain state of the reactor by the finite element method was performed. The solid-state reactor model, boundary conditions, and finite element mesh for this calculation are shown in Figure 3. In all parts of the FE model of the reactor, four elements through the wall thickness were used. It was found that the maximum stresses are observed in the zone of the welds joining the shell and the branch pipes, as well as in the zone of the welds of the reactor support legs. At visual inspection of the welded joint of the shell, after acoustic emission control, surface defects (cracks) were found. The FE mesh, which was further used for FEM calculations in the fracture weld zone, is shown in Figure 3(b) and Figure 6. Theory of the calculation method. The elastic stress intensity factors of the first, second and third kind, K1, K2 and K3, as noted above, were calculated from the energy release rate determined by the method of virtual crack propagation [2, 3]. Figure 4 shows the coordinates of the crack and the integration loop used for the calculations. The stresses at the crack tip are described by the standard asymptotic relations σij = (1/√(2πr))·[K1·fij^I(θ) + K2·fij^II(θ) + K3·fij^III(θ)], where r and θ are polar coordinates measured from the crack tip and fij are known angular functions; for example, σ11 = (K1/√(2πr))·cos(θ/2)·[1 − sin(θ/2)·sin(3θ/2)] − (K2/√(2πr))·sin(θ/2)·[2 + cos(θ/2)·cos(3θ/2)], σ33 = ν(σ11 + σ22) in plane strain, and σ13 = −(K3/√(2πr))·sin(θ/2). The force parameters of fracture were determined from the energy release rate G using the relation G = (K1² + K2²)/E′ + K3²/(2μ), where E′ = E/(1 − ν²) and μ is the shear modulus. The boundary conditions are shown in Figure 3(a, b). In the support racks of the device, vertical displacement constraints were set; in the zone of the mobile support rack, contact conditions were set, taking into account slippage of the support surface of the rack. Loads in the form of axial forces and moments from the attached process pipelines were applied to the fittings. The temperature of the environment inside the reactor was set to 475 °C. The model of a crack. The area with an elliptical crack was described by isoparametric elements of the second order and set separately from the main model using the CRACK tools in the Static Structural module [2]. As a result, a hybrid FE model of the reactor with an elliptical crack was formed. The crack region, with a radial-ring structure of finite elements, is shown in Figure 3 and Figure 6 at different magnifications. Six elements were placed along the front of the crack, and eight elements were used in the radial direction around the crack tip to capture the singular stress field near the crack tip [3]. The crack was modeled in the form of an ellipse with parameters C = 2.0-28.0 mm, a = 0.4-6.0 mm, located in the zone of the radius transition from the branch pipe to the shell of the reactor. The orientation of the crack was chosen so that the X-axis lay along the short radius of the ellipse and the Z-axis along the long side of the crack front.
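For orientation, a minimal Python sketch of the standard asymptotic crack-tip stress relations given above is included below; it simply evaluates the mode I, II and III angular functions at a point (r, θ) near the tip. The numerical values in the example call are illustrative and are not results from the paper.

import math

def crack_tip_stresses(k1, k2, k3, r, theta, nu=0.3):
    # Standard singular crack-tip field: K in MPa*mm**0.5, r in mm -> stresses in MPa.
    amp = 1.0 / math.sqrt(2.0 * math.pi * r)
    ct, st = math.cos(theta / 2.0), math.sin(theta / 2.0)
    c3, s3 = math.cos(3.0 * theta / 2.0), math.sin(3.0 * theta / 2.0)
    s11 = amp * (k1 * ct * (1.0 - st * s3) - k2 * st * (2.0 + ct * c3))
    s22 = amp * (k1 * ct * (1.0 + st * s3) + k2 * st * ct * c3)
    s12 = amp * (k1 * ct * st * c3 + k2 * ct * (1.0 - st * s3))
    s33 = nu * (s11 + s22)   # plane strain assumption along the crack front
    s13 = -amp * k3 * st     # mode III out-of-plane shear components
    s23 = amp * k3 * ct
    return {"s11": s11, "s22": s22, "s12": s12, "s33": s33, "s13": s13, "s23": s23}

# Example: stresses 0.5 mm ahead of the tip on the crack plane (theta = 0).
print(crack_tip_stresses(k1=400.0, k2=0.0, k3=0.0, r=0.5, theta=0.0))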
Results of calculation of force and energy parameters of fracture. The dependence of the fracture force parameters K1, K2 and K3 and of the J-integral on the position along the crack front is shown in Figures 8-9, where C is the length along the crack front measured from the crack end located on the reactor surface. For the analyzed crack depths and lengths it was found that the stress intensity factors K1, K2 and K3 change nonlinearly along the crack front. Figure 7(a) shows the stress fields in the crack-tip region. The effect of depth and crack length. Non-destructive testing methods often reveal cracks of different geometry located in the weld zone of the shell and the bottom of the reactor. A crack can form with different shapes, which leads to a change in the values of the fracture force parameters and, accordingly, affects the operational survivability of such a joint. The shape of the crack can change when the depth of the crack changes at a constant length, which should also be taken into account when analyzing the structural strength. In this calculation, the crack was located in the weld zone of the reactor shell, as shown in Figure 3(c). Figure 7(b) shows the region of the crack opening and the displacement isofields. The results of the calculation of the fracture force parameters K1, K2 and K3 and the J-integral are shown in Figures 9-10; the calculation was performed on the FE model of the reactor with a horizontally located elliptical crack, taking into account its size. The crack length C varied from 10 mm to 28 mm at a maximum crack depth of a = 2-4 mm; the crack was given in the form of an ellipse. It is established that the fracture force parameters K1 and K2 reach a maximum at the point of maximum depth along the crack front (Figure 8). With the growth of the crack length 2C from 10 mm to 28 mm, there is a monotonic increase in the values of the SIF (K1, K2, K3) and of the energy integral. With a change in the crack length from 10 mm to 28 mm at a maximum depth along the crack front of a = 2.0 mm, the first-kind SIF (K1) grew by more than a factor of 2.0. Effect of the angle of inclination of the crack. A crack can be located at a certain angle, which must also be considered in the analysis of structural strength. Therefore, a numerical experiment was performed to assess the effect of the crack shape, its location and its inclination angle on the fracture force parameters. The results of the calculation of the fracture force parameters for an elliptical crack inclined at an angle of 45° to the reactor axis are shown in Figure 11. The crack geometry was C = 24 mm in length with a depth of a = 8.0 mm. It was found that the dependences of the first-kind stress intensity factor and the J-integral on the position along the crack front, calculated for the crack located on the reactor shell, vary along the crack front and depend on the crack size, shape and inclination angle. Thus, for an elliptical crack, the first-kind SIF (K1) reaches a maximum at the most remote point of the crack front and decreases where the crack front exits to the surface. For example, for a crack with parameters C = 24 mm and a = 8 mm, located at an angle of 45° to the axis of the reactor, the SIF K1 along the crack front reaches values of 398.67 MPa·mm^1/2. Table 2 presents generalized values of the force and energy parameters of fracture for the entire range of considered crack lengths located in the most loaded cylindrical part of the reactor vessel.
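As a rough, independent cross-check of the qualitative trend reported above (K1 largest near the deepest point of the front and growing with crack length), the widely used Newman–Raju solution for a semi-elliptical surface crack in a flat plate under remote tension can be evaluated; a minimal Python sketch follows. The flat-plate formula, the assumed membrane stress p·R/t derived from the stated internal pressure, and the example crack dimensions (chosen from within the ranges mentioned in the text) are assumptions for orientation only and do not reproduce the paper's shell/weld FE results.

import math

def newman_raju_k1(sigma_t, a, c, t, b, phi):
    # Mode I SIF for a semi-elliptical surface crack in a plate under tension
    # (valid for a/c <= 1, a/t < 1); a = depth, c = half surface length,
    # t = thickness, b = half plate width, phi = parametric angle
    # (phi = pi/2 at the deepest point, phi = 0 at the free surface).
    q = 1.0 + 1.464 * (a / c) ** 1.65
    m1 = 1.13 - 0.09 * (a / c)
    m2 = -0.54 + 0.89 / (0.2 + a / c)
    m3 = 0.5 - 1.0 / (0.65 + a / c) + 14.0 * (1.0 - a / c) ** 24
    g = 1.0 + (0.1 + 0.35 * (a / t) ** 2) * (1.0 - math.sin(phi)) ** 2
    f_phi = ((a / c) ** 2 * math.cos(phi) ** 2 + math.sin(phi) ** 2) ** 0.25
    f_w = math.sqrt(1.0 / math.cos(math.pi * c / (2.0 * b) * math.sqrt(a / t)))
    f = (m1 + m2 * (a / t) ** 2 + m3 * (a / t) ** 4) * g * f_phi * f_w
    return sigma_t * math.sqrt(math.pi * a / q) * f

# Assumed hoop membrane stress p*R/t for p = 0.62 MPa, R = 2750 mm, t = 21 mm.
sigma_hoop = 0.62 * 2750.0 / 21.0  # about 81 MPa
for phi_deg in (0, 30, 60, 90):
    k = newman_raju_k1(sigma_hoop, a=4.0, c=14.0, t=21.0, b=500.0,
                       phi=math.radians(phi_deg))
    print(phi_deg, round(k, 1))  # K1 grows toward the deepest point (phi = 90)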
The observed change in K1 along the crack front is qualitatively consistent with a number of published elastic calculations for comparable geometric shapes of the welded joint and crack sizes. The results of similar three-dimensional elastic calculations by the finite element method are given in [4-6] for different crack lengths and agree well with the present data. Figure 10 shows the dependences of the SIF (K1) and the J-integral for an elliptical crack located at an angle of 45° to the reactor axis (curve 1: 2C = 24 mm, a = 8 mm). It should be noted that the comparisons above with published solutions are approximate due to incomplete information about the geometry of the object and the loading conditions. Conclusion. The elastic calculation of the force and energy parameters of fracture for an elliptical crack located in the zone of the reactor shell has been carried out. The dependences of the stress intensity factors of the first, second and third kind and of the energy integral are obtained taking into account the shape of the crack, its size and its angle of inclination. It is shown that for a crack located in the cylindrical shell of the reactor, the stress intensity factor of the first kind can vary (increase) along the crack front by a factor of 2.0 or more and reaches its maximum values at the most remote point of the crack front. The greatest danger is represented by cracks located in the welded joint zone of the cylindrical shell of the reactor vessel.
2020-01-09T09:22:32.327Z
2020-01-03T00:00:00.000
{ "year": 2020, "sha1": "b88182dd03c6997c0efd859a3cdf06e32b6e49ac", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/709/3/033026", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a5a9f301d296165d81ce5561a28e2eb742024078", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
247597896
pes2o/s2orc
v3-fos-license
Non-antibiotic treatment of acute urinary tract infection in primary care: a qualitative study Background The views of women with acute, uncomplicated urinary tract infection (auUTI) on the acceptability of non-antibiotic treatment options are poorly understood. Aim To establish women’s thoughts on and experience of non-antibiotic treatment for auUTIs. Design and setting Qualitative interview study with primary care patients in Oxfordshire, UK, embedded within the Cranberry for Urinary Tract Infection (CUTI) feasibility trial. Method One-to-one, semi-structured interviews were conducted between August 2019 and January 2020 with some CUTI trial participants and some patients who were not part of the CUTI trial who had experienced at least one urinary tract infection (UTI) in the preceding 12 months in Oxfordshire, UK. Interviews were analysed using thematic analysis. Results In total, 26 interviews were conducted and analysed. Women expected to receive an immediate antibiotic for their UTI but were aware of the potential harms of this approach. They were keen to find a non-antibiotic, ‘natural’ alternative that could effectively manage their symptoms. In certain situations (early illness, milder illness, and with no important upcoming engagements), women indicated they would be prepared to postpone antibiotic treatment by up to 3 days, especially if offered an interim non-antibiotic option with perceived therapeutic potential. Conclusion Many women with auUTIs are open to trying non-antibiotic treatments first in certain situations. There is scope for more dialogue between primary care clinicians and patients with auUTI around delaying antibiotic treatment and using non-antibiotic options initially, which could reduce antibiotic consumption for this common infection. INTRODUCTION As antibiotic resistance continues to rise, 1 there has been growing interest in non-antibiotic treatments to manage common bacterial infections, such as acute, uncomplicated urinary tract infection (auUTI). AuUTIs are commonly managed in general practice, 2 and are almost always treated with immediate antibiotics. 3 However, many UTIs are self-limiting, and there is potential to avoid antibiotics. Trials of non-steroidal anti-inflammatory drugs (NSAIDs) [4][5][6][7] and herbal treatments 8,9 for auUTI treatment have typically resulted in reduced antibiotic consumption but worse symptom control. Many trials of cranberry extract for UTI prevention have been conducted, with promising results. [10][11][12][13] However, many have suffered methodological problems, such as high participant drop-out attributed to difficulty drinking large volumes of cranberry juice over extended periods. 11 A systematic review found limited evidence for or against using cranberry extract to treat auUTIs. 14 Despite this, up to 27% of women report consuming cranberry products for auUTI treatment and around 17% use cystitis sachets to help manage auUTIs, 15 despite an absence of randomised trial evidence. 16 In addition to establishing the efficacy of non-antibiotic treatments through clinical trials, it is critical to understand women's thoughts on and experiences of using them for symptoms of auUTI. Such qualitative exploration provides an understanding of whether and how women might engage with such treatments if they were shown to be clinically effective and introduced into routine clinical practice. Previous qualitative research has established that a delayed antibiotic strategy may be acceptable to some women. 
17,18 However, limited studies have focused on exploring the acceptability of non-antibiotic treatments as part of a delayed antibiotic strategy for auUTI symptom management. 19 The aim of this study was to explore women's views on treating auUTIs with non-antibiotic treatments. METHOD Context and recruitment Interviews were embedded within the Cranberry for Urinary Tract Infection (CUTI) feasibility trial, an open-label, randomised feasibility trial of the use of cranberry extract in treating symptoms of auUTI in primary care. 14 In the CUTI trial, patients with auUTI presenting to participating general practices in Oxfordshire were randomly assigned to one of three groups: • immediate antibiotic prescription; • immediate antibiotic prescription and immediate cranberry capsules; or • immediate cranberry capsules and a delayed antibiotic prescription in case symptoms worsened or did not improve within 3-5 days. CUTI trial methods and results have been published in full elsewhere. 20,21 Semi-structured interviews 22 were conducted with a sample of CUTI trial participants and non-CUTI trial patients who had experienced at least one auUTI in the preceding 12 months. The non-CUTI trial patients were identified through an electronic search of women aged ≥18 years with an auUTI in the past 12 months conducted at a general practice in Oxfordshire, outwith the CUTI trial. This practice was chosen to facilitate maximum-variation sampling: it was in an area of higher deprivation and with more ethnic diversity compared with CUTI trial practices. Fully informed, written consent was obtained from each participant prior to being interviewed.
How this fits in To the authors' knowledge, interview studies have not explored women's views on using non-antibiotic treatments as a way of managing symptoms of acute, uncomplicated UTI. While women generally perceive antibiotics to be an effective and reliable treatment, they are aware of the potential harms associated with antibiotic consumption, frequently mentioning fears of becoming 'immune' to their effects. This study found that many women view non-antibiotic treatments, such as cranberry extract and cystitis sachets, positively, and are willing to try them in certain situations (for example, for early symptoms, milder symptoms, and when they do not have important upcoming engagements). There is scope for healthcare professionals to have more discussions with women about considering a delayed antibiotic strategy; offering a non-antibiotic treatment in the interim may make this approach more acceptable to women. Participants The aim was to conduct 20-30 interviews with women aged ≥18 years who had experienced an auUTI in the preceding 12 months, 23 with the final number determined by data saturation. 24 Immunosuppressed women, women with underlying urological abnormalities, and women receiving palliative care were excluded as such women are more likely to experience complicated UTIs and require immediate antibiotics. The authors hoped to employ a purposive maximum-variation sampling strategy 25 with regard to age, ethnicity, and whether or not women were CUTI trial participants. Data collection A narrative, semi-structured interview guide 22 was used to explore participants' experiences (Supplementary Appendix S1). The topic guide was developed by the lead author through a review of the literature on non-antibiotic treatments of UTIs, in consultation with one other author, and was reviewed by patient and public involvement (PPI) contributors. Patients with a UTI were encouraged to tell their story about their most recent UTI from when they first suspected they had a UTI to the end of the illness episode. Additional questions elicited further details on help-seeking behaviour, self-care strategies, thoughts on non-antibiotic treatments, and experience of taking part in the CUTI trial, where relevant. All interviews were conducted by the lead author, audio-recorded, and professionally transcribed verbatim. Philosophical approach An interpretivist approach was taken, 26 recognising that a person's beliefs about UTI and non-antibiotic treatments are dependent on their prior experiences, their context, and interactions they have had. These beliefs are also changeable in light of new experiences, contexts, and interactions (for example, moving to a different country with a different healthcare system). Data analysis A thematic analysis was conducted with analysis and data collection performed concurrently. Thematic analysis involves constructing and analysing patterns (or themes) within data. 27 The lead author read transcripts and listened to audio-recordings several times to aid familiarisation, allowing immersion in the data. NVivo (version 12) software was used to organise the data and facilitate coding. The lead author grouped codes relating to similar phenomena into categories, and, in discussion with two other authors, subsequently generated themes and subthemes to describe the data through an iterative process. Once the thematic structure was finalised, theme labels were
refined to comprehensively describe the data within, and supporting quotes chosen to illustrate the themes and subthemes. PPI Four PPI contributors were involved with the CUTI trial and interview studies from the outset. They reviewed all public-facing documents (such as the participant information leaflet). The developing analysis was shared with PPI contributors to seek their thoughts on the findings, which were incorporated into the analyses. RESULTS In total, 27 interviews were conducted with CUTI trial participants (n = 14) and non-trial UTI patients (n = 13) between August 2019 and January 2020. One interview with a non-trial UTI patient was not analysed as the participant met an exclusion criterion (immunosuppressed). Interviews ranged from 28-72 minutes (mean 54 minutes). Participant characteristics are described in Table 1. The three themes presented here demonstrate women's thoughts on and experiences of UTI management, including: self-care; treatments (immediate/delayed antibiotics and non-antibiotic options, within and outwith the CUTI trial); and help-seeking behaviour. Theme 1: treatments, cures, and symptom control Women often spoke of finding a 'cure' for their UTI, and for many women antibiotics represented a cure: most reported finding antibiotics an effective treatment for UTIs, which they perceived worked quickly. Interviewees often expressed a tension between finding antibiotics effective while wanting to avoid taking them if possible. A recurring sentiment expressed was fear of becoming 'immune' to antibiotics if they took them frequently: 'You don't sort of want to, you know, take too many of them [antibiotics] and then like the next episode you have, you cannot sort of treat it well, well enough, you know.' (Non-trial participant [NT]8, aged 31 years) For other women, typically women who had experienced recurrent UTIs or protracted UTI symptoms, a cure instead implied a permanent end to their UTIs, which went above and beyond their perceived capabilities of an antibiotic: Lead author: '… when you then contacted your GP practice … what were you sort of hoping for?' NT6: 'I think I was … hoping for a cure but knowing that perhaps I would only get antibiotics.' (NT6, 62 years) Over-the-counter (OTC) remedies, such as cystitis sachets and cranberry products, were generally seen as more 'natural' and more easily accessible than antibiotics, but less reliably effective. Some women used them as holding measures to provide symptom relief when it was not possible to get a GP appointment. However, other women used them as a UTI treatment in their own right, particularly with milder symptoms, in the earlier stages of their illness, and if they did not have an important upcoming engagement (for example, going on holiday). 'They're [cystitis sachets] quite effective if you, if you get it early but once an infection takes hold, they're, they're not very, in my, from my experience, not very effective.' (NT12, 46 years) Women frequently reported that they would increase their fluid intake as an early part of their UTI management. Similarly to OTC remedies, increasing fluid intake was viewed by some as a holding measure. However, while women usually discontinued OTC measures when starting antibiotic treatment, women often continued increased fluid intake alongside taking antibiotic treatment, perceiving it as a treatment adjunct and a way to 'flush out' the infection.
Some women felt that a mild UTI could even be treated through increased fluid intake alone: 'If it's [the UTI] really mild … you can flush it out with water … ' (Trial participant [T]10, 77 years) Cranberry juice was commonly reported in this context. Many women perceived that cranberry juice might have specific therapeutic properties over and above other fluids: 'I've heard cranberry mentioned so much over the past years, going back from my own GP who is saying, you know, "Drink, drink cranberry juice and whatever." So it's always been in the loop … ' (T7, 81 years) Few women were aware of, or had tried, cranberry in capsule/tablet form, prior to taking part in the CUTI trial/interview study. Women usually felt that cranberry capsules/tablets would be preferable to consuming cranberry juice, because of concerns about the taste and sugar content of juice formulations. Those women who had used cranberry tablets (outside of the CUTI trial) usually reported using them as a means of preventing UTI, rather than as a way of managing an acute UTI: 'I've been taking the cranberry tablets just sort of daily.' (NT13, 32 years) Most women did not naturally link analgesia (such as paracetamol and ibuprofen) with treating a UTI. Women who reported taking analgesia saw it as a means of alleviating certain symptoms (such as abdominal pain), but not as a way of treating their UTI: 'It [taking analgesia] wasn't something I really thought or associated really … I was thinking really more about actually making what was causing it better rather than actually taking the painkillers to mask it.' (T3, 51 years) Theme 2: functional and formulaic -UTI consultations in general practice and the role of the healthcare practitioner Women typically contacted their GP when they perceived that their symptoms were severe and/or inconveniencing. Physical evidence, namely haematuria, was seen as confirmation that symptoms warranted attention and a legitimate reason for seeking medical attention: 'If I see blood in my urine that's like the sign I need to ring a GP … it's not just in my mind … something is going on.' (NT8, 31 years) On contacting their GP, women hoped to be seen quickly by a healthcare practitioner and expected an immediate antibiotic prescription; consultations appeared to be set up to meet this expectation: 'I rang the doctor's appointment and asked them if I could come and expecting them to maybe just give me some antibiotics … because before when I've gone in, I've always had antibiotics …' (T1, 23 years) Women described quick and focused consultations, and sometimes sensed that healthcare practitioners seemed to be following an algorithm, for which the outcome was usually an antibiotic prescription: 'They will just say, infection yes or no; antibiotics, yes or no.' (NT13, 32 years) 'They checked the urine and they said, "You have a UTI; we'll give you antibiotics."' (NT7, 18 years) Outside of the CUTI trial, discussions about non-antibiotic treatments with healthcare practitioners were unusual. Women described that healthcare professionals tended to express negative sentiments (stating that non-antibiotic options did not work) or neutral feelings (stating that non-antibiotic options were unlikely to help, but unlikely to do harm): 'I said … "I can recover without antibiotics?" but he said, "No", so. So was very clear. He said, "Antibiotics or nothing, but nothing, make sure you are going to get worse." He didn't mention about there is another method.' 
(NT9, 40 years) 'There's certain doctors that I think probably don't really talk about over-the-counter stuff and if you mention it they'll be, like, dubious.' (NT5, 27 years) Some women also suggested that different healthcare professionals provided conflicting advice about OTC treatments. Despite this, interviewees usually stated that their view of OTC remedies would be influenced by the views and recommendations of their healthcare practitioner, and the relationship they share: 'I trust my doctor if I was talking to him, if he advised something, I would take it because I trust him. I know him very well and he knows me.' (T10, 77 years) 'I don't feel knowledgeable enough to go and pick something up and feel like, yes this is going to work. I suppose unless somebody in the health profession had recommended me to do so.' (T12, 32 years) Theme 3: changing the treatment paradigm Women tended not to report that a delayed antibiotic strategy had been used in the management of their UTIs, outside of the CUTI trial. Some women considered that the delayed antibiotic approach provided a welcome opportunity to avoid consuming antibiotics for an acute UTI. However, women weighed this potential benefit against other factors, such as the severity of their symptoms and the timing of their presentation to general practice. A few women, typically those who had previously experienced an upper UTI, also factored in the potential risk of developing a complicated UTI if antibiotic treatment was delayed: 'I might be a bit grumpy about it. I think it would depend how awful I felt … ' (NT3, 65 years) Many women felt that a delay of 3-5 days (as suggested in the CUTI trial) was too long; a shorter delay of 2-3 days was generally considered more acceptable. Women expressed that having contacted their GP it was important for them to receive something by way of treatment in the interim: 'I'm not sure why you'd just delay antibiotics without doing anything because the whole reason for going to your doctor is that you've reached a decision, a big decision that you want to go the doctor to get it sorted out and the fact that the doctor says we'll just wait another couple of days, you might already have waited three, four, five days before you made that decision to go to the doctor … ' (T14, 60 years) Women in the CUTI trial were randomly assigned to one of three groups. Group 1 of the trial (immediate antibiotics alone) aligned with what most women had originally hoped for on contacting their GP, and was therefore the preference for some women. This was particularly the case for women who were experiencing perceived severe symptoms, or who had already delayed seeking medical attention. 
Group 2 (immediate antibiotics and immediate cranberry capsules) was seen by some as the 'best of both worlds,' allowing them to experience the benefit of taking an antibiotic along with any potential additional beneficial effect of cranberry: 'Probably the safest one would be the immediate antibiotics and the cranberry capsules because you get both basically, so that's like a, you know, a full force attack … ' (NT8, 31 years) Women assigned to this group compared their UTI experience within the trial with their previous UTI experiences that had been managed with antibiotics alone, and often reflected that they perceived an additive benefit to taking cranberry alongside antibiotics: 'Between the two [antibiotics and cranberry capsules], it cleared it up very quickly … it was really, really helpful … ' (T8, 77 years) 'I took the cranberry tablets alongside [antibiotics] and I thought the symptoms had gone but within a couple of days, I felt they were coming back again. [I] Took some cranberry tablets … I took another two days' worth … I think that probably if I hadn't had the cranberry tablets I would have had to go back to the GP for more antibiotics.' (T5, 60 years) However, other women primarily saw the utility of cranberry as a way of avoiding antibiotics. Group 3 (immediate cranberry and a delayed antibiotic prescription in case symptoms worsened or did not improve within 3-5 days) was therefore the preferred group for women expressing this view. These women were keen to establish whether cranberry extract would help them personally and felt that combining cranberry with antibiotics would make this more difficult to ascertain. Women felt reassured that they would receive back-up antibiotics, which one woman described as a 'parachute', giving confidence to try a new intervention. While some women in group 3 of the trial were able to avoid taking antibiotics, 21 the women who were interviewed in this group all ended up taking their delayed antibiotic prescription. However, some women interviewed in this group suggested that cranberry had some effect, albeit not as potent as antibiotics, and seemed to prevent symptom deterioration: 'I felt that actually just taking the tablet, the cranberry tablets it was just like, it didn't get rid of it, but it helped, I think it prevented it from getting any worse. So, the actual like burning feeling and needing to go for a wee, I felt like it was a steady level.' (T1, 23 years) DISCUSSION Summary Acute UTI symptoms can be severe and disruptive to women's lives. Antibiotics were usually perceived as a reliable treatment, or indeed a cure. However, respondents were also aware of potential harms associated with antibiotics, such as becoming 'immune' to their effects. Non-antibiotic measures were variably used as UTI treatments, treatment adjuncts, holding measures, or for symptom relief. Non-antibiotic measures were perceived as more natural but less potent than antibiotics, with better results and greater acceptability if used earlier in the illness course, with milder symptoms, and when patients had no important upcoming engagements. Women were willing to consider a delayed antibiotic approach, but this option was not usually offered in GP consultations. The perceived binary choice between antibiotics or no antibiotics did not appear to leave much room for wider discussions. The acceptability of the delayed antibiotic approach was improved by offering an interim non-antibiotic treatment with perceived therapeutic potential.
Strengths and limitations Interviews were conducted with patients with UTIs within and outwith the CUTI trial. Interviews were conducted with non-CUTI trial participants to increase the chance of capturing the views and experiences of women who might be less amenable to trying non-antibiotic treatments. However, it is possible that women who were interested in non-antibiotic treatments for UTI were more likely to respond positively to the interview invite. Interviews were conducted with women of a range of ages and from a range of backgrounds. A diverse sample was attempted by recruiting non-CUTI trial participants from a general practice with more ethnic diversity compared with the CUTI trial practices. However, despite best efforts, there was limited ethnic diversity in the sample. The results may therefore be less reflective of the experiences of women with UTIs from minority ethnic groups. Furthermore, electronic searches may have missed some potentially eligible patients. All interviews were conducted by the chief investigator of the CUTI trial (the lead author), which might have influenced the participants' responses. To minimise the potential for this, participants were reminded before the interviews that they were the experts in their experiences and that the interviewer was keen to hear all views. Finally, ethical approval to interview women who declined to take part in the CUTI trial was not received. Interviewing CUTI trial decliners may have provided additional, useful insights. Comparison with existing literature The findings of the present study suggest that women would consider delaying antibiotics in certain situations, such as with earlier and milder UTI symptoms. This is in keeping with previous research that explored GP and patient experiences of delayed prescribing for UTIs. 18 The authors concluded that the decision to delay antibiotics should depend on whether self-management strategies have already been tried before seeing the GP, and on symptom severity. Leydon et al 17 also explored women's thoughts on and experiences of delayed antibiotics for UTIs and found that women would be prepared to delay antibiotics if their symptoms were not severe. The authors also suggest that GPs should be mindful that women may not always want immediate antibiotics. The present study builds on these findings and provides additional unique insights that suggest that the acceptability of a delayed antibiotic strategy for UTI may be increased by receiving a non-antibiotic alternative treatment in the interim. Interviews with women taking part in a trial of immediate antibiotics versus immediate ibuprofen for auUTI in Germany showed they perceived that it was safe to take part because UTI is 'not a serious condition'. 19 Indeed, four of the five trial decliners interviewed had refused trial participation on the basis that they wished to avoid immediate antibiotics. This contrasts with the findings of the present study, in which women reported significant, disruptive symptoms associated with UTI. Furthermore, the commonest reason for women declining to take part in the CUTI feasibility trial was to avoid being assigned to the delayed antibiotics group. 21 This difference may represent cultural differences in the way that UTIs and antibiotics are perceived. Of note, outpatient antimicrobial prescription in Germany is lower than in the UK. 28 A number of studies have evaluated the use of ibuprofen as an alternative treatment for acute UTIs.
[4][5][6][7] However, women interviewed in the present study did not perceive analgesia as a UTI treatment but rather a means of alleviating pain. This is in keeping with a questionnaire study of women with UTI, which found that most women take antibiotics because they want to 'combat bacteria'. 29 The need to 'combat bacteria' may also be prevalent among healthcare practitioners. In an interview study exploring GPs' experience of delayed antibiotic prescribing for a UTI, GPs felt that a firm UTI diagnosis warranted antibiotics. 18 They might consider a delayed antibiotic approach for equivocal UTI symptoms but were more used to applying the delayed antibiotic approach in the context of acute respiratory tract infections. 18 Urine samples sent for culture are the commonest specimens sent to microbiological laboratories. 30 Healthcare practitioners routinely receiving urine culture results that show bacterial growth may serve to reinforce immediate antibiotic prescribing behaviour. As throat swabs/sputum cultures are not routinely sent to laboratories for respiratory infections in primary care, 31,32 the same positive reinforcement of antibiotic prescribing behaviour may not be present for acute respiratory tract illnesses. Implications for research and practice Women with auUTIs are amenable to trying certain non-antibiotic treatments with advice to delay antibiotics for a short period, in some situations. National Institute for Health and Care Excellence guidance suggests that a delayed antibiotic prescription can be considered for women with auUTIs, taking into consideration various factors such as symptom severity and patient preference. 33 The present study, in keeping with previous studies, 17,18 suggests that this does not happen often. There is therefore scope for clinicians to have more discussions with women about a delayed antibiotic approach, particularly if women have presented earlier in their illness, have milder symptoms, and do not have pressing upcoming engagements. Such consultations should be sensitively conducted, recognising that the patient is the expert in their own symptoms and is best placed to make an appropriate decision. Clinicians should also bear in mind that, if a patient rejects a delayed antibiotic approach on one occasion, this does not necessarily mean that they will reject it in future; often, the decision is dependent on their experience of the UTI episode in question, rather than being a fixed preference. These discussions may lengthen a consultation, but they may have the potential to reduce antibiotic consumption and empower women to take control of their acute UTI management. While a delayed antibiotic approach will not prevent all women from consuming antibiotics, given that auUTIs are common and are almost always managed with an immediate antibiotic prescription, 3 there is scope to meaningfully reduce antibiotic consumption for auUTIs through taking this approach. A delayed antibiotic prescription may be more acceptable to women if they receive an interim, non-antibiotic treatment that they perceive to be potentially effective. The results of the CUTI feasibility trial are suggestive of possible, preliminary evidence of an effect of cranberry extract on reducing antibiotic consumption for acute UTI. 21 Adequately powered clinical trials are needed to definitively establish whether non-antibiotic treatments like cranberry and cystitis sachets are safe and effective treatments, and to better define when delayed antibiotics are suitable. Ideally, these trials should include a qualitative evaluation to better understand whether/when/how women might find non-antibiotic treatments acceptable when integrated into auUTI management. Such trials and interviews should incorporate the views of people from diverse ethnicities, for example, by engaging community-bridging researchers who speak a variety of languages. 34 Funding This work was supported by the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR) (grant reference: SPCR-2014-10043) and the Wellcome Trust (grant reference: 203921/Z/16/Z). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any author-accepted manuscript version arising from this submission.
Sarah Tonkin-Crine received funding from the NIHR Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance at the University of Oxford in partnership with Public Health England. Carl J Heneghan receives funding support from the NIHR SPCR and the NIHR Biomedical Research Centre Oxford, and is funded by the World Health Organization for a series of Living rapid reviews on the modes of transmission of SARS-CoV-2 (reference: No. 2020/1077093). The funders had no role in study design, manuscript submission, or collection, management, analysis, or interpretation of study data. The views are those of the authors and not necessarily those of the NIHR or Department of Health and Social Care, nor the Wellcome Trust. Ethical approval The Cranberry for Urinary Tract Infection feasibility trial and interview study were approved by the South Central Oxford B Ethics Committee (Research Ethics Committee reference: 18/SC/0673) and the Health Research Authority (IRAS Project ID: 249672). Competing interests Oghenekome A Gbinigie received funding from the NIHR SPCR and the Wellcome Trust. All other authors have declared no competing interests.
2022-03-17T15:19:20.480Z
2022-03-15T00:00:00.000
{ "year": 2022, "sha1": "7fd872ba481d371f418f6c5a0b5cfeb91b707221", "oa_license": "CCBY", "oa_url": "https://bjgp.org/content/bjgp/early/2022/03/20/BJGP.2021.0603.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fdde7f6e72e7431c1f30b6ae61238dbfa03e55c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
240358420
pes2o/s2orc
v3-fos-license
Knowledge, attitudes and practices of health care providers trained in responding to violence against women: a pre- and post-intervention study Background Violence against women is a serious public health concern, and is highly prevalent globally, including in India. Health-care providers [HCPs] can play an important role in addressing and reducing negative consequences of violence against women. We implemented a pre-post intervention study of HCP training in three tertiary care facilities in Maharashtra, India. Methods The study used a pre-post intervention design with assessment of HCPs' (n = 201) knowledge, attitudes, perceived preparedness and practice at three time points: before training, after training and at 6 months follow-up. Results Total median scores of knowledge about common signs and symptoms of violence (8.89 vs. 10.00), attitudes towards acceptability of violence (9.05 vs. 10.00), and individual (6.74 vs. 10.00) and system-level preparedness (6.11 vs. 8.14) improved from pre- to post-training. The generalized estimating equation [GEE] model, adjusted for age, sex, site and department, showed an improvement in knowledge, attitudes and preparedness post-training. The change from pre-training to 6 months follow-up was not significant for attitude. Conclusions This package of interventions, including training of HCPs, improved HCPs' knowledge, attitudes and practices, yet changes in attitudes and preparedness were not sustained over time. This study indicates the feasibility and positive influence of a multi-component intervention to improve HCP readiness to respond to violence against women in a low-resource setting. Future phases of intervention development include adapting this intervention package for primary and secondary health facilities in this context, and future research should assess these interventions using a rigorous experimental design. Finally, these results can be used to advocate for multi-layered, systems-based approaches to strengthening the health response to violence against women. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-12042-7. Background Violence against women is a pervasive and highly prevalent health and social problem, with estimates showing that almost one in three women globally have experienced physical and/or sexual violence by an intimate partner or non-partner sexual violence in their lifetime [1]. It is a public health problem with significant implications for women's mental, physical and social wellbeing [2]. There is well-documented evidence suggesting strong associations between violence against women and health consequences including sexually transmitted infections, HIV, unwanted pregnancy, miscarriage, injuries, depression, suicidal ideation, substance abuse and chronic pain [3][4][5][6][7]. Intimate partner violence [IPV] is the most prevalent form of violence against women. In India, the fourth round (2015-16) of the National Family Health Survey (NFHS) found that 31% of ever-married women have been subjected to physical or sexual violence by their husband in their lifetime. Among women who have experienced partner violence, only 14% have sought any form of formal or informal support following such violence [8]. This is despite widespread evidence that suggests that both formal and informal support systems can play a crucial role in mitigating consequences of violence against women [9,10].
Health systems have an important role to play in a coordinated multi-sectoral response to violence against women [11]. Women facing violence have frequent contacts with health systems. Even if women do not disclose violence to healthcare providers [HCPs], HCPs are in an ideal position to identify and respond to women facing violence [12][13][14]. However, there are significant barriers that prevent HCPs from identifying abuse and providing appropriate care and support to women affected by violence [15,16]. Studies have suggested that HCPs' lack of awareness and skills, and their prejudicial attitudes towards and stereotypes of violence against women, are major factors preventing abused women from accessing quality healthcare [17,18]. For example, a study in Kenya found that providers' understandings of violence against women were based primarily on their experiences of addressing non-partner sexual violence. This had implications for their (lack of) willingness to identify and provide an appropriate response for women experiencing IPV [19]. In the Indian context, these barriers are evident from the fact that only 1% of married women who have ever faced violence from a husband sought any support from an HCP, although a quarter of women reported injuries as a consequence of violence, indicating that even in the case of injuries, survivors may not disclose violence [8]. In-service training is one of the widely suggested ways to address barriers faced by HCPs in responding to violence against women [20,21]. In order to provide the basis of a comprehensive health system response to violence against women, these training programs should influence providers' beliefs and increase HCPs' knowledge and skills to respond to women facing violence, while ensuring their safety and protecting their confidentiality [11]. There is some evidence of the effectiveness of training interventions using interactive techniques [22]. A pre- and post-intervention study of a training intervention for public health midwives on responding to IPV in Sri Lanka found that role plays, field handbooks and cultural sensitivity training were important components of the intervention [23]. The training intervention significantly improved midwives' skills in identifying women affected by violence, improved midwives' knowledge of violence against women and decreased perceived barriers to supporting women affected by violence. In low- and middle-income countries [LMICs], there are numerous challenges in conducting in-service training of HCPs to improve the healthcare response to violence against women [24,25]. These include gender-blind medical education, HCPs' lack of time and heavy patient load, and a high turnover of HCPs in health facilities. Furthermore, there are several system-level constraints, such as inadequate numbers of available health workforce, limited infrastructure and a lack of support services for referrals, which need to be addressed in order to improve the quality of the health system response [26]. Studies suggest that in order to strengthen the health system response to violence against women, training of healthcare providers alone is not sufficient [27][28][29].
For example, in a study of a training intervention for general practitioners and residents in general practice in Greece, the intervention resulted in an increase in knowledge and self-preparedness, but this did not translate into significant changes in clinical practice, indicating that systems-level changes are needed for training of HCPs to produce sustainable improvements in clinical practice [27]. Systems-level support includes the establishment of standard operating procedures, referral linkages, building leadership support, supportive supervision and the availability of adequate infrastructure for ensuring privacy and confidentiality. There is little evidence on the characteristics, methodologies and effectiveness of training interventions adequate for meeting the needs of healthcare providers in LMICs [30]. There are also gaps in understanding of how to implement systems readiness activities (e.g. infrastructure, management support) to further sustain changes in health care providers' abilities to retain knowledge and skills and improve their clinical practice in responding to violence against women. In response to this urgent public health problem, the World Health Organization [WHO] published clinical and policy guidelines, Responding to intimate partner violence and sexual violence against women, in 2013 [henceforth, the Guidelines], to strengthen health care provider capacity and health system readiness to respond to violence against women [31]. The Guidelines provide evidence-based recommendations to equip HCPs with guidance on what to do to respond to intimate partner violence and sexual violence against women. WHO has published two tools to translate the Guidelines into practical "how to" instructions and job aids. One is a clinical handbook for health care providers, Health care for women subjected to intimate partner violence or sexual violence (2014) [32]. The second is a manual for health managers, Strengthening health systems for women subjected to intimate partner violence or sexual violence (2017) [33]. There remain significant gaps in knowledge of how to implement and facilitate the uptake of these tools in order to effectively improve the performance of HCPs and health system/service readiness. To address these gaps, CEHAT (Centre for Enquiry into Health and Allied Themes), a Mumbai-based research organisation, collaborated with WHO to explore approaches to implement the Guidelines and WHO tools in three tertiary hospitals of Maharashtra, India, through a multi-component implementation research project. We conducted a mixed-methods study piloting the implementation of the Guidelines and WHO tools, in order to improve understanding of local contextual factors influencing implementation, particularly training, to assess intervention outcomes and to support future scale-up. The specific objectives of the overarching study were: 1. To validate approaches to roll out the training and service delivery improvement activities based on the Guidelines and associated tools by: a. assessing needs and priorities of health care providers and managers in responding to violence against women; b. adapting and implementing the training and assessing improvements in provider knowledge, attitudes, perceived preparedness and practice; c. assessing the relevance of the training approaches in meeting the needs of health care providers and identifying barriers and facilitators for health care providers to deliver care to women subjected to violence 2.
To understand the perceptions of quality of care of women subjected to violence who have received care from trained health care providers. 3. To develop, validate and refine instruments for measuring health care providers' performance and health system/service readiness. This manuscript reports on findings from Objective 1b; specifically, we seek to assess: i) if the training intervention improved HCPs' knowledge, attitudes and practices related to responding to violence against women, ii) if those improvements were maintained between post-training and 6-month follow-up, and iii) if age, sex, department and site of the HCPs were associated with changes in knowledge, attitudes and practices. Methods The study used a pre- and post-intervention design with assessment of HCPs' knowledge, attitudes, perceived preparedness and practice at three time points: before training, immediately after training and 6 months follow-up. This study is the pilot stage of a multi-phase research project, and the various aims (listed above) were a first step prior to our plans to expand the intervention and conduct an impact evaluation using an experimental design. In this pilot phase, we sought first to see if the intervention could be adapted and was feasible to implement in this context, meaning that HCPs attended the trainings and demonstrated measurable changes in aspects of clinical practice addressed in the training. This study design was selected to ensure the most rigorous design possible while taking into account the range of challenges present in implementing training and assessing training outcomes in this context. Setting This study was conducted in two districts in the state of Maharashtra, India between July 2018 and April 2019. The study was carried out at three tertiary medical teaching hospitals: Aurangabad Government Medical College, Aurangabad; Miraj Government Medical College; and Sangli District Hospital. These facilities were identified based on their participation in a prior collaborative project with CEHAT on integrating gender within medical education [34], implemented in these and five other hospitals in the state of Maharashtra. The selected hospitals were attached to the medical colleges that had performed best in terms of integrating modules on gender in the pre-service curriculum. Further, they were selected as there were at least two gender-sensitive medical educators in these facilities who could lead the implementation of the intervention. These hospitals were identified as being the most suitable for this study because integration of gender within medical education can be utilized as a foundation upon which to further build capacity related to violence against women. However, there is still great diversity within and across the hospitals in both districts in terms of capacity and readiness to respond to VAW. Aurangabad district has a population of about 3.7 million and the Government Medical College is the biggest tertiary care hospital for central Maharashtra [35]. It has more than 1000 beds and serves as a referral point for both the urban and rural population. The average flow of patients on an outpatient basis is about 27,000/month, with about 5600 in-patient visits per month [36]. Sangli district has a population of about 2.8 million with 75% of the population being rural [37]. Miraj Medical College has 320 beds, while the district hospital managed by the college has a total of 380 beds.
On average, 52,000 patients visit Miraj Medical College and Sangli District Hospital every month on an outpatient basis [38]. Participants The training participants were selected based on the following criteria: i) HCPs providing services to patients in any of the following three departments: Obstetrics & Gynecology, General Medicine and Casualty/Emergency (these three departments were selected as more women access clinical care in these departments); and ii) HCPs who were less likely to be transferred from the study sites for the duration of the study. Given that this study was conducted as part of a formative phase of research, the sample size in this study was not based on power calculations. Rather, it was determined based on the feasibility of including the largest number of healthcare providers in the training given their availability and interest in developing skills to respond to violence against women. We estimated that a minimum of 30% of healthcare providers could take part in the training and be retained and feasibly followed up over the three time points. This gave us an approximate sample size of 170 HCPs, which was further increased to 220 to account for an expected 20% attrition of providers at the 6-month follow-up assessment. Intervention The training approach employed the following steps. A cascade training approach was used, wherein a selected group of senior administrators of the selected departments were trained as master trainers. These master trainers were trained over a period of five days by experts such as other healthcare providers, as well as lawyers, academicians and women's rights activists experienced in training healthcare providers on violence against women. The master trainers then trained other HCPs in their own facilities, both their peers and junior providers. A total of eight two-day trainings and eight half-day peer-led refresher trainings were conducted by trainers at their respective health facilities. A maximum of 30 participants were included in each training, and a mix of doctors, nurses and social workers were trained together. The trainings were planned in advance so that arrangements could be made to cover routine clinical service provision. The rationale for including different cadres together in training sessions was to minimize the inter-professional hierarchies that exist between doctors, nurses and social workers, to create a team approach across health professional cadres, and to allow triaging of survivors needing care in accordance with the role/function of each cadre and their time availability to carry out certain tasks related to responding to violence. The training was built on a draft curriculum manual developed by WHO, based on the Guidelines, and on CEHAT's curriculum [39], which reflects its decade-long work with the Indian public health sector on violence against women. The training content was translated into Marathi and was also made available as a manual to trainers. Table 1 shows the topics included in the modules implemented in the training. Participatory methods including role plays, clinical case studies, vignettes, and games were used by trainers to deliver the training. At one site, the trainers invited a protection officer to jointly conduct the session on the legal mandate of healthcare providers. At the end of the training, a pocket-sized reminder card describing steps for providing first-line support to women was given to each participant.
Additionally, several system-level changes were implemented to enable HCPs to apply the skills they learned during the training and to sustain changes in clinical practice. These system changes included: i) establishment of standard operating procedures for privacy, confidentiality, clinical care and documentation of cases; ii) establishment of referral linkages by organizing a meeting between healthcare administrators and organizations providing support services; iii) creation of a referral directory for healthcare providers; iv) introduction of a one-page documentation register as part of the health management information systems to enable HCPs to document cases of violence; v) creation of job aids for providing care, documentation and maintaining privacy and confidentiality; and vi) discussion of care and support for domestic violence survivors ('case management') in clinical meetings with HCPs to facilitate supervision, mentoring and peer-to-peer learning for other HCPs in the facility.

Study instrument

A self-administered, paper-based structured questionnaire was used to assess HCPs' knowledge of, attitudes towards, perceived preparedness for and clinical practices regarding violence against women at baseline, immediately after the training and 6 months after the training [40]. The items were drawn from two existing instruments, the PREMIS and the DVHPSS; the PREMIS has been validated in the United States of America and the DVHPSS has been validated in Uganda and Nigeria [41]. Items pertaining to Indian legal frameworks, such as an item on the provisions of the Protection of Women from Domestic Violence Act, 2005, were added to adapt the tool to the local context. The instrument was translated into Marathi and piloted with a sample of 20 doctors and nurses working in a tertiary hospital in Mumbai, Maharashtra. The results of the pilot test indicated that some changes were needed, including improving the clarity of some items and further adapting other items to the local context. For example, the term "intimate partner violence" was replaced with the term "domestic violence" throughout the tool, as legal and policy frameworks in India refer to domestic violence to indicate domestic relationships that go beyond an intimate partner and capture violence perpetrated by in-laws and other family members. The item "It is important not to share or discuss the woman's information with anyone unless she authorizes it" was changed to "It is important not to share or discuss the woman's information with anyone unless she consents to it," as the word "authorize" was not clear to providers. We calculated Cronbach's alpha for the various domains based on baseline data and, considering these results, dropped some questions focused on gender norms and on perceptions of the role of healthcare providers from this analysis because of very low values of Cronbach's alpha. The following constructs were analysed for this manuscript: i. Knowledge: knowledge in this analysis was measured using 15 items with responses of yes, no or don't know. Each correct answer was given a score of 1, while incorrect and "don't know" responses were given a score of 0. iv. Practices: practice was assessed using items focused on the identification of cases in the last 3 months and the services provided by healthcare providers to women. The tool administered immediately after the training was the same as the baseline instrument except that it did not include the question on practice in the last 3 months.
Additional items were added to the 6-month follow-up tool to capture the perceived need for additional training and the facilitators and barriers faced by healthcare providers in responding to women facing violence. Table 2 displays each construct, its domains, the sources for the items and the Cronbach's alpha. The full list of items in each domain is included in Appendix 1.

Data collection

The survey instrument was administered at three time points: before starting the training (i.e. pre-training), immediately after the training (i.e. post-training) and six months after the training (i.e. 6-month follow-up). To increase the participation of providers at the 6-month follow-up assessment, we organized a half-day refresher training, and the tool was administered before commencing the refresher training. The average time taken by providers to complete the tool was 35 to 40 min. A unique ID number specific to each respondent was used to match each of the three assessments. Following paper-based self-completion of the surveys, CEHAT research team members entered the data into OpenClinica, an online data entry system. We checked the accuracy of the entered data by randomly comparing paper surveys with the entered records.

Data analysis

We used SPSS Version 20.0 for statistical analyses [42]. We conducted descriptive analyses to examine and summarize the socio-demographic details of participants: age, number of years of clinical experience, department, and role within the health facility (i.e. doctor, nurse or social worker). As the number of items and the range of responses differed across domains, we rescaled the domain scores so that all domains had the same lower and upper limits (0-10); each domain score was computed out of 10 by dividing the original score by the original range and multiplying by 10. Since there were two domains under each construct, the rescaled domain scores were summed to obtain the construct scores (for example, the clinical knowledge score and the score for ways of asking about violence were added to form the total knowledge score). Thus, the scores for knowledge, attitudes and perceived preparedness each range from 0 to 20. The distribution of the knowledge, attitude and preparedness scores was assessed graphically by plotting histograms, and the Shapiro-Wilk test indicated that all outcome measures differed significantly from the normal distribution. Pair-wise comparisons can lead to inflated Type I error; therefore, the overall effect of training was assessed using multivariable generalized linear models fitted with generalized estimating equations (GEE) and adjusted for age, sex, site and department. The GEE model was also used to present the effect of training on each of the domains. We used an exchangeable working correlation matrix with a robust estimator, assuming homogeneous correlation between repeated measurements of the scores. As the dependent variables were not normally distributed, a gamma distribution with log link and type III analysis was used. A main effects model with a three-level indicator for time (pre-training, post-training and 6-month follow-up) as the independent variable was fitted to estimate the change in the dependent variable (scores) at post-training and at 6-month follow-up.
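The analyses above were run in SPSS Version 20.0; purely as an illustration, the sketch below shows one way the reliability screening, domain rescaling and GEE specification described in this section could be reproduced in Python with pandas and statsmodels. The file name, the column names (provider_id, time, age, sex, site, department and the item and domain columns) and the domain ranges are hypothetical placeholders rather than the study's actual variables, and the gamma/log-link GEE requires strictly positive outcomes, so zero scores are clipped here, a detail the paper does not discuss.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of item columns (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)


def rescale_domain(raw: pd.Series, raw_range: float) -> pd.Series:
    """Rescale a raw domain score to 0-10 by dividing by its range and multiplying by 10."""
    return raw / raw_range * 10


# Long-format data: one row per provider per time point, matched on provider_id.
df = pd.read_csv("hcp_training_long.csv")  # hypothetical file

# Reliability screening on baseline data for one illustrative domain.
baseline = df[df["time"] == "pre"]
alpha_clinical = cronbach_alpha(baseline[["know_item_1", "know_item_2", "know_item_3"]])

# Construct score = sum of two rescaled domains, giving a 0-20 range (ranges illustrative).
df["knowledge"] = (rescale_domain(df["clinical_knowledge_raw"], raw_range=11)
                   + rescale_domain(df["asking_raw"], raw_range=4))

# GEE: gamma family with log link, exchangeable working correlation,
# robust (sandwich) covariance by default, adjusted for age, sex, site and department.
df["knowledge_pos"] = df["knowledge"].clip(lower=0.01)  # gamma requires positive outcomes
model = smf.gee(
    "knowledge_pos ~ C(time, Treatment(reference='pre')) + age + C(sex) + C(site) + C(department)",
    groups="provider_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gamma(link=sm.families.links.Log()),
)
result = model.fit()
print(result.summary())
```

With the pre-training assessment as the reference level for time, the two time coefficients correspond to the post-training and 6-month changes estimated for each construct; the same call can be repeated with the attitude and preparedness scores as outcomes.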
All models were adjusted for the age and sex of the provider, the training site and the department of the provider, as there is evidence in the literature on the role of the age and sex of the healthcare provider in determining the impact of training [12]. As the trainers were site-specific, site was included in the model to assess any difference in training outcomes between sites. The three departments differ from each other in terms of patient load and the health symptoms with which female patients present; thus, department was also included in the GEE model. For analysis purposes, data from HCPs at Miraj and Sangli were pooled: the two hospitals are managed by the same administration, the master trainers (senior health administrators and healthcare providers) rotate between and oversee HCPs in both facilities, and the HCPs from the two hospitals were trained together by the same master trainers. Therefore, in all analyses, outcomes from these two facilities were combined.

Ethical considerations

The project was reviewed and approved by the Institutional Ethics Committee of CEHAT. The project was also approved by the Research Project Review Panel, which reviews all human subjects research conducted or supported by WHO. Permission to conduct the study was also obtained from the Directorate of Medical Education and Research (DMER), Maharashtra, which is the governing body for tertiary teaching hospitals in Maharashtra. Informed consent was obtained from all participants. The consent form was translated into Marathi and informed participants about the measures implemented to ensure confidentiality. The unique IDs used for matching the three rounds of questionnaires were stored separately from the registration lists of the trainings. All methods were carried out in accordance with relevant guidelines and regulations.

Results

Table 3 shows the characteristics of the study population. The assessments before training, immediately after training and 6 months after training were completed by 201 of 220 (91.4%) HCPs; 19 HCPs (8.6%) were lost to follow-up at 6 months. There was no difference in the socio-demographic characteristics of those lost to follow-up compared with the sample retained at follow-up. Transfer of HCPs from one health facility to another was the most common reason for loss to follow-up. Of the 201 providers, 90 were from Aurangabad hospital and 111 were from the Miraj-Sangli hospitals. About 54% of providers were nurses or nursing assistants, 41% were doctors and the remainder were social workers. 70% of HCPs were female and the remainder were male. About 52% of HCPs were 25 to 34 years of age. The largest share of providers (41.3%) were from the Obstetrics and Gynecology department, followed by Medicine (36%). The mean number of years of clinical experience was 11.9 years (SD = 9.7), with a range of less than a year to 32 years. (Notes to Table 3: * A total of n = 13 participants were working in other departments at the time of the training (surgery and psychiatry), but were included in the training as they were nurses who rotated into the relevant departments. ** "Others" include social workers and clinical department helpers.) There was no significant difference between hospitals in the socio-demographic characteristics of HCPs apart from the department of the participants, which can be attributed to the small number of participants in the "other departments" category at both sites.
The on-site trainings covered all doctors and nurses from the three departments included in this study; therefore, the demographic characteristics of the participants are representative of all HCPs working in the three departments.

Change in attitudes of providers

Table 5 shows the change in median scores for providers' attitudes towards the acceptability of violence against women and HCPs' attitudes towards asking women about violence. The median score on attitudes towards the acceptability of violence increased (i.e. views that violence was acceptable decreased) from pre- to post-training, and this improvement was sustained at 6-month follow-up (9.95 vs 10.00). The adjusted GEE estimates revealed a significant change in attitudes towards less acceptability of intimate partner violence from pre- to post-training (p < .001) and at 6-month follow-up (p = .002).

Change in HCPs' perceptions of preparedness

Table 6 shows the change in perceived preparedness of HCPs in terms of individual-level preparedness and system-level support. The median scores for both individual-level preparedness (6.74 vs. 10.00) and system-level support (6.11 vs. 8.14) improved considerably between pre- and post-training. However, a decline in median scores was observed from post-training to 6-month follow-up for both individual (10.00 vs. 8.33) and system-level preparedness (8.14 vs. 7.04). After adjusting for age, sex, site and department, the GEE estimates show a significant increase in individual and system preparedness from pre-training to post-training and to 6-month follow-up.

Change in practice of providers to identify and provide services to women facing violence

Table 7 shows the identification of survivors in the last 3 months by HCPs and the provision of different kinds of support services before training and 6 months later. Six months post-training, 72.1% of providers had identified at least one survivor in the last 3 months, as compared with 48.8% before training (p < .001). This increase in identification was not found to be significant for providers from Aurangabad hospital (24.4% vs. 25.9%, p = .385, n = 90). A highly significant increase in identification was found for female providers (35.5% vs. 52.7%, p < .001, n = 141) and for providers from Miraj-Sangli (24.4% vs. 46.3%, p < .001, n = 101). At 6-month follow-up, a two-fold increase was found in the provision of support services such as providing basic information to women about violence (32.3% vs. 68.7%), discussing options with women (32.8% vs. 70.6%), helping women to develop a safety plan (24.9% vs. 51.2%) and referral to support services (25.4% vs. 58.7%). All of these differences were significant at the p < .001 level. Further, we also analysed the improvement in the practice of those HCPs who reported identifying cases of violence before training (n = 81, Table 8). Amongst these providers, a highly significant (p < .001) improvement was found in the provision of support services such as providing basic information about domestic violence (64.2% vs. 95.1%), offering supportive statements (81.5% vs. 100%), documentation of cases (51.9% vs. 79%) and making external referrals (53.1% vs. 81.5%). Table 9 shows the results of the GEE model on the change in overall scores from pre-training to post-training and from pre-training to 6 months for knowledge, attitudes and practice. A multivariable generalized linear model fitted with GEE was used, with the knowledge, attitude and practice scores as dependent variables and time, department, sex, age and site as independent variables.
Results of generalised estimating equation

The GEE model indicated that the change in scores from pre-training to post-training was significant for knowledge, attitudes and perceived preparedness. The change in scores from pre-training to 6-month follow-up was not found to be significant for attitudes. Our findings indicate that the training intervention improved the knowledge, attitudes and practices of HCPs, with variation in the changes in these domains at different time points. In the unadjusted model, the changes in knowledge, attitudes and perceived preparedness were found to be the same as in the adjusted model. This indicates that age, sex, site and department had no effect on the amount of change in the knowledge, attitudes and perceived preparedness of providers over time.

Discussion

This pilot study reports on the influence of a training intervention on the knowledge, attitudes and skills of HCPs to ask about violence, provide first-line support and enable the provision of social and legal support through referrals. The intervention in this study included both training and system-level changes to create a supportive ecosystem for HCPs to respond to VAW. Various organisational changes, such as establishing protocols, mentoring by senior clinicians and establishing referral linkages, were introduced to enable trained HCPs to respond to violence against women in their clinical practice. Some of these system-level changes were also integrated into the training; for example, delivery of the training by clinicians with managerial responsibilities ensured mentorship. The presence of stakeholders involved in providing external support services helped in building the capacity of HCPs to make external referrals. This study fills an important gap in the literature, as there are few interventions for improving the HCP response to violence against women, and fewer still with an evaluation component [43]. Further, the majority of training interventions that have been assessed have been implemented in North America, with a very limited evidence base from LMICs, particularly in Asia [44]. For example, a recent systematic review of trials of HCP training (comparing interventions to a wait-list or placebo group) to improve the IPV response found that, of the 19 included studies, three quarters were conducted in the USA and none were conducted in Asia [45]. The common outcomes measured by the studies included knowledge, beliefs, self-confidence and skills of healthcare providers, and patient-related outcomes such as women's perceptions of the services provided by HCPs [44]. The findings of the present study indicate a significant increase in overall knowledge, supportive attitudes towards survivors and individual HCP preparedness following training; however, the change in attitudes between pre-training and 6-month follow-up was not significant. This gain in knowledge and skills was also reflected in the significant increase in the proportion of HCPs identifying and responding to cases of violence, as well as in other supportive practices, such as offering validating and supportive statements and talking to women about their needs. This is an important finding because other evaluation studies have reported mixed findings for change in the identification of and response to survivors by HCPs [46]. As the intervention included both training and organisational changes, this study found the largest magnitude of change in perceived preparedness when we compared pre-training, post-training and 6-month follow-up scores.
Further, our study found that the change was retained for knowledge whereas for attitudes and perceived preparedness, the change was not sustained over time. This finding indicates that bringing and sustaining change in attitudes and beliefs of providers requires ongoing reinforcement and further training. Also, our findings indicate that changes in different aspects of attitudes vary. The attitudes of HCP towards acceptability of violence changed between pretraining and post-training, and pre-training and 6 month follow-up. However, attitudes towards the role of HCPs in asking about violence did not change between pretraining and post-training, and pre-training and 6 month follow-up. These findings are consistent with the literature [30,46], and this finding also resonates with the evidence which indicates that consideration of domestic violence as a private matter is a key barrier in establishing response of HCPs to violence [47,48]. The outcomes that we assessed are not clinical outcomes, and therefore we cannot ascertain if the size of the differences in these outcomes represent meaningful improvements in the quality of healthcare provided to women experiencing violence. However, the results presented in Tables 7 and 8 indicate changes in practices in terms of identifying women, providing referral and support services, and assessing women's safety, and represent a substantial shift in practice in the context of the Indian healthcare system, where women experiencing violence usually only receive care for immediate symptoms. In addition, our qualitative findings on HCPs' perceptions of the impact of their participation in the training on their practices will be reported in a subsequent analysis (in preparation). However, we recognize the complexities of HCP behavior change, and that interventions to change HCP practices are non-linear and complex. HCPs are embedded within health systems, which influence the ability of HCPs to implement skills and practices obtained during training, and specific approaches to behavior change, such as modification of peer group norms and expectations, are more effective than others [49][50][51]. The finding that attitudes towards the role of HCPs in asking women about violence did not change in our study is relevant in that these attitudes may continue to inform and influence HCP behavior. As such, future refresher trainings and efforts to reinforce HCPs' quality of practices in response to women experiencing violence should focus on this aspect. Our study showed greater magnitude of improvement in system level preparedness as compared to individual preparedness. This may indicate that system level changes ensured systems' support to HCPs but the individual preparedness which is linked to one's attitude and beliefs showed less improvement. In this study, the intervention not only increased the number of HCPs who inquired about violence but also enhanced the practice of those HCPs who were already doing it before the intervention. There are different factors responsible for the outcomes observed in our study. In addition to changes at system level, there were certain strategies used for rolling out of training. For example the training implemented by senior HCPs with mentoring, administrative and supervisory roles may have shown to HCPs that their managers were committed to addressing this issue. 
The interactive approaches such as role plays, games and clinical vignettes are in line with adult learning principles and known to be important in retaining knowledge and skills as shown in a recent scoping review of education intervention programs for HCPs [44]. Further, a mix of doctors, nurses and social workers were trained together which resulted in increased sense of ownership across all cadres and also disrupted professional hierarchies between doctors and nurses. Also, there was increased acceptance of training among HCPs as these were conducted by peers. These training strategies along with the system level changes might have played a role in the positive outcomes of intervention. The findings of this study should be interpreted in light of certain limitations. Firstly, the study design was pre -post, without a control group. Thus, the changes in outcomes cannot be attributed completely to the intervention. However, given its focus on acceptability and feasibility, we believe that the study design was appropriate at this stage. Secondly, we used self-administered instruments which could have led to social desirability bias in responses, thereby not reflecting true change. Thirdly, our instruments were not previously validated in this context. However, the results presented in this paper are pertaining only to those domains which were found to have medium or high Cronbach's alpha. Despite these limitations, this study provides robust evidence regarding feasibility and acceptability of a training intervention, combined with health systems-level changes, to support improved HCP knowledge, attitudes and practices for women affected by violence. Important strengths of this study include a large sample size, a low dropout rate and a follow up period, albeit short. The scoping review of training programs for HCPs found mean number of participants in studies of 139.5, and 30% drop-out rate in one-fourth of the included studies [44]. Conclusions We found that a training intervention, combined with health-systems level changes, resulted in improvements in knowledge, attitudes and practices of HCPs in tertiary health-care facilities in Maharashtra, India, although changes varied between sub-domains of these constructs. In order to build an effective and sustainable response of healthcare providers to VAW, it is important to introduce system level changes before implementation of the training intervention to create an ecosystem for starting response. The content, design and implementation of the intervention should be evidence-based, implemented within a healthcare setting with potential for systemslevel changes, and include content not only on identification of abuse and response but also address attitudes, myths, and misconceptions about violence against women. Repeated in-service trainings are required to bring and sustain changes in HCPs' attitude and clinical practice. To conclude, training along with system level changes has the potential to strengthen health systems' response to violence against women.
2021-11-02T13:42:09.819Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "cd0dc7a8eccf1f47aebcad7c278506d84090cff5", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-021-12042-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd0dc7a8eccf1f47aebcad7c278506d84090cff5", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
1667116
pes2o/s2orc
v3-fos-license
Deciding if lifestyle is a problem: GP risk assessments or patient evaluations? A conversation analytic study of preventive consultations in general practice Objective. The aim of this study is to analyse the interaction between patients and GPs in preventive consultations with an emphasis on how patients answer GPs’ questions about lifestyle, and the conditions these answers impose on the process of establishing agreement about lifestyle as a problem or not. Design. Six general practitioners (GPs) video-recorded 15 annual preventive consultations. From these, 32 excerpts of discussions about lifestyle were analysed using conversation analysis (CA). Results. GPs used an interview format to assess risk in patients’ lifestyles. In some cases patients adhered to this format and answered the GPs’ questions, but in many cases patients gave what we have termed “anticipatory answers”. These answers indicate that the patients anticipate a response from their GPs that would highlight problems with their lifestyle. Typically, in an anticipatory answer, patients bypass the interview format to give their own evaluation of their lifestyle and GPs accept this evaluation. In cases of “no-problem” answers from patients, GPs usually encouraged patients by adding support for current habits. Conclusion. Patients anticipated that GPs might assess their lifestyles as problematic and they incorporated this possibility into their responses. They thereby controlled the definition of their lifestyle as a problem or not. GPs generally did not use the information provided in these answers as a resource for further discussion, but rather relied on standard interview procedures. Staying within the patients’ frame of reference and using the patients’ anticipatory answers might provide GPs with a better point of departure for discussion regarding lifestyle. Introduction Consultations in general practice are traditionally initiated by patients attending with health problems [1 -5]. However, in preventive work establishing agreement on the existence of a problem is often the fi rst task. Several studies have indicated that such an agreement cannot always be reached. For example Sorjonen et al. found that most patients in acute and follow-up visits in Finnish general practice thought of their lifestyles as non-problematic [6]. The process of collaborative construction of a problem has been described in studies of general practitioners (GPs) argument for change [10]. It is a core idea in MI not to persuade patients to change their lifestyle as this will cause resistance to change [11]. While the MI method explains in detail how to negotiate change with patients [8,10], it does not, to the same extent, describe how to negotiate whether or not certain behaviours should be considered a problem. In Denmark, as in many other countries, a proactive approach to patient lifestyle is developing. Among such initiatives are annual check-ups with preventive objectives for patients with chronic disease, and for patients with a high risk of developing chronic disease. The GP invites a patient to attend a fi xed-appointment preventive consultation (preventive consultation), which has a pre-agreed agenda [12]. In these consultations a risk assessment is mandatory. Lifestyles are aspects of risk and the guideline for preventive control of diabetes, for example, specifi es that assessments and discussions about smoking, diet, and physical activity should be included in these consultations [13]. 
The guideline also states that the evidence for the effect of lifestyle interventions on complications and mortality is scarce but that it is included anyway since it was part of the studies testing the pharmacological treatments recommended [13]. Similar recommendations are outlined in guidelines for other chronic diseases such as chronic obstructive pulmonary disease. Preventive consultations provide a particular frame for discussions of lifestyle, as a lifestyle assessment is a mandatory part of the consultation. The aim of this study was to analyse the interaction between patients and GPs in preventive consultations with an emphasis on how patients answered GPs ' questions about lifestyle, and the conditions these answers imposed on the process of establishing agreement about lifestyle as a problem or not. Material and methods The study was carried out with six GPs in Danish general practice. It was based on 32 samples from 15 video-recorded consultations, which were subsequently transcribed. The samples represented the moments when doctors commenced a new lifestyle discussion, that is, when they initiated inquiries about subjects such as smoking, physical activity, diet, alcohol, and weight. Our analyses concentrated on how patients responded to doctors ' initial inquires, and which conversational consequences applied to the different types of responses. Cases were drawn from fi xed-appointment preventive consultations, most of them annual preventive controls of chronic disease. In these consultations, the GP contract encourages the assessment of patients ' health risks and discussions about lifestyle. They are also recognized by GPs themselves as the type of consultation where lifestyle discussions often take place. GPs who had the highest number of preventive consultations during 2010, according to Danish national registries [14], were invited to participate in our study. The GPs were from two of the fi ve regions in Denmark, covering both urban and rural communities. The patients were 10 women and fi ve men, aged between 43 and 80. The GPs were three women and three men, with varying practice tenure, aged 42 -64 years. Four of the GPs were working in partnership practices and two in cooperative practice. GPs who participated received fi nancial compensation equalling the time spent on preparation for the project. We investigated the data using conversation analysis (CA), which is multidisciplinary and crosses both sociology and linguistics. It is used to study how talk-in-interaction is organized, and how members employ its structures to reach common understandings [e.g. 15 -17]. Several of the important contributions to CA have focused on doctor -patient interaction [e.g. 18 -22]. There is a large body of research on the ways in which general practice consultations predominantly consist of question-andanswer-driven interactions, that is, doctors ask questions, to which patients provide answers [22 -24]. In fact, this conversational organization occasionally makes it troublesome for patients to contribute information to the dialogue that exceeds merely answering questions [25,26]. The CA method implies a commitment to consider the possible signifi cance of the smallest paralinguistic details of conversational contributions. This commitment has led to a rigorous notation standard, among other things, which seeks to depict not only what participants say, but also how they say it [27]. 
Thus, CA researchers transcribe their data rigorously by means of the Jeffersonian principles of notation [28].

• Danish GPs are required to risk assess chronically ill patients and this includes a discussion concerning lifestyles.
• In a substantial minority of cases patients made evaluations of their lifestyle ahead of the GP's lifestyle interview (anticipatory answers).
• By way of conversation analysis the article investigates how GPs proceed with the lifestyle discussion after anticipatory answers.
• To develop a fruitful discussion about lifestyle after anticipatory answers it is recommended that GPs focus more on the patient's frame of reference and less on risk.

Lifestyle interview adherence

Lifestyle discussions were organized in conversational trajectories where the parties addressed different lifestyle issues. Most of these trajectories took the form of questions and answers, and they all started with an interview, led by the doctor, assessing the patient's risk, as in excerpt (1). Patients' responses regarding tobacco consumption in excerpts (1) and (2) illustrate what we term "lifestyle interview adherence". This behaviour entails that patients provide answers that conform with doctors' questions. Also, patients accept and await the trajectories, which are governed by the doctors' questions. This is also the case for the answer concerning tobacco in excerpt (3). The way in which the patient in excerpt (3) designs her response concerning weight, however, does not meet the criteria for lifestyle interview adherence. The response, rather than providing information on "what the weight says", hints at a problematic development ("sort of on the rise"), which anticipates a discussion about the need for lifestyle changes. In cases of lifestyle adherence it is usually the GP who makes an evaluation of the patient's lifestyle as problematic or not. In cases where patients respond with an anticipatory answer, such evaluations are, rather, made by the patients. We investigated this latter type of trajectory further, as it poses a challenge to the GPs' ability to make a risk assessment and eventually establish patients' lifestyles as a problem.

Anticipating issues of problematic lifestyle and possible recommendations

Patients' responses commonly anticipated talk about the problematic nature of their lifestyles, even in cases, such as in excerpt (3), where doctors designed their questions as relatively neutral inquiries. Consider the trajectory commencement of excerpt (4). The patient's response (line 03) is markedly different from what was observed in the previous examples of lifestyle interview adherence. It answers more than the question. The answer consists of two parts: the first part ("well … better") emphasizes an improvement; the second part ("but … yet") admits that the patient still smokes. Together, not least because of the tying conjunction "but", the answer conveys awareness of smoking being a bad habit and a hope to overcome it in the future. Most of the examples so far have, for the sake of comparison, concerned smoking. But the distinction between replies that meet lifestyle interview adherence and replies that anticipate recommendations for lifestyle changes applies equally well to other lifestyle issues. For instance, in excerpt (5):

07. DO: But that's also … the main thing is that you move your body and bicycling will also get your heart rate up and …
08. PA: Yes.
09. DO: Are you active on a daily basis?
Before the doctor has even completed her initial question this patient provides a series of responses, which anticipate either an evaluation of lifestyle as problematic, or advice to change lifestyle. The responses emphasize the patient ' s active lifestyle (line 02 and 06), but also seek to moderate its extent (line 04). Anticipatory answers take different shapes. Most consist of patients ' own evaluations of their lifestyle, as in excerpts (3 -5) above. Some evaluate the lifestyle issue positively, as in (excerpt 5, line 06) " but I consider myself very active " ; others evaluate it as problematic, as in (excerpt 3, line 04) " well it ' s sort of on the rise " . Still other responses describe lifestyle issues as matters that are already taken care of. This is illustrated in excerpt (6): (Excerpt 6) (B3) 01. DO: How about your weight; is it somewhat stable? 02.PA: Actually I ' m in the process of losing weight. 03.DO: Excellent! By answering that she is losing weight (line 02), the patient indicates both that she considers her weight a problem and also that the problem is already being taken care of. Proceeding after anticipatory answers GPs continued the lifestyle interview assessing patients ' risks after the anticipatory answers. Examples are seen in excerpt 3, line 05 where the GP asks how much the patient weighs now; in excerpt 4, line 05 where the GP asks how many cigarettes a day the patient smokes; and in excerpt 5, line 09 where the GP asks about frequency of physical activity. How the GPs proceeded with the lifestyle interview after anticipatory answers depended on whether or not the patients ' anticipatory answers evaluated their lifestyle as problematic. In cases where patients evaluated their lifestyle as problematic, the GPs accepted this evaluation and continued the interview, probing the patient for possible change. Prior to excerpt (7), the patient has explained that he enjoys gardening on a regular basis: This patient replies to the doctor ' s question about his exercise habits (line 01) with an elaborate anticipatory answer (lines 02 -16). In this answer, the patient anticipates that the doctor will assess his exercise habits as insuffi cient by " confessing " how relatively little he bicycles (lines 04 -05); by excusing himself with reference to bad weather (lines 07 -09); and, unsolicited, by adding that he and his wife are about to engage in their wintertime walks. The doctor continues the lifestyle interview asking the patient specifi cally how often (line 17) and for how long (line 19) he walks. After this inquiry the doctor asks a confrontational question about why the patient does not walk every day (line 25). When the patient admits that nothing prevents him (lines 26 -29), the doctor, in turn, is able to pose a fi nal question, which comes very close to explicit advice regarding walking every day (line 30). In cases where patients evaluated their lifestyle as unproblematic, on the other hand, the GPs did not probe for change, as in excerpt (7), but rather offered support for the current habit. The statement by the GP in excerpt 5 (lines 07 -08) is typical of these trajectories. The GP supports the patient ' s view that her level of physical activity is suffi cient, while at the same time incorporating the supportive information that what counts in physical activity is to get the heart rate up. 
Another example of support for a healthy habit is seen in the continuation of excerpt (6) After the patient ' s anticipatory answer that she is in the process of losing weight (line 02), the GP demonstrates appreciation in several steps telling the patient that this is great (line 03 and 05) and even marvellous (line 11). The GP also adds support by stressing the medical benefi ts of weight loss (lines 22 and 24). This kind of support was given in almost all cases of a " no-problem " answer from patients. The GPs did not challenge patients ' own evaluation of their lifestyle as unproblematic. In one case, however, the question was reintroduced after poor test results indicated that there may, after all, be a problem. Principal fi ndings GPs conducted lifestyle interviews to establish whether patients ' lifestyles posed a health risk. The interview questions were usually answered by the patients, but in a substantial minority of cases patients ' self-evaluations of lifestyle were added in anticipation of advice or recommendations from the GP. In some cases, such self-evaluations were given instead of answers conforming to the GP ' s question. In cases where the lifestyle issue was considered problematic by the patient, the GP probed for possible change; and in cases where the lifestyle issue was considered unproblematic by the patient, the GP supported current habits. GPs usually did not challenge the patients ' own evaluation of their lifestyle as unproblematic even though they generally asked further questions about quantity and frequency of habits. Strengths and weaknesses of the study Recruiting among GPs with the most activity in prevention gave us rich material with an abundance of lifestyle discussions. GPs with a low level of activity in lifestyle discussions were not included in the study and their preventive consultations may differ from those represented here. It is possible that GPs may have conducted more lifestyle discussions than usual to satisfy the researcher and enrich the recordings. However, there is documented research indicating that recording has very little effect on the content of consultations [29], suggesting that this would probably not be an important factor. The preventive annual controls and discussions about lifestyle we investigated shared some aspects of structure and organization. The aspects shared by the practices in our study are expected to apply to more practices due to common institutional goals and shared competences of interaction. There were also many differences and variations. By focusing on the aspects that the consultations had in common, we did not address all the variations in style expressed by the GPs who participated. Findings in relation to other studies By putting their evaluations fi rst, the patients ' answers determined whether or not their lifestyle was considered a problem, and also the ongoing trajectory of the lifestyle interview. It is clear from previous research that establishing a problem in preventive work is often done in a stepwise fashion in collaboration between the patient and the professional [7,9]. It seems, however, that when patients put their evaluations fi rst, they challenge this stepwise process of recognizing that a problem may exist. Furthermore, patients often demonstrated knowledge of lifestyle issues through their anticipatory answers. Previous studies in CA have described how people pitch what they say to meet the knowledge they believe the people they are talking to already have [30]. 
This has become established as a norm: that is, people do not tell others what they believe they already know [31]. Given this norm, the patients who demonstrated their knowledge of lifestyle issues did not invite GPs to provide more information about lifestyle. GPs supported patients ' habits from a medical perspective if the patients themselves considered their lifestyles unproblematic. In Sorjonen ' s study [6] GPs also worked to support a " no-problem " evaluation made by patients. Unlike the GPs in Sorjonen's study, the GPs in our study usually added support or a recommendation for current habits. This difference could be explained by the fact that Sorjonen was investigating acute consultations, whereas we focused on preventive consultations where lifestyle issues are differently framed. The frame of the preventive consultation might also explain the difference between the fi ndings in our investigation and those in the work by Stivers and Heritage [26]. They described the phenomenon that extended answers, which demonstrate the patient ' s knowledge of the appropriate course of action, pre-empted the GP from pursuing a lifestyle issue. Their study was based on extended medical interviews, which do not have the emphasis on discussion of lifestyle that the preventive consultations in our study have. We found that GPs maintained the interview format asking " how much " , " how often " , and " how far " even after patients ' anticipatory answers. Determining lifestyle risks and reducing risk are formalized aims of preventive consultations [12]. The aim of reducing risk determines what it is necessary for the doctor to know about the problem and leaves other aspects untouched [32]. From an epidemiological perspective lifestyles are correlated with risks of morbidity and mortality. Such classifi cations provide doctors with knowledge about illness but also with specifi c perspectives on people [33]. Quantity and frequency are aspects of lifestyle that are relevant to risk, for example too much food or infrequent physical activity. The general perspective of risk, however, does not include the social context of individual patients. Inquiry that addresses aspects of patients ' answers other than quantity, frequency, or distance might be more fruitful in creating a discussion about lifestyle [11]. A previous investigation of lifestyle counselling in general practice showed that " change talk " was best produced when the nurse stayed within the patient ' s frame of reference [34]. Anticipatory answers could be seen as a contribution by patients to advancing the activity of lifestyle discussions. Previous studies of conversation describe how answers may, in some cases, not conform to the questions but still contribute to the progressivity of the inferred overall activity [35]. In our study, the overall activity is the process of assessing lifestyle, evaluating it as a problem or not, and discussing problematic lifestyles in terms of changes. By giving their evaluations fi rst, the patients make GPs ' assessments of their lifestyles irrelevant and leave the GPs without the knowledge they need to independently evaluate the lifestyle in question. Treating patients ' anticipatory answers as valid contributions to the progress of the discussion, instead of insisting on interviews about risk, might open new possibilities for discussions of lifestyle. Our study shows that patients anticipate advice concerning their lifestyle in preventive consultations. 
The anticipation of advice interferes with the application of MI in general practice consultations. A key aspect of MI is to avoid raising patients ' resistance to change (11). It seems that the orientation of the institution of general practice towards a healthy lifestyle is established to an extent where advice is an expected outcome. The anticipation of advice has also been described in routine consultations [36]. In this respect, the institution itself may act to raise patients ' resistance even in cases when GPs do not give explicit advice. Conclusion GPs conducted lifestyle interviews to determine patients ' risk. Patients often anticipated that the GP would consider their lifestyle problematic and provided their own evaluations of whether or not lifestyle was a problem. In cases where the lifestyle issue was considered problematic by the patients, the GPs probed for possible change; and in cases where the lifestyle was considered unproblematic by the patient, the GP supported current habits. GPs usually did not use the substance of patients ' initial answers as a resource for furthering this talk but rather relied on standard interview procedures. To develop a more fruitful discussion about lifestyle we recommend that GPs explore other strategies than relying on questions about frequency, quantity, and distance. Staying within the patient ' s frame of reference and developing the conversation on the information the patient provides about relations, everyday life, and experience in anticipatory answers may be one strategy to explore. Nordisk Foundation and the Committee of Multipractice Studies in General Practice for fi nancial support and Julie H ø gsgaard Andersen for valuable comments on the manuscript. Ethical approval According to the principles of Danish ethics committees, qualitative studies are not evaluated. Written informed consent was obtained from patients participating in the study.
2016-05-12T22:15:10.714Z
2015-07-01T00:00:00.000
{ "year": 2015, "sha1": "7f71312cd05c3ce936825a59611d5faceb029361", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3109/02813432.2015.1078564?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "7f71312cd05c3ce936825a59611d5faceb029361", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
14541389
pes2o/s2orc
v3-fos-license
A regular version of Smilansky model

We discuss a modification of the Smilansky model in which the singular potential 'channel' is replaced by a regular potential, unbounded from below, which shrinks as it becomes deeper. We demonstrate that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and we determine the critical value above which a new spectral branch opens. The result is generalized to situations with multiple potential 'channels'.

Introduction

In the seminal paper [1] Uzy Smilansky discussed a simple example of quantum dynamics which could exhibit a behavior one can regard as irreversible. The model in which it can be demonstrated allows for interpretation in different ways: as a one-dimensional system coupled to a heat bath, as a particular quantum graph, or as a two-dimensional quantum system described by the Hamiltonian (1.1). It was argued in [1] that the behavior of the system depends crucially on the coupling parameter: if |λ| > 1 the particle can escape to infinity along the singular 'channel' in the y direction. The claim can be made mathematically rigorous in terms of the spectral properties of such an operator: one can prove that for |λ| exceeding the critical value the operator has an additional branch of absolutely continuous spectrum which is not bounded from below [2]. The model was subsequently generalized to the situation when one has more than one singular 'channel' - cf. [3,4] - and its further properties were studied, in particular the discrete spectrum in the subcritical case. It turned out that there is also another motivation to investigate such systems. Recently Guarneri has used the model - or rather its modification in which the motion in the x direction is restricted to a finite interval with periodic boundary conditions - to describe quantum measurements [5]; he studied the time evolution in such a situation, identifying the escape along a particular 'channel' with reduction of the wave packet. The paper [5] concludes by expressing the hope that 'similar behavior may be reproducible with smoother interaction potentials and also in purely classical models'. The aim of the present paper is to demonstrate that this is indeed the case. We are going to investigate a model in which the δ interaction with y-dependent strength is replaced by a smooth potential channel of increasing depth, and to show that it exhibits an analogous spectral transition as the coupling parameter exceeds a critical value. Replacing the δ interaction by a regular potential, however, requires modifications: in particular, the coupling cannot be linear in y and the profile of the channel has to change with y; in this respect our present problem is similar to another model we have investigated recently [6]. To understand the reason one should realize that the essence of the effect lies in the fact that far from the x-axis the variables in the solution to the Schrödinger equation effectively decouple - one can regard it as a sort of adiabatic approximation - and the oscillator potential competes with the principal eigenvalue of the 'transverse' part of the operator, which in the singular case equals $\frac{1}{4}\lambda^2 y^2$. If we want to approximate the δ interaction by a family of shrinking potentials in the usual way [7, Sec. I.3.2] we have to match the integral of the potential with the δ coupling constant, $\int_{\mathbb{R}} U(x,y)\,\mathrm{d}x \sim y$, which can be achieved, e.g., by choosing $U(x,y) = \lambda y^2 V(xy)$ for a fixed function V.
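To make the matching requirement concrete, the following short computation, which uses nothing beyond the definitions just given, checks that the proposed choice of U reproduces an integrated coupling growing linearly in |y|:

\[
\int_{\mathbb{R}} U(x,y)\,\mathrm{d}x
= \lambda y^{2}\int_{\mathbb{R}} V(xy)\,\mathrm{d}x
= \lambda y^{2}\,\frac{1}{|y|}\int_{\mathbb{R}} V(t)\,\mathrm{d}t
= \lambda\,|y| \int_{\mathbb{R}} V(t)\,\mathrm{d}t ,
\qquad y \neq 0,
\]

after the substitution $t = xy$. The integrated strength thus grows linearly in $|y|$, mimicking the $y$-dependent strength of the δ interaction, while the effective width of the channel shrinks like $|y|^{-1}$, which is the sense in which the potential 'shrinks as it becomes deeper'.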
Inspired by these considerations we are going to investigate the model described by the partial differential operator on L 2 (R 2 ) acting as where ω, a are positive constants, χ {|y|≤a} is the indicator function of the interval (−a, a), and the potential V with supp V ⊂ [−a, a] is a nonnegative function with bounded first derivative. By Faris-Lavine theorem [8, Thms. X.28 and X.38] the above operator is essentially self-adjoint on C ∞ 0 (R 2 ); the same is true for its generalization, with a finite number of potential channels, where the functions V j are positive with bounded first derivative, with the supports contained in the intervals (b j − a j , b j + a j ) and such that supp V j ∩ supp V k = ∅ holds for j = k. Our aim in the present paper is to demonstrate existence of a critical coupling separating two different situations: below it the spectrum is bounded from below while above it covers the whole real line. Discussion of further properties such as the discrete spectrum in the subcritical case or time evolution of wave packets is postponed to a later paper. We note that the results discussed here depend substantially on the asymptotic behavior of the potential channels and would not change if the potential is modified in the vicinity of the x-axis, for instance, by replacing the cut-off functions in (1.2) and (1.3) with χ |y|≥a and χ |y|≥a j , respectively. It is also not important that in contrast to the original model with the Hamiltonian (1.1) we assume that the potential channels depth increases in both directions parallel to the y-axis. Subcritical case To state the result we will employ a one-dimensional comparison operator on L 2 (R) with the domain H 2 (R); as long as there is no danger of misunderstanding we refrain from labeling the symbol by ω, λ and V . The important property will be the sign of its spectral threshold; since V is supposed to be nonnegative, the latter is a monotonous function of λ and there is a λ crit > 0 at which the sign changes. We shall first focus on the subcritical coupling case. Theorem 2.1. Under the stated assumption, the spectrum of operator H given by (1.2) is bounded from below provided the operator L is positive. Proof. It is obvious it sufficient to prove the claim for λ = 1. We employ Neumann bracketing. Let h n and h n be the restrictions of operator H to the strips G n = R × {y : ln n < y ≤ ln(n + 1)} and G n = R × {y : − ln(n + 1) < y ≤ − ln n} , n = 1, 2, . . ., with Neumann boundary conditions. Then we have the inequality and to prove the claim we have to demonstrate that the sets σ(h n ) and σ( h n ) have a uniform lower bound as n → ∞. Using the fact that the function V has a bounded derivative we find for any (x, y) ∈ G n , and consequently Similarly, we have for for any (x, y) ∈ G n . These relations yield asymptotic inequalities in which the Neumann operators l n := − ∂ 2 ∂x 2 − ∂ 2 ∂y 2 + ω 2 ln 2 n − ln 2 n V (x ln n) on G n and l n := − ∂ 2 ∂x 2 − ∂ 2 ∂y 2 + ω 2 ln 2 n − ln 2 n V (−x ln n) on G n have separated variables. Since the minimal eigenvalue of − d 2 dy 2 on the interval with Neumann boundary conditions defined on intervals (ln n < y ≤ ln(n + 1)), n = 1, 2, . . . , acts on L 2 (R). Note that the cut-off function χ {|x|≤a} in (1.2) plays no role in the asymptotic estimate as it affects a finite number of terms only. By the change of variable x = t ln n the last operator is unitarily equivalent to ln 2 n L which is positive as long as L is positive. 
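Equation (2.1) defining the comparison operator is not displayed above; judging from the explicit form quoted in Section 4 below, it reads, up to the labelling by the coupling constant,

\[
L = -\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}} + \omega^{2} - \lambda V(x)
\quad \text{on } L^{2}(\mathbb{R}), \qquad \operatorname{dom} L = H^{2}(\mathbb{R}).
\]

The scaling step that closes the argument above can then be spelled out as a sketch (with λ = 1 as in the proof): the x-part of $l_n$ transforms under the substitution $t = x \ln n$ as

\[
-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}} + \omega^{2}\ln^{2} n - \ln^{2} n\, V(x\ln n)
\;\longmapsto\;
\ln^{2} n \left( -\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} + \omega^{2} - V(t) \right)
= \ln^{2} n\, L ,
\]

the substitution being implemented by the unitary operator $(\mathcal{U}_n\psi)(t) = (\ln n)^{-1/2}\,\psi(t/\ln n)$. Positivity of $L$ therefore carries over, up to the positive factor $\ln^{2} n$, to the operators $l_n$, which is the property used above.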
In the same way one proves that l n is positive under the assumption of the theorem; this in combination with (2.3) concludes the proof. By a straightforward modification of the proof we get the following claim. with Dirichlet (respectively, Neumann or periodic) boundary conditions. Supercritical case Let us turn to the case when the 'escape to infinity' is possible. Proof. To prove that any real number µ belongs to essential spectrum of operator H we are going to use Weyl's criterion [8, Thm. VII.12]: we have to find a sequence {ψ k } ∞ k=1 ⊂ D(H) such that ψ k = 1 which contains no convergent subsequence and holds. Since the claim is invariant under scaling transformations we can suppose without loss of generality that inf σ(L) = −1. The spectral threshold is easily seen to be a simple isolated eigenvalue; we denote the corresponding normalized eigenfunction of L by h. Our aim is to show first that 0 ∈ σ ess (H). We fix a positive ε and choose a natural number k = k(ε) with which we associate a function χ k ⊂ C 2 0 (1, k) satisfying the following conditions To give an example, consider the functioñ , where g k and q k are interpolating functions chosen in such a way thatχ k ∈ . This function satisfies by definition the first condition of (3.1) and one can check that it also satisfies the second one provided k is sufficiently large; this follows from the fact that Such functions allow us to construct the Weyl sequence we seek. Given a function χ k with the described properties, we define where f (t) := − i 2 t 2 h(t), t ∈ R, and n k ∈ N is a positive integer to chosen later. For the moment we just note that choosing n k large enough for a given k one can achieve that ψ k L 2 (R 2 ) ≥ 1 2 as the following estimates show, note that since the potential V has a compact support by assumption, the ground state eigenfunction h decays exponentially as |x| → ∞, hence the first integral in the last expression converges. Our next aim is to show that Hψ k 2 L 2 (R 2 ) < cε with a fixed c holds for k = k(ε). By a straightforward calculation one gets n k h ′ (xy) e iy 2 /2 χ ′ k y n k − y 2 h(xy) e iy 2 /2 χ k y n k +ih(xy)e iy 2 /2 χ k y n k + 2 iy n k h(xy) e iy 2 /2 χ ′ k y n k We want to show that choosing n k sufficiently large one can make the terms at the right hand side of (3.5) as small as we wish. Changing the integration variables, we get the following estimate, where the last integral again converges from the reason described above. In the same way we establish the remaining inequalities which we need to demonstrate our claim: Consequently, choosing n k large enough we can achieve that the sum of all the integrals at the left-hand sides of the above inequalities is less than ε. Using the fact that Lh = −h and applying the Cauchy inequality, the above result implies It is easy to check that −ih(t) = 0 and the last integral in the above estimate vanishes, which gives To complete the proof we fix a sequence {ε j } ∞ j=1 such that ε j ց 0 holds as j → ∞ and to any j we construct a function ψ k(ε j ) with the corresponding numbers chosen in such a way that n k(ε j ) > k(ε j−1 )n k(ε j−1 ) . The norms of Hψ k(ε j ) satisfy inequality which (3.6) with 9ε j on the right-hand side, and since the supports of ψ k(ε j ) , j = 1, 2, . . . , do not intersect each other by construction, their sequence converges weakly to zero. 
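For orientation, the trial functions employed in this argument can be summarized as follows; the expression below is reconstructed from the explicit interval-case formula given in the next section and from the derivative expansion above, so it should be read as a sketch rather than a verbatim restatement of (3.2). For the zero-energy case the ansatz is

\[
\psi_{k}(x,y) \;=\; \Big( h(xy) + \frac{f(xy)}{y^{2}} \Big)\,\mathrm{e}^{\mathrm{i}y^{2}/2}\,
\chi_{k}\!\Big(\frac{y}{n_{k}}\Big),
\qquad f(t) := -\tfrac{\mathrm{i}}{2}\, t^{2} h(t),
\]

so the correction term equals $-\tfrac{\mathrm{i}}{2}\,x^{2}h(xy)\,\mathrm{e}^{\mathrm{i}y^{2}/2}\chi_{k}(y/n_{k})$; its role is to cancel the cross term proportional to $xy\,h'(xy)$ produced when $-\partial^{2}/\partial y^{2}$ acts on the leading term. For a nonzero energy $\mu$ the phase $y^{2}/2$ is replaced by $\epsilon_{\mu}(y) = \int_{\sqrt{|\mu|}}^{y}\sqrt{t^{2}+\mu}\;\mathrm{d}t$, which reduces to $y^{2}/2$ for $\mu = 0$.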
This yields the sought Weyl sequence for zero energy; for any nonzero real number µ we use the same procedure replacing the above ψ k with where ǫ µ (y) := y √ |µ| t 2 + µ dt, and furthermore, the functions f, χ k defined in the same way as above. Intervals and multiple channels Let us look next how the above result changes if the motion in the x direction is restricted. We have the following result: Let H be the operator on L 2 (−c, c) ⊗ L 2 (R) for some c > 0 given by the differential expression (1.2) with Dirichlet condition at x = ±c and denote by L the corresponding Dirichlet operator (2.1) on L 2 (−c, c). If the spectral threshold of L is negative, the spectrum of H covers the whole real axis. Proof. Without loss of generality we may suppose that c = 1. We shall apply again Weyl's criterion modifying the argument of the previous section. By Dirichlet bracketing, one has that L ≤ ⊕ 3 k=1 L k , where L is the original comparison operator (2.1), L = − d 2 dx 2 + ω 2 − V on L 2 (R), while L 1 and L j , j = 2, 3, are given by the same differential expression on L 2 (−1, 1) and L 2 (−∞, −1), L 2 (1, ∞), respectively. Thus under the assumption the spectral threshold of L is negative, and without loss of generality we may suppose that its ground state satisfies Lh = −h with h = 1 and show that 0 ∈ σ ess (H). The functions (3.2) are now changed as follows, with φ ∈ C 2 0 (−1, 1) such that φ(x) = 1 holds for |x| ≤ 1 2 , while the numbers k = k(ε), n k ∈ N and functions χ k , f are the same as before. which means that ψ k L 2 (R 2 ) ≥ 1 2 − 2 √ ε holds for n k large enough; our aim is to show that Hψ k 2 L 2 (R 2 ) < dε with a fixed d > 0. Let us first compute the partial derivatives and −f (xy) e iy 2 /2 χ k y n k φ(x) + i y 2 f (xy) e iy 2 /2 χ k y n k φ(x) Using the exponential decay of h and the fact that φ is constant on [−1/2, 1/2] we find that for all sufficiently large n k we have in a similar way, As for the remaining term in the partial derivative expressions, we simply repeat our calculations from previous section. In this way we are able to conclude that for large enough k, and respectively n k we have −ω 2 f (xy)χ k y n k + V (xy)f (xy)χ k y n k 2 dx dy + ε . Using the assumption about the ground state of L, the last equation implies Using the fact that f (t) = − i 2 t 2 h(t) we conclude in the same way as in the previous section that the right-hand side of the last inequality can be estimated by 9 φ 2 L ∞ (R) ε. The rest of the proof follows the same routine. We pick a sequence {ε j } ∞ j=1 such that ε j ց 0 holds as j → ∞ and to any j we construct a function ψ k(ε j ) with the corresponding numbers chosen in such a way that n k(ε j ) > k(ε j−1 )n k(ε j−1 ) . The norms of Hψ k(ε j ) satisfy inequality which (3.6) with 9 φ 2 L ∞ (R) ε j on the right-hand side, and the sequence {ψ k(ε j ) } ∞ j=1 converges weakly to zero by construction, their sequence converges weakly to zero. This proves that 0 ∈ σ ess (H); for any nonzero real number µ we proceed in the same way replacing the above ψ k with ψ k (x, y) = h(xy) e iǫµ(y) χ k y n k φ(x) + f (xy) y 2 e iǫµ(y) χ k y n k φ(x) , where ǫ µ (y) := y √ |µ| t 2 + µ dt, and furthermore, the functions f, χ k , φ defined in the same way as above. Observing the domains of the quadratic form associated with such operators we can extend the result in the following way: The result also allows us to answer the question about spectral transition for the model with multiple singular channels. 
Let H be the operator (1.3) with the potentials satisfying the stated assumptions, namely the functions V_j are positive with bounded first derivative and supp V_j ∩ supp V_k = ∅ holds for j ≠ k. Denote by L_j the operator (2.1) on L²(R) with the potential V_j and set t_V := min_j inf σ(L_j). Then H is bounded from below if and only if t_V ≥ 0; in the opposite case its spectrum covers the whole real axis.

Proof. The claim follows by bracketing. By assumption we can choose points x_j with x_{j-1} < v_j^- ≤ v_j^+ < x_j, where v_j^- := inf supp V_j and v_j^+ := sup supp V_j, and impose additional Neumann and Dirichlet boundary conditions at them. The spectrum in the intervals (−∞, x_0) and (x_n, ∞) is found trivially; to the other components of the direct sum obtained in this way we apply Corollary 2.2 and Theorem 4.1, respectively.
2013-08-20T07:58:06.000Z
2013-08-20T00:00:00.000
{ "year": 2013, "sha1": "2eda9c41a1d74c4f0f8aa25eb7066d273dc7de4d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1308.4249", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "04793c4540dfbfae8d546d54dc91cea0a2648d0f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
6783668
pes2o/s2orc
v3-fos-license
TREATMENT OF MELASMA WITH GLYCOLIC VERSUS TRICHLOROACETIC ACID PEEL : COMPARISON OF CLINICAL EFFICACY Melasma is one of the most common, therapy-resistant forms of acquired hyperpigmentation. The aim of the present study was to assess the efficacy and side effects of chemical peels with 35% glycolic and 15% trichloroacetic acid (TCA) in conjunction with 20% azelaic acid cream in the treatment of melasma. Twenty-six women aged 22-54 years with different forms of melasma have been treated. Six of them were with phototype II, 11 with phototype III and 9 with phototype IV. Disease severity was assessed at the beginning and at the end of therapy according to the Melasma Area and Severity Index (MASI). Patients were randomly divided in two groups – Group I (n=12) treated with 35% glycolic acid and Group II (n=14) treated with 15% TCA. A significant reduction in MASI values after therapy was observed in all patients without significant difference between Group I and Group II (t=0,12; ð>0,05). No statistical difference was established among final MASI values of women with phototypes II, III and IV (t=0,25; ð>0,05). Side effects were light and negligible. Therapy was positively assessed by the patients. In conclusion, chemical peels with 15% TCA and 35% glycolic acid in conjunction with 20% azelaic acid reduce significantly MASI values after therapy and are equally effective in the treatment of melasma. INTRODUCTION Melasma is an acquired hyperpigmentation of the face affecting predominantly women.Multiple etiologic factors have been implicated: high estrogen states (pregnancy, oral contraceptives), genetic factors, cosmetics and autoimmune thyroid disease.Sunlight exposure appears to be essential for its development. Conventional therapy for melasma consists of keratolytic (tretinoin, resorcin, glycolic and trichloroacetic acids etc) and depigmenting agents (hydroquinone, kojic and azelaic acids).It has been established that chemical peels potentiate the effect of the depigmenting agents and reduce significantly the Melasma Area and Severity Index (MASI) (3,4,5,6). AIM The aim of the present study was to assess and compare the efficacy and side effects of chemical peels with 35% glycolic and 15% trichloroacetic acids (TCA) in conjunction with 20% azelaic acid cream in the treatment of melasma. PATIENTS AND METHODS PATIENTS Twenty-six women aged 22-54 years (mean 25) were enrolled in the study.The pattern of melasma was as follows -six patients with centrofacial, four with mandibular, four with malar and twelve with mixed melanosis.The mean duration of the disease was 10,6 years.Six women had Fitzpatrick skin type II, 11 were with skin type III and 9 with skin type IV.Thirteen had had previous pregnancy, 11 had received oral contraceptives and 2 had been on estrogen replacement therapy.Fifty percent of the patients used no photoprotection outdoors.Ten women had undergone previous treatment with other agents with different, but as a whole poor response.Nursing and pregnant patients as well as those who had conducted depigmenting therapy during the previous three months were excluded from the study.According to their birth date patients were randomly allocated in two groups -Group I (n=12) treated with 35% glycolic acid peel and Group II (n=10) treated with 15% TCA peel. METHODS Patients were pretreated with tretinoin (Acnederm gel 0,05%) for two weeks.A series of four peels spaced 15 days apart was applied to each patient. 
The face was first treated with a mild cleanser and water and prepared with a pre-peel toner. TCA was applied with two cotton-tipped applicators. A hydrating mask was spread on the whole face after the appearance of even pinkish-white frosting. Glycolic acid was applied with a soft fan-like brush. The peeling solution was neutralized and removed with water after the development of slight erythema and/or frosting.

After the peel the patients were directed to use emollients in unlimited quantities and broad-spectrum sunscreens. As soon as they healed, they started application of 20% azelaic acid cream (Skinoren, Schering) in conjunction with sunscreens and continued applying them after the end of the treatment course.

Assessment of therapeutic efficacy
The same investigator evaluated all patients. This was performed before and after treatment and six months after the end of the therapeutic course. Melasma severity was scored using the MASI (2). In this system the face is divided into four areas: forehead, right malar, left malar and chin, which correspond respectively to 30%, 30%, 30% and 10% of total face area. The melasma in each of these areas was graded on three variables: percentage of total area involved on a scale from 0 (no involvement) to 6 (90-100% involvement); darkness on a scale from 0 (absent) to 4 (severe); homogeneity on a scale from 0 (minimal) to 4 (maximum). The MASI was then calculated by the following equation: MASI = 0.3(DF+HF)AF + 0.3(DMR+HMR)AMR + 0.3(DML+HML)AML + 0.1(DC+HC)AC, where D is darkness, H is homogeneity, A is area, F is forehead, MR is right malar, ML is left malar, C is chin, and the values 0.3 and 0.1 are the respective percentages of total facial area.

At the end of the treatment patients were asked to give their subjective assessment of their clinical response to the peels.

Statistical methods
Statistical analysis was performed with Student's t-test, comparing MASI values before and after treatment and among patients with phototypes II, III and IV.

Patients' subjective assessment
After treatment patients were asked to evaluate the discomfort from the two different peeling solutions. They found that the TCA peel caused more discomfort: slight pain and strong stinging during the application, and excessive desquamation during the next 4-5 days, which interfered with their daily activities. The glycolic acid procedure was associated with stinging and nipping, which were most pronounced during the first procedure.

Sixteen of the patients (8 from Group I and 8 from Group II) assessed therapeutic efficacy as greater than 90% improvement, 8 (6 from Group I and 2 from Group II) as greater than 50% improvement, and 2 (Group I) as greater than 30% improvement.

Adverse reactions
Adverse reactions were observed in eight patients from Group I and included persisting post-peel erythema (on the cheeks, chin and around the nose), which was treated with moderately potent topical corticosteroids. In two patients crusting developed as a result of a deeper penetration of the solution. In six women from Group II postlesional hyperpigmentation was observed.

Long-term follow-up
Seventeen (65%) of the patients were followed up six months after the treatment. Only the ten of them who continued topical therapy with sunscreens and azelaic acid maintained improvement. The others experienced relapse, although they were still improved over the pretreatment measurements.
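The MASI equation quoted above is simple arithmetic and can be illustrated with a short script. The function and the example scores below are our own hypothetical illustrations, not data from the study; only the region weights and scoring ranges follow the description in the text.

```python
# Minimal sketch of the MASI calculation described above.
# Region weights follow the quoted equation: forehead, right malar and
# left malar each count 0.3 of the facial area, the chin counts 0.1.
REGION_WEIGHTS = {"forehead": 0.3, "right_malar": 0.3, "left_malar": 0.3, "chin": 0.1}

def masi(scores):
    """scores: dict mapping region -> (area A [0-6], darkness D [0-4], homogeneity H [0-4])."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        area, darkness, homogeneity = scores[region]
        total += weight * (darkness + homogeneity) * area
    return total

# Hypothetical pre-treatment scores for one patient (not data from the study).
example = {
    "forehead":    (3, 2, 2),
    "right_malar": (4, 3, 2),
    "left_malar":  (4, 3, 3),
    "chin":        (1, 1, 1),
}
print(round(masi(example), 1))  # 17.0 for this made-up example
```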
DISCUSSION Melasma is a serious medical and esthetic problem, especially in dark-skinned people.Despite the impressive number of available therapeutic agents treatment results are often disappointing, as the condition usually recurs.The principle rules in the treatment of melasma include avoidance of excessive sun exposure, retardation of melanocyte proliferation, inhibition of melanosome formation and promotion of melanosome degradation (6).This could be achieved by regular use of depigmenting agents and sunscreens with or without keratolytics. Superficial and medium-depth chemical peels are recommended for the treatment of melasma, mainly in fairskinned individuals.People with higher phototype are usually resistant to therapy and therapeutic results are unsatisfactory (5).However, this was not observed in our patients probably because of the small number of women with phototype IV.Chemical peels act by increasing the penetration of medical therapy, not only by "peeling off" the pigment (3).This was confirmed in the study conducted by Sarkar R et al (5) in two groups of Indian patients.The first group was treated with 30 and 40% glycolic acid peels and a topical regimen of a modified Kligman formula (0,05% tretinoin, 2% hydroquinone and 1% hydrocortisone).The other group received the topical regimen alone.After a total of six peels a significant decrease in MASI values was established in both groups (p<0,001).The women who received the glycolic acid peel showed a statistically REFERENCES: The combination of glycolic acid peels with a topical regimen in the treatment of melasma in dark-skinned patients: a comparative study.Dermatol Surg 2002; 28: 828-832. significant trend toward a more rapid and greater improvement (p<0,001). Azelaic acid is a naturally occurring, straight-chained, saturated dicarboxylic acid that acts as a competitive inhibitor of tyrosinase and interferes directly with melanin biosynthesis.Various studies report "good" to"excellent" results in 63-80% of the patients with melanosis after 6 months of treatment with 20% azelaic acid cream in conjunction with broad-spectrum sunscreens (1).Azelaic acid has practically no effect on normal melanocytes and its long-term use has not been associated with ochronosis.Such changes were not observed in our patients also. The results of the present study demonstrate that chemical peels with 35% glycolic and 15% TCA in conjunction with azelaic acid and tretinoin are equally effective in the treatment of melasma and are positively accepted by the patients.This was confirmed by the fact that 16 (62%) of them assessed therapeutic efficacy as excellent (greater than 90% improvement) and 8 (31%) as good (greater than 50% improvement).Side effects were light and negligible except for the postlesional hyperpigmentation, which disappeared in about 4 weeks.It developed most often around the mouth and on the chin in TCA-treated patients probably as a result of the premature desquamation of the epidermis in these regions due to the active contraction of the muscles during speaking and eating. The long-term follow-up of the patients demonstrated that therapeutic results persist only in those of them, who continued the topical application of azelaic acid and broadspectrum sunscreens.This confirms the necessity of a constant maintenance therapy of melasma -an obligatory condition for the achievement of long-lasting therapeutic results (4).
2017-08-15T00:45:16.899Z
2012-04-11T00:00:00.000
{ "year": 2012, "sha1": "cc6d7a127cd7d6ab5ad9121aa1208df81f967fe0", "oa_license": "CCBYSA", "oa_url": "http://www.journal-imab-bg.org/statii/39-41_b1-04.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cc6d7a127cd7d6ab5ad9121aa1208df81f967fe0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
92133136
pes2o/s2orc
v3-fos-license
Metabonomics of mice intestine in Codonopsis foetens induced apoptosis of intestine cancer cells Intestinal cancer is a disease with high morbidity and high mortality in China. Previous studies have shown that Codonopsis foetens can inhibit cellular autophagy and promote the apoptosis of intestine cancer cells. Based on metabolomics method coupled with liquid chromatography-mass spectrometry (LC-MS) technology, we aimed to analyze intestinal small molecule metabolites in the intestinal cancer model group and the Codonopsis foetens treated group. Principal component analysis (PCA) and Partial Least Squares (PLS-DA) were used to identify the pattern of the data. And the metabolic characteristics of the cancer model group were explored based on the metabolic differences between the groups. Multivariate statistical analysis revealed that metabolites presented with differences included: Acetamide, Phosphoric acid, Hydrogen sulfite, Pyruvic acid, Cytosine, 2-Hydroxypyridine, Phosphoric acid, Uracil, Gamma-Aminobutyric acid, Glycerol alpha-monochlorohydrin, Thiosulfic acid, L-Valine, Cysteamine, Taurine, Creatine, Homocysteine, Hypoxanthine, Se-Methylselenocysteine, 5-Hydroxymethyluracil, Oxoglutaric acid, LysoPC(20:0), LysoPC(22:4(7Z,10Z,13Z,16Z)), LysoPC(18:2(9Z,12Z)), LysoPC(16:1(9Z)), LysoPE(0:0/16:0), LysoPE(0:0/18:2(9Z,12Z)), LysoPE(18:0/0:0), LysoPE(20:1(11Z)/0:0), etc. Combined with metabolic pathway analysis, pathways presented with differences included: Citrate cycle (TCA cycle), ABC transporters, 2-Oxocarboxylic acid metabolism, Taurine and hypotaurine metabolism, Butanoate metabolism), Phenylalanine, tyrosine and tryptophan biosynthesis, Biosynthesis of amino acids, Protein digestion and absorption, Aminoacyl-tRNA biosynthesis, C5-Branched dibasic acid metabolism, GABAergic synapse, Proximal tubule bicarbonate reclamation, Mineral absorption, Phenylalanine metabolism. The results showed that the proliferation of intestinal cancer cells caused cell metabolism disorders, manifesting as changes in metabolic pathways and resulting in changes in metabolites. Introduction According to data from disease center, intestine cancer is a disease with high morbidity and high mortality. Rectum is the predominant predilection site of intestine cancer, followed by sigmoid and others (Yu et al., 2016;Jemal et al., 2011;Nordin et al., 2018). Originated in the 1970s, metabonomics technology provides a new perspective for tumor research. In recent years, diagnosis and treatment of tumor have become the focus of research in life science. Metabolomics refers to the overall change of metabolites during a certain period, directly reflecting the final state of biological systems, which is particularly suitable for analyzing the changes of metabolic substances in the body caused by the synthesis and release of tumor cells in tumorigenesis and proliferation processes (Zhang and Wu, 2017;Sun et al., 2017;Qiao, 2018). Preliminary research had demonstrated that long-term constipation could lead to intestinal cancer (Luan et al., 2016;Xiao and Liu, 2018) of Codonopsis foetens on human colon cancer HCT116 and SW480 cell lines, of which results showed that Codonopsis foetens inhibited cell autophagy and induced apoptosis of colon cancer cells via activating the NF-jB pathway and promoting nuclear transportation of P65 (Luan et al., 2018;Zhao and Li, 2018). In the early stage, models of intestinal cancer and Codonopsis foetens treatment model were established. 
Long-term constipation-induced intestinal cancer is a long-term accumulation process. This study focuses on long-term constipation induced intestinal cancer (CCa), long-term constipation induced intestinal cancer with treatment group (CCaT) and blank control group(B) and perform metabonomic analysis. Therefore, the present study was based on liquid chromatography-mass spectrometry (LC-MS) coupled with metabolomics. Combined with bioinformatics analysis technology, we aimed to identify potential biomarkers and analyze their metabolic pathways, laying a certain foundation for revealing the mechanisms underlying long-term constipation induced cancer and the pharmacological action of Codonopsis foetens. Materials and methods 2.1. The construction of long-term constipation model, intestinal cancer model and Codonopsis foetens treatment model in mice 21 healthy Kunming mice were routinely fed in laboratory for 3 days and later designated into 7 groups, including blank group (B), 1,2-dimethylhydrazine induced intestinal cancer group (CaD), long-term constipation group (C), long-term constipation induced intestinal cancer group (CCa), 1,2-dimethylhydrazine induced intestinal cancer with treatment group (CaDT), long-term constipation with treatment group (CT), long-term constipation induced intestinal cancer with treatment group (CCaT). According to methods mentioned in references, 2.5 mg/(Kg d) of loperamide hydrochloride were administered orally by gavage to mice in all groups except for blank group, constructing constipation mice model, while mice in blank group were gavaged with equal amounts of saline. Gastrogavage was performed for 2 consecutive weeks, and successful construction of constipation model was confirmed with intestine propelling rates and defecation rates. 1,2dimethylhydrazine were injected intraperitoneally to mice of CaD group and successful construction of intestinal cancer model was confirmed by pathomorphology 6 weeks later. To construct 3 Codonopsis foetens treatment groups, namely the 1,2-dimethylhydrazine induced intestinal cancer with treatment group (CaDT), long-term constipation with treatment group (CT) and long-term constipation induced intestinal cancer with treatment group (CCaT), total extract of Codonopsis foetens were given by gavage to mice of 1,2-dimethylhydrazine induced intestinal cancer group (CaD), long-term constipation group (C) and longterm constipation induced intestinal cancer group (CCa). Metabolite extraction 500 lL of bacteria solution was obtained and volatilized to dryness. 300 lL of methanol-water = 4:1 (V/V) was added to reconstitute, then solution was eddied for 30 s, sonicated for 3 min in an ice-water bath and centrifuged for 10 min at low temperature (14,000 rpm, 4°C). 180 lL of supernatant was loaded into an LC-MS vial with liner and analyzed by LC-MS. Metabolite detection The instrument platform for LC-MS analysis was Ultra-High-Performance Liquid Chromatography Tandem Time-of-Flight Mass Spectrometry UPLC-Q-TOF/MS from Waters. The chromatographic conditions were listed as follows: column BEH C18; mobile phase A was water (containing 0.1% formic acid) and mobile phase B was acetonitrile (containing 0.1% formic acid). The flow rate was 0.40 mL/min, the injection volume was 3 lL, and the column temperature was 50°C. The mass spectrometry conditions were as follows: positive and negative ion scan modes was adopted for mass spectrometry signal acquisition of the sample. 
The electrospray capillary voltage, injection voltage and collision voltage were 1.0 kV, 40 V and 6 eV, respectively. The ion source temperature and solvent temperature were 120°C and 500°C; carrier gas flow rate: 900L/h; mass spectrum scanning range: 50-1000 m/z; scan time and interval time: 0.1 s and 0.02 s, respectively. Data analysis Baseline filtering, peak identification, integration, retention time correction, peak alignment, and normalization were performed on the raw data obtained after mass spectrometry analysis to finally obtain a data matrix of retention time, mass-to-charge ratio, and peak intensity. According to the characteristics of biomarkers under the liquid phase chromatography and mass spectrometry conditions, it was finally confirmed by comparing with the standard and database. Multivariate statistical analysis (PCA analysis, PLS-DA analysis, OPLS-DA) was performed using the normalized data matrix. By using a combination of multivariate statistical analysis of OPLS-DA and univariate statistical analysis, differential metabolites were screened. Results and analysis 3.1. Preliminary research showed that long-term constipation could induce intestine tumor. Preliminary research was based on long-term constipation group (C) induced by loperamide hydrochloride and blank group (B). Successful construction of intestinal cancer model was confirmed by pathomorphology (showed at Fig. 1, Fig. 2 and Fig. 3) (see Table 1). Base peak chromatogram under typical positive and negative ion modes of intestinal cancer group and Codonopsis foetens treatment group Base Peak Chromatogram (BPC) was first visually examined for all samples. Figs. 4 and 5 show base peak chromatogram of quality control samples under typical positive and negative ion modes. The results showed that all samples had strong signals, large peak capacity and reproducible retention time. Principal component analysis of intestinal cancer group and Codonopsis foetens treatment group PCA analysis is an unsupervised multidimensional statistical analysis method that can reflect the overall metabolic difference between samples and the variability between samples within the group, as shown in Fig. 6. From Fig. 6, we can see that samples from control group, intestinal cancer group and Codonopsis foetens treatment group have a better separation, indicating significant changes of metabolic profiles in intestinal cancer group and Codonopsis foetens treatment group. All the metabolites were normalized and then analyzed by cluster heat map, as shown in Fig. 7. From Fig. 7, the metabolite expression patterns in all samples are visually displayed. Enrichment analysis of metabolic pathways in intestinal cancer group and Codonopsis foetens treatment group Different metabolites were mapped in the KEGG database in search for metabolic pathways. KO enrichment analysis bubble metabolites, Tyrosine metabolism, Monobactam biosynthesis, Cysteine and methionine metabolism, Glyoxylate and dicarboxylate metabolism, Valine, leucine and isoleucine biosynthesis, Thyroid hormone synthesis, Glutathione metabolism, Dopaminergic synapse, Melanogenesis, Prolactin signaling pathway. Glucose metabolism of Codonopsis foetens induced apoptosis of intestinal cancer cells The rapid proliferation of tumor cells is energy-consuming. It has been found that the transformation of normal cells into tumor cells is accompanied by the remodeling of energy metabolism pathways. 
Most typically, energy supply of tumor cells is mainly dependent on the glycolytic pathway even in the presence of oxygen. These kind of remodeling is called aerobic glycolysis or the Warburg effect. Such change could provide abundant energy for the rapid growth of tumor cells, which is also an adaptive changes of tumor cells to the living environment under stress conditions. That is, to establish a foundation for tumor cells to adapt to changes in the microenvironment (Vander Heiden et al., 2009;Hu et al., 2018;Reddy and Aqueel, 2018). Amino acid metabolic pathways The up-regulation of a series of amino acids suggests a perturbation of the amino acid transportability during metabolism, which may be to meet the large amount of energy required for Fig. 7. Cluster heat map of all metabolites in different model groups. tumor growth, or to meet the demand for substances during the rapid growth of tumor cells (Hu et al., 2018;Zeeshan et al., 2018a,b). The level of amino acids (especially His, Lys, Arg) was significantly increased in the intestinal cancer model group compared with that of control group, indicating disorder in the amino acid metabolism in cancer model group. Then cell lesion occurred, causing diseases. Compared with the intestinal cancer model group, the level of amino acid in the treatment group decreased, indicating that Codonopsis foetens can affect the metabolites by regulating the amino acid metabolism pathway, displaying therapeutic effects. Both lysophosphatidylcholine and lysophosphatidylethanolamine showed an increasing tendency in the intestinal cancer model group compared with the control group, indicating that their metabolic disorders could lead to the occurrence of cell lesions, which in turn would cause intestinal tumors. After treatment of Codonopsis foetens, the level of lipids were down-regulated, which were close to those of the control group. Such tendency indicated that the Codonopsis foetens regulates the lipid metabolic pathway, displaying significant therapeutic effects on cancer. Changes of cellular metabolism is an important feature of tumors, and it interacts as both cause and effect of tumor occurrence and development. The occurrence and development of tumor will cause disorders in various metabolic pathways, such as the glycometabolism pathway, mitochondrial biosynthesis, amino acid metabolism, lipid metabolism and others (Zhou et al., 2016;Luan et al., 2017;Zeeshan et al., 2018a,b). In this study, the metabolites of the cancer model group and the control group also changed correspondingly. And such changes in the treatment group were close to those of the control group. Given that previous study had demonstrated Codonopsis foetens could promote apoptosis of colon cancer cells, subsequent studies are need to investigate transcriptomics, proteomics and metabolomics and further explore its mechanisms of action. In conclusion, we demonstrated that the proliferation of intestinal cancer cells would cause metabolic disorders, manifesting as changes in metabolic pathways, and thus leading to changes in metabolites.
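To make the multivariate workflow in the Data analysis section above concrete, the following sketch runs an unsupervised PCA on a normalized peak-intensity matrix of the kind produced by the LC-MS preprocessing. The data matrix and preprocessing choices are random placeholders, not the study's measurements; only the group labels and group size (7 groups of 3 mice) follow the Methods. The supervised PLS-DA/OPLS-DA steps used for biomarker screening are omitted here.

```python
# Illustrative PCA on a metabolite peak-intensity matrix (samples x features),
# mimicking the unsupervised step described in the Data analysis section.
# The intensities below are random stand-ins, not the study's LC-MS data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_metabolites = 21, 200                 # 7 groups x 3 mice, 200 features
X = rng.lognormal(mean=8, sigma=1, size=(n_samples, n_metabolites))
groups = np.repeat(["B", "CaD", "C", "CCa", "CaDT", "CT", "CCaT"], 3)

X_scaled = StandardScaler().fit_transform(np.log2(X))   # log-transform + autoscaling
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

# Group centroids in the first two principal components (score plot summary).
for label in np.unique(groups):
    print(label, np.round(scores[groups == label].mean(axis=0), 2))
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```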
2019-04-03T13:08:29.854Z
2018-11-16T00:00:00.000
{ "year": 2018, "sha1": "a121740116a56e9fad483fad118d91606717dd65", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sjbs.2018.11.010", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d5659fdace2dbac9b1313f8709febc98acb513c9", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
225221233
pes2o/s2orc
v3-fos-license
Investigation of the effect of online education on eye health in Covid-19 pandemic Online education Abstract: The aim of this research is to evaluate the effect of online education on eye health in Covid-19 pandemic and to present a new scale on this subject. For this purpose, 402 students (257 females, 145 males) with a mean age of 20.26 from different faculties of Pamukkale university were asked about eye health by e-mail between 8-13 July 2020. Also, eye fatigue questionnaire was applied to evaluate eye fatigue. Corrected item-total correlations and Cronbach Alpha internal consistency coefficient techniques were used for reliability analysis. In this study, online education eye health scale in Covid-19 pandemic was found to be positively correlated with eye fatigue questionnaire. According to the results of simple linear regression analysis conducted to determine the predictive value of the online education eye health scale in Covid -19 pandemic to eye fatigue, it was found that the online education eye health scale in covid-19 pandemic significantly predicted eye fatigue. Data analysis were conducted with SPSS 21.0 statistical package program in 0.01 significance level. INTRODUCTION The novel coronavirus originated from a seafood market place at Wuhan, China. The zoonotic resource of SARS-CoV-2 is unclear, but, previous analysis suggested bats as the main key reservoir (Lu et al., 2020). As yet, no hopeful clinical treatments or prevention methods have been developed against human coronaviruses. The main transmission ways of coronaviruses are direct or indirect human contact, and viral droplets (Yuan et al., 2006). These transmission pathways lead to the rapid spread of the disease. Therefore, social distance and hygiene are very important in preventing the spread of the disease. Coronavirus family had caused outbreaks in the past for example severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) (Wang et al., 2013;Zhong et al., 2003). SARS CoV-2 is responsible for Covid-19 pandemic worldwide. Covid-19 had some common symptoms like sore throat, cough, and fever (Tian et al., 2020). While Covid-19 may be asymptomatic or mild in most patients, it may be severe in some patients, leading to renal failure, respiratory failure and multiple organ failure (Chen et al., 2020;Huang et al., 2020). While typical symptoms were seen at the beginning of the pandemic, then atypical symptoms such as muscle pain, loss of taste or smell, and headache started to appear (Huang et al., 2020;Lee, Min, Lee & Kim, 2020). Eye fatigue-asthenopia consists of subjective complaints that cause discomfort in the eye (Gowrisankaran, Nahar, Hayes & Sheedy, 2012). Asthenopia manifests itself with complaints such as eye discomfort, tearing, dryness, blurred vision, inability to focus, foreign body sensation (Neugebauer, Fricke & Russmann, 1992). This is an important condition that affects attention and academic performance. In our age, the use of digital devices is increasing, depending on the technological developments. In addition, this period of use is increasing in the new generation. As a result, the risk of eye strain increases especially in young people. Considering the previous literature, it has been stated that asthenopia may be associated with various psychosocial and environmental factors. Prolonged near work, increased cognitive load, using computer/screen can affect the eye fatigue complaints (Agarwal, Goel & Sharma, 2013;Ostrovsky, Ribak, Pereg & Gaton, 2012). 
The prevelance of eye fatigue was observed by previous studies. Han et al., (2013) reported the prevelance of 57% in Chinese students (Han et al., 2013). In another study, the prevalence of asthenopia was found to be 53.3% in collage students. Also workload, time spent on computer per day, sexuality and time spent on handheld digital devices were found sinificantly related eye fatigue/astenopia in this study (Xu, Deng, Wang, Xiong & Xu, 2019). All social layers in society have been seriously affected by the Covid-19 pandemic. Especially people over the age of 65 have been the most restricted socially in this process. On the other hand, the education and training activities of young people were interrupted during this period. During this period, young people also had to stay at home. At the same time, online education activities have increased in this process. Online education has replaced face-to-face education widely all over the world. In this process, students were left alone with the screen for long hours. While this situation shapes their social relations and behavior patterns, it also affects the eye health. The Covid-19 pandemic is one of the most important social events of the last century worldwide. The pandemic, which first started in China, spread to the whole world in a very short time and has seriously affected our country. Since the first case in our country, serious measures have been taken and the spread rate of the Covid-19 pandemic has been tried to be reduced. Within the scope of these measures, schools were closed and online education-training activities continued. In our study, we aimed to measure the effect of online education on eye health of university students. In addition, we aimed to look at the consistency of the scale we developed with this survey by applying eye fatigue questionnaire. METHOD 2.1. Study Group Our study group consisted of 402 university students who receive education in different faculties of Pamukkale University during the 2019-2020 academic year. Participants of this study are students of Faculty of Education, Faculty of Arts and Sciences, Faculty of Engineering, Kale Vocational School, Tavas Vocational School and Faculty of Medicine. 257 (63.9%) female and 145 (36.1%) male students were included in this study. The mean age of the participants was 20.26 years. Procedure First the literature on the concept of eye health in Covid-19 pandemic was reviewed and the knowledge and theories related to this field were analysed. A pilot test was created by looking at the related literature. During the creation of the pilot test, it was asked to 5 field and measurement/evaluation experts to reflect the test to be measured. The pilot test was arranged and applicated to an appropriate sample. The pilot test application was carried out with 78 university students in order to check whether the items in the scale would be comprehensible to students. This application was carried out by the researcher via online and students' feedbacks were taken into consideration. Based on the analysis performed on students' feedbacks, five items were removed from the draft scale. This way, the scale with four items became ready for test application. The items were determined by item-factor analysis. And also to get evidence construct validity Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were carried out. Finally the online education eye health scale in Covid-19 pandemic was formed. The flow diagram of the study is shown in Figure 1. 
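The reliability statistics applied to the four-item scale later in the text (corrected item-total correlations and the Cronbach Alpha coefficient) are straightforward to reproduce; the sketch below computes both on a simulated 402 x 4 response matrix coded on the 3-point scale. The simulated responses and the helper functions are placeholders of our own, not the participants' data.

```python
# Sketch of the reliability statistics reported for the scale:
# Cronbach's alpha and corrected item-total correlations.
# The response matrix is simulated; it is not the study's data.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items) with Likert-coded responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return out

rng = np.random.default_rng(1)
latent = rng.normal(size=(402, 1))                 # shared "eye health" factor
responses = np.clip(np.rint(2 + latent + rng.normal(scale=0.5, size=(402, 4))), 1, 3)

print("alpha:", round(cronbach_alpha(responses), 2))
print("item-total r:", [round(r, 2) for r in corrected_item_total(responses)])
```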
The eye fatigue questionnaire consisted of 10 questions (tired eye, sore/aching eye, irritated eye, watery eye, dry eye, eye strain, hot/burning eye, blurred/doubled vision, difficulty in focusing/headache, visual discomfort). The online education eye health scale in Covid-19 pandemic was a four-item, one sub-dimensional scale of the 3-point Likert type. The items of the scale were 1: my eye health has not changed, 2: slight deterioration in my eye health, 3: severe deterioration in my eye health. The eye fatigue questionnaire and the online education eye health scale in Covid-19 pandemic were applied to university students by e-mail. Before starting the test, the necessary explanations were made. The tests were applied between 8-13 July 2020.

Statistical Analysis: Before starting statistical analysis, it was checked whether there was any missing data in the data set. After determining that the data set had a normal distribution (see Table 2 for skewness and kurtosis), the research data were analyzed. The Cronbach Alpha technique was preferred for reliability analysis. Furthermore, Pearson correlation and simple linear regression analysis were used in the analysis of the data. The analyses were carried out with the IBM SPSS program at a 0.01 level of significance.

RESULTS / FINDINGS
This part of the study includes the construct validity analysis, reliability analysis, correlation and simple linear regression analysis.

Construct Validity
In order to determine the properties of the factorial design, Exploratory Factor Analysis (EFA) was conducted. Before EFA, to test whether the sample size was sufficient for factoring, the Kaiser-Meyer-Olkin (KMO) test was carried out. As a result of the analysis, the KMO value was calculated to be .798. In accordance with this finding, the sample size can be acknowledged to be "sufficient" for exploratory factor analysis (Field, 2009). Furthermore, the results of Bartlett's Test of Sphericity revealed that the chi-square value was significant, χ² = 922.98 (p<.001). After collecting these evidences about the suitability of the data set, factor analysis was performed using the principal components analysis method (Tabachnick & Fidell, 2012). As a consequence of the EFA, a single factor structure that explains 76.10% of the total variance was obtained. Item factor loads ranged from .79 to .83. Corrected item-total correlations and Cronbach Alpha internal consistency coefficient analysis were used for the reliability of the online education eye health scale in Covid-19 pandemic. The corrected item-total correlations of the scale have values between 0.80 and 0.84. According to the analyses, the Cronbach Alpha reliability coefficient of the scale was obtained as 0.92 (Table 2). According to the results of the analysis, a positive (r = .78, p<.01) correlation was found between eye fatigue and the online education eye health scale in Covid-19 pandemic (Table 3). According to the simple linear regression analysis results, it was observed that the eye health scale significantly predicted eye fatigue in the Covid-19 period. According to these analyses, eye health in the Covid-19 period explained 62% of the total variance related to eye fatigue (R² = .62; FReg = 652.44; p<.01) (see Table 4).

DISCUSSION and CONCLUSION
The Covid-19 pandemic has profoundly affected all societies in the world, and has had many social, economic and psychological results. One of these results is the social isolation measures that have been taken to slow the course of the disease.
Schools and universities, where interpersonal distance cannot be maintained, are among the most easily spread environments. For this reason, it is very important to take necessary measures regarding education to reduce the speed of transmission of the epidemic (Afacan & Avcı, 2020). Accordingly, the Board of Higher Education has decided to close schools all over Turkey for three weeks from the date of March 16, 2020. Schools remained closed due to the continuing outbreak, and the Spring term was completed with online education. Although online education has the effect of reducing the transmission rate, it may have negative effects on eye health. In this study, the effect of online education on eye health in Covid-19 period was investigated and a scale was developed on this subject. In addition, the relationship between eye health and eye fatigue in online education was investigated in Covid-19 period using scale. First of all, according to the analysis conducted for the scale, scale has been brought to the literature as a valid and reliable tool (see Tables 1, 2 and Figures 1, 2). With the developed scale, it was observed that the eye health of the university students was negatively affected by the online education of the Covid-19 pandemic process. In addition to this result, in the Covid-19 period, a positive correlation was found between the deterioration of eye health and eye fatigue in online education. In other words, eye fatigue increases as the result of online education deteriorate eye health. In recent years, internet and screen usage has been increasing rapidly among the youth. Eye health can be negatively affected due to this increase. Previous studies have shown that eye health related to screen usage may be seriously affected. Digital screens like tablets, computers and mobile phones can cause harm by radiating short high energy waves that may penetrate eye tissues and can finally contribute to photochemical damage to the retinal cells. By this way, harmfull waves can cause a variety of eye problems ranging from dry eye to age-related macular degeneration (Bhattacharya, Saleem & Singh, 2020). It has been stated that as the duration of daily internet use increases, asthenopic complaints also increase significantly (Kaya, 2019). Another study indicated that computer use for more than 6 hours led to an increase in eye fatigue complaints (Agarwal, Goel & Sharma, 2013). In addition, it has been shown in previous studies that the symptoms of eye fatigue such as burning sensation, dryness, and tearing in the eyes due to the use of electronic devices such as computers and mobile phones have increased (Kaya, 2019; Kim, Lim, Gu & Park, 2017). In the study conducted by Kim et al. (2017), 59 participants used tablets and smart mobile device for 1 hour. Eye fatigue was evaluated before and 1 hour after using the tablet. According to this study, using tablets for 1 hour significantly increased the complaints of eye fatigue/asthenopia (Kim et al., 2017). Environmental and social factors can also affect the eye health. In the study of Guo et al. on 1022 students; students' socioeconomic, dietary habits, lifestyles, eye-related symptoms, eye care habits and history of diseases were evaluated. In this study, it was investigated whether there is a relationship between fruit-vegetable consumption and the risk of asthenopia. According to the results of the study, it was found that dark-green leafy fruit consumption is associated with a lower risk of asthenopia (Guo et al., 2018). 
In the study conducted by Suh et al., (2018) on 60 patients, the patients slept in the laboratory for 3 nights. On the 3rd night, the patients slept in a 5-10 lux light environment. Eye fatigue findings were evaluated in the morning of the third day and on the fourth day. It was observed that eye strain, difficulty in focusing and ocular discomfort increased significantly in patients sleeping at 10 lux light intensity (Suh, Na, Ahn, & Oh, 2018). Limitations and Suggestions The study includes university students studying at various faculties of Pamukkale University. This can only give an idea about students studying at this university. Multicenter studies can give a wider idea about the subject. Also, trying to determine whether this new scale measures eye health in different age groups can be considered as a new research topic. This study is a quantitative research. In order to test the results of this study, a qualitative research on a similar subject may be proposed in the future. In summary, it can be said that the validity and reliability of the eye health scale related to online education is sufficient in the Covid-19 period, which we prepared for the students who stayed at home during the Covid-19 period and thought that their eye health would deteriorate due to the use of more screen in addition to their normal use. In addition, it can be said that it was positively correlated with the eye fatigue questionnaire and its predictability was good. Declaration of Conflicting Interests and Ethics The authors declare no conflict of interest. This research study complies with research publishing ethics. Permission was received from the Non-Interventional Clinical Ethics Committee of a University (dated 07.07.2020 and numbered 13). The scientific and legal responsibility for manuscripts published in IJATE belongs to the author(s).
2020-09-03T09:04:14.897Z
2020-09-15T00:00:00.000
{ "year": 2020, "sha1": "e2734775ecc89104d3f391d2100474565b53a581", "oa_license": null, "oa_url": "https://dergipark.org.tr/en/download/article-file/1265031", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "20c0f3963c5c7bbf2476590c03e8e74f27738052", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
268084632
pes2o/s2orc
v3-fos-license
The SGLT2 inhibitor Empagliflozin promotes post-stroke functional recovery in diabetic mice Type-2 diabetes (T2D) worsens stroke recovery, amplifying post-stroke disabilities. Currently, there are no therapies targeting this important clinical problem. Sodium-glucose cotransporter 2 inhibitors (SGLT2i) are potent anti-diabetic drugs that also efficiently reduce cardiovascular death and heart failure. In addition, SGLT2i facilitate several processes implicated in stroke recovery. However, the potential efficacy of SGLT2i to improve stroke recovery in T2D has not been investigated. Therefore, we determined whether a post-stroke intervention with the SGLT2i Empagliflozin could improve stroke recovery in T2D mice. T2D was induced in C57BL6J mice by 8 months of high-fat diet feeding. Hereafter, animals were subjected to transient middle cerebral artery occlusion and treated with vehicle or the SGLTi Empagliflozin (10 mg/kg/day) starting from 3 days after stroke. A similar study in non diabetic mice was also conducted. Stroke recovery was assessed using the forepaw grip strength test. To identify potential mechanisms involved in the Empagliflozin-mediated effects, several metabolic parameters were assessed. Additionally, neuronal survival, neuroinflammation, neurogenesis and cerebral vascularization were analyzed using immunohistochemistry/quantitative microscopy. Empagliflozin significantly improved stroke recovery in T2D but not in non-diabetic mice. Improvement of functional recovery was associated with lowered glycemia, increased serum levels of fibroblast growth factor-21 (FGF-21), and the normalization of T2D-induced aberration of parenchymal pericyte density. The global T2D-epidemic and the fact that T2D is a major risk factor for stroke are drastically increasing the number of people in need of efficacious therapies to improve stroke recovery. Our data provide a strong incentive for the potential use of SGLT2i for the treatment of post-stroke sequelae in T2D. Supplementary Information The online version contains supplementary material available at 10.1186/s12933-024-02174-6. Introduction Stroke is the third-leading cause of death and disability worldwide [1].About 30% of ischemic stroke patients have diagnosed type 2 diabetes mellitus (T2D) [2], which is an established predictor of poor functional outcome [3,4] and hampered recovery after stroke [5][6][7], thereby further amplifying the global disability burden.Although both pharmacological and lifestyle change strategies can reduce stroke risk in T2D [8][9][10][11], there are currently no effective therapies targeting impaired post-stroke recovery, emphasizing the necessity for new pharmacological treatments. 
Hyperglycemia during acute ischemic stroke is an independent predictor of worsened post-stroke recovery [7,12,13].However, intensive interventions targeting acute hyperglycemia did not result in improved functional outcome [14], and clinical studies focused on chronic poststroke hyperglycemia regulation are lacking.Recently, pre-clinical studies from our group have demonstrated that the normalization of hyperglycemia by glucagon-like receptor 1 (GLP-1R) activation or by dipeptidyl peptidase-4 (DPP-4) inhibition in the chronic, post-acute phase after stroke was associated with improved poststroke functional recovery in obese/diabetic mice [15,16].However, in these studies, attenuation of hyperglycemia was accompanied by the normalization of insulin resistance and an overall improvement of glucose metabolism, making it impossible to determine whether the chronic regulation of hyperglycemia per se bears therapeutic value.Additionally, the efficacy mediated by direct neurotrophic properties of GLP-1R agonists and DPP-4 inhibition independently from metabolic regulation cannot be excluded. Sodium-glucose cotransporter-2 inhibitors (SGLT2i) are emerging anti-diabetic drugs that normalize hyperglycemia by blocking renal proximal tubular glucose reabsorption [17].Furthermore, these drugs offer a broad range of beneficial effects beyond glycemic control, such as a reduction of weight [18] and of TD2-induced inflammation [19,20].Moreover, SGLT2i decrease the risk of hypoglycemic events [21] and exert beneficial effects on the cardiovascular system, not only by lowering hypertension [22] and ameliorating endothelial dysfunction [23,24], but also by significantly improving cardiovascular outcomes [25,26].More specifically, in the EMPA-REG OUTCOME trial, the SGLT2i Empagliflozin exhibited significant cardiovascular benefits independently of HbA1c levels [27], stemming the international recommendations for T2D patients with cardiovascular disease to receive SGLT2i treatment in addition to metformin, regardless of baseline HbA1c levels [28].Even though the effects of Empagliflozin on stroke risk were neutral [27], SGLT2i could improve post-stroke recovery in T2D due to their potent anti-glycemic effects as well as their impact on several processes implicated in stroke recovery.Indeed, recent literature demonstrated that a pre-stroke treatment with SGLT2i induces ischemic tolerance after stroke [29].Moreover, SGLT2i can positively impact brain metabolism, even in non-diabetic conditions, as evidenced by both pre-clinical [30][31][32], and clinical studies [33]. Because of the above reported effects to modulate important processes involved in stroke recovery, and the well-known effects on attenuation of hyperglycemia, we hypothesized that SGLT2 inhibition could play a beneficial role in stroke recovery in T2D.Therefore, the aim of this study was to determine in a clinically relevant murine model of T2D and stroke whether the SGLT2i Empagliflozin improves post-stroke recovery when administered chronically in the post-stroke recovery phase.We also investigated whether potential recovery effects of Empagliflozin were associated with the regulation of glycemia and/or affected other factors involved in stroke recovery, i.e. fibroblast growth factor 21 (FGF-21) [34,35], increased production of ketone bodies [36,37], stroke-induced neurogenesis [38,39], neuroinflammation [40] and post-stroke neovascularization [41]. 
Animals Eighty C57BL/6JRj mice (Janvier Labs, France) were used in this study.Mice were housed in environmentally controlled conditions (22 ± 0.5 °C, 12/12 h light/dark cycle with ad libitum access to food and water).The mice were kept under pathogen free conditions in type III size individually ventilated cages with wood chip bedding and nest material. Sample size calculation Group sizes were determined based on ≈ 20% effect size between groups in functional recovery with α = 0.05 and a statistical power of 90%.Standard deviation used in sample size calculation was obtained from pilot experiments.The analyses suggested the sample size of minimum n = 5 per group.However, after taking into consideration the success rate of stroke surgery, mortality and likelihood of statistical outliers, the experimental groups were set at n = 10-15 each. Diabetic study Starting at four weeks of age, mice were kept on either standard laboratory chow (n = 20, hereafter referred to as non-T2D group) or high fat diet (HFD; n = 40, 60% energy from saturated fat, hereafter referred to as T2D) for 8 months.Obesity and T2D were confirmed by a body weight increase > 20%, fasting glucose levels > 7 mmol/L, hyperinsulinemia, and decreased insulin sensitivity.Then, mice were subjected to transient middle cerebral artery occlusion (tMCAO) (n = 15 for non-T2D and n = 30 for T2D) or sham surgery (n = 5 for non-T2D and n = 10 for T2D).After tMCAO, all T2D mice were switched to SD, to reflect the clinical situation of a balanced post-stroke diet.5 animals in the T2D group and 3 in the non-T2D group were euthanized shortly after tMCAO because the humane endpoint was reached. Three days after stroke, the remaining T2D mice were randomized in two experimental groups and per orally treated daily with vehicle (0.5% methylcellulose solution, n = 12, hereafter referred to as T2D-VH) or the SGLT2i Empagliflozin (Boehringer-Ingelheim, Germany) (n = 13, 10 mg/kg of body weight, hereafter referred to as T2D-E).We specifically chose this delayed treatment to rule out potential acute neuroprotective effects mediated by Empagliflozin.Non-T2D animals were also treated with vehicle starting 3 days after stroke.Sham-operated animals were also randomized to either vehicle treatment (n = 5 for non-T2D and T2D-VH) or Empagliflozin treatment (n = 5).Forelimb sensorimotor function (Forelimb grip test, see below) was measured weekly for 5 weeks (timepoint where non-T2D mice were fully recovered).Then, all mice were sacrificed, and brains and serum samples were collected for analysis.See Fig. 1a for the experimental design. Non-diabetic study In this experiment, 20 adult male C57BL/6 J mice were subjected to tMCAO surgery.Shortly after tMCAO, 6 mice were euthanized because the humane endpoint was reached.Three days after stroke, mice were randomized in two experimental groups and per orally treated daily with either vehicle (0,5% methylcellulose solution, n = 7, hereafter referred to as SD-VH) or the SGLT2i Empagliflozin (10 mg/kg of body weight, n = 7, hereafter referred to as SD-E).Forelimb sensorimotor function was measured weekly during 3 weeks (timepoint where SD-VH mice fully recovered).See Fig. 1b for the experimental design. 
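The sample-size reasoning described in the Methods (a two-group comparison powered at 90% with alpha = 0.05 for an approximately 20% between-group difference in functional recovery) can be reproduced with a standard power calculation, as sketched below. Because the pilot-experiment standard deviation is not stated in the text, the mean and SD used here are assumed placeholders, so the resulting group size is only illustrative.

```python
# Illustrative two-sample power calculation mirroring the stated design
# (alpha = 0.05, power = 0.90, ~20% difference between group means).
# The mean and SD are assumed values, not the pilot data referenced in the text.
from statsmodels.stats.power import TTestIndPower

control_mean = 100.0               # assumed baseline grip-strength recovery (arbitrary units)
difference = 0.20 * control_mean   # ~20% effect between groups
pilot_sd = 12.0                    # placeholder for the unreported pilot SD

effect_size = difference / pilot_sd   # Cohen's d
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.90,
                                          alternative="two-sided")
print(f"d = {effect_size:.2f}, n per group = {n_per_group:.1f}")
```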
Transient middle cerebral artery occlusion Stroke was induced by tMCAO using the intraluminal filament technique as described previously [42].Briefly, mice were anesthetized by inhalation of 3% isoflurane and throughout surgery, anesthesia was maintained by 1.5% isoflurane.Using a heated pad with feedback from a thermometer, body temperature of animals was kept at 37-38 °C.Left external (ECA) and internal (ICA) carotid arteries were exposed and a 7-0 siliconecoated monofilament (total diameter 0.17-0.18mm) was inserted into the ICA until the origin of the MCA was blocked.The occluding filament was removed after 30 min.Cerebral blood flow in the vicinity of MCA was monitored by Laser Doppler Blood Flow Monitor (Moor Instruments Ltd, UK), and no differences between the groups were observed (data not shown).Stroke induction was considered unsuccessful when the occluding filament could not be advanced within the internal carotid artery beyond 7-8 mm from the carotid bifurcation, or if mice lacked symptoms of neurological impairment based on the neurological severity score [43].After surgery, all mice were given analgesic (Carprofen, 5 mg/kg) and soft food. Fasting glycemia and ITT Fasting glycemia was measured after an overnight (ON) fasting via blood from a tail tip puncture and a glucometer.For insulin tolerance tests (ITT), mice were fasted for 2 h.Hereafter, baseline glucose levels were measured.Then, mice were injected intraperitoneally (i.p.) with Fig. 1 Experimental design of the studies.a 4-week-old male C57BL6/J mice were fed for 8 months with SD or HFD.Stroke was then induced experimentally by 30 min tMCAO and the mice on HFD were then changed to SD for the entire duration of the recovery phase.Three days after stroke, T2D mice were randomized in two groups: a group receiving 10 mg/kg/day Empagliflozin and a VH-group.During the recovery phase, behavioral tests were performed once weekly for 5 weeks.Serum was collected before stroke and at two and five weeks after stroke.The metabolic state of the animals was characterized before stroke to confirm T2D, and at 2 weeks after stroke to confirm efficacy of Empagliflozin treatment.At 5 weeks after stroke, mice were sacrificed to collect brains for immunohistochemistry and serum for assessment of metabolic parameters.b 3-month-old mice were subjected to tMCAO surgery with a 30 min occlusion.Three days after tMCAO, mice were randomized in 2 groups: a group receiving 10 mg/kg/day Empagliflozin and a VH-group.Behavioral tests were performed once weekly for 3 weeks.HFD = high-fat diet, SD = standard diet, ITT = insulin tolerance test, E = 10 mg/kg/day Empagliflozin p.o., VH = vehicle (0.5% methylcellulose), tMCAO = transient middle cerebral artery occlusion 0.5 U/kg human insulin and blood glucose levels were measured at 15, 30, 45, 60, 75 and 90 min after injection.Area under the curve was computed for statistical analysis. Assessment of sensorimotor function To assess sensorimotor function, forelimb grip strength was tested as previously described [44].Briefly, mice were held firmly by the body and allowed to grasp the grid with the affected forepaw.Hereafter, they were dragged backwards until their grip was broken.Grip strength was measured using a grip strength meter (Harvard apparatus, MA, USA) at 3 days and 1-5 weeks after stroke induction.Ten trials were performed, and the highest value was recorded. Immunohistochemistry Mice were anesthetized using an i.p. 
injection with an overdose of sodium pentobarbital.Hereafter, blood was collected via cardiac puncture and mice were perfused transcardially using phosphate-buffered saline (PBS) followed by a 4% ice-cold paraformaldehyde (PFA) solution.Brains were harvested and stored ON in 4% PFA at 4 °C.After 24 h of fixation, brains were transferred to PBS containing 25% sucrose and stored at 4 °C until they sank.Then, 30 μm thick coronal sections were cut using a sliding microtome, and sections were stored at − 20 °C in anti-freeze solution. Quantitative microscopy and image analysis Ischemic stroke volume assessment Ischemic volume was measured using all serial sections containing visual ischemic damage.Briefly, NeuNlabelled sections were displayed live on a computer monitor using a 1.25 × lens.Volume of the whole contralateral, non-damaged hemisphere, and of the intact part of the ipsilateral, stroke-damaged hemisphere was determined using the Cavalieri Estimator probe (Stere-oInvestigator, MBF Bioscience, USA).The ischemic volume was then determined by subtracting the ipsilateral volume from the entire contralateral volume, thus adjusting for stroke-induced tissue shrinkage. Assessment of neuroinflammation The Fiji opensource image analysis software was used to evaluate Iba-1 immunoreactivity [45].Briefly, images of Iba-1 staining in striatum were acquired at 20 × magnification using the Olympus BX40 microscope.Images were then converted into grayscale (8-bit) mode and thresholded.The lowest Iba-1 immunoreactivity in the non-T2D group was used as baseline to determine the threshold.For each hemisphere, 3 images containing > 90% of the striatum were analyzed, resulting in a total of 9 pictures analyzed per hemisphere per animal.The Iba-1 + area was measured and expressed as percentage of total area.Animals with an ipsilateral Iba-1 response less than 1.5-fold compared to their contralateral hemisphere were classified as non-responders and excluded from analysis. Assessment of neurogenesis Manual counting of Ki67 in the subventricular zone and of DCX in striatum was performed on three coronal brain sections using the Olympus BX40 microscope.The first section was selected based on its anatomical location along the rostral-caudal axis (approximately 1 mm from Bregma).The second and third sections were 300 and 600 μm caudal from the first section, respectively.The number of Ki67 + cells in the subventricular zone and of DCX in the striatum was manually counted in all three sections using a dry 40 × lens.All counts were performed by experimenters blinded for experimental groups. 
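The Iba-1 measurement described above was done in Fiji; purely as an illustration of the same logic (fixed baseline threshold, averaging over the nine fields per hemisphere, and the 1.5-fold responder rule), a minimal sketch follows. The image data, function names and threshold value are hypothetical.

# Rough re-implementation of the Iba-1 quantification described above
# (performed in Fiji in the study): 8-bit images are binarized at a fixed
# baseline threshold and the Iba-1+ area is expressed as % of the image area.
# Image data, names and the threshold are hypothetical placeholders.
import numpy as np

def iba1_area_fraction(images_8bit, threshold):
    """Mean Iba-1+ area (%) over the fields of one hemisphere (3 fields x 3 sections)."""
    return float(np.mean([100.0 * (img >= threshold).mean() for img in images_8bit]))

def responder_status(ipsi_images, contra_images, threshold, min_fold=1.5):
    """Exclusion rule: the ipsilateral response must exceed 1.5x the contralateral one."""
    ipsi = iba1_area_fraction(ipsi_images, threshold)
    contra = iba1_area_fraction(contra_images, threshold)
    return ipsi >= min_fold * contra, ipsi, contra

# Synthetic stand-ins for the nine 20x fields per hemisphere:
rng = np.random.default_rng(0)
ipsi_fields = [rng.integers(0, 256, (512, 512), dtype=np.uint8) for _ in range(9)]
contra_fields = [(img // 2).astype(np.uint8) for img in ipsi_fields]
print(responder_status(ipsi_fields, contra_fields, threshold=120))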
Assessment of vascularization and blood-brain barrier leakage Two brain sections were selected for assessing vascularization, and one brain section was selected for assessing blood-brain barrier (BBB) leakage.Confocal images were obtained using a Leica DMi8 confocal microscope.One to two images per section were taken at 20 × magnification from the dorsolateral and medial striatum depending on the dimension of the brain (image size: 775 μm × 775 μm; z-stack size = 10 μm; step size = 0.5 μm).The same acquisition settings were applied for each image.Immunohistochemical images were compared to the images from the NeuN staining to visualize the ischemic core, and the images from regions outside of the ischemic core were excluded.Quantification of the vascularization parameters and BBB leakage were performed on the maximum projected and automatically thresholded images using the area fraction measurement tool of Fiji opensource image analysis software [45].The area density was expressed as the percentage of PDXL and CD13 of the total image area.Pericyte coverage of the vessels was obtained by calculating the area of the colocalizing CD13 and PDXL signals and normalizing it to the total PDXL area of the same image.Activated pericytes were identified by NG2 [46].The area of the activated pericytes was obtained by calculating the area of the colocalizing NG2 and CD13 signals and normalizing it to the total CD13 area of the same image.The density of parenchymal pericytes was calculated on the maximum projected and automatically thresholded images by subtracting the colocalizing PDXL/CD13 pixels from the CD13 ones.For vessel length and branch counts, the maximum projected images were binarized by automatic thresholding and skeletonized, and the skeletons were analyzed using the AnalyzeSkeleton plugin [47] as previously described [48].Extravascular fibrinogen and albumin were quantified to evaluate BBB leakage.PDXL vessels were outlined to exclude intravascular plasma proteins.Then, by applying an automatic image threshold, the area covered by extravascular fibrinogen and albumin was quantified and expressed as the percentage of the total image area using the area fraction measurement tool.The image analysis was scripted and automated using the programming language ImageJ Macro minimizing the potential for human error or bias. Data and statistical analysis Data were checked for statistical outliers by using the ROUT method, and for normality by using the Shapiro-Wilk normality test. Parametric tests: For pre-and post-stroke metabolic parameters and NeuN analysis, Brown-Forsythe and Welch ANOVA test, followed by two-stage linear stepup procedure of Benjamini, Krieger, and Yekutieli was used.For behavioral tests, two-way repeated measures ANOVA with Geisser-Greenhouse's correction followed by Dunnett T3 was used.For neuroinflammation, neurogenesis and vascular analysis, two-way repeated measures ANOVA followed by two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli was used.All data were analyzed by GraphPad Prism Version 9.0.Data are expressed as mean ± SD. p-values less than 0.05 were considered statistically significant. To assess the potential efficacy of Empagliflozin to improve stroke recovery, forepaw grip strength recovery was followed up for 5 weeks after tMCAO.After tMCAO, non-T2D mice recovered fully within 5 weeks while T2D-VH mice remained significantly impaired (Fig. 
2f, g). Importantly, Empagliflozin treatment completely normalized the T2D-induced worsening of stroke recovery (Fig. 2f, g). No differences in stroke volume were observed between groups (Fig. 2h), demonstrating that the improved recovery was not due to differences in infarct size mediated by Empagliflozin-induced neuroprotection.

To investigate the potential association between Empagliflozin-induced improved recovery and metabolic changes, we analyzed several metabolic parameters after stroke. In accordance with previous studies [49][50][51], tMCAO and the subsequent switch from HFD to SD induced significant weight loss in the first two weeks after tMCAO in all T2D mice, without significant differences between T2D-VH (− 34 ± 3%) and T2D-E (− 28 ± 7%) groups (Fig. 2i). At 2 weeks after stroke, all T2D mice were still IR (Fig. 2j, k) and hyperinsulinemic (Fig. 2l), irrespective of treatment. However, T2D-E mice became normoglycemic while hyperglycemia was still present in T2D-VH mice, showing that Empagliflozin efficiently reduced hyperglycemia, but not IR, after stroke in our model (Fig. 2m).

Unlike T2D mice subjected to tMCAO, T2D mice subjected to sham surgery only lost 12% of their initial body weight during the first two weeks post-diet change, resulting in differences in the metabolic state between stroke and sham mice (Additional file 1: Fig. S1a). In sham-operated animals treated with Empagliflozin, a trend (p = 0.112) towards attenuated hyperglycemia was observed, whereas no effect on insulin sensitivity was observed (Additional file 1: Fig. S1b-d).

In summary, our results demonstrate that a post-stroke treatment with Empagliflozin significantly improves post-stroke recovery, in association with the normalization of hyperglycemia.

Improved stroke recovery by Empagliflozin is associated with increased post-stroke serum levels of FGF-21 but not BHB
Increased FGF-21 levels have been associated with post-stroke recovery [34]. Moreover, Jiang and colleagues recently demonstrated that a therapeutic administration of FGF-21 improves post-stroke recovery in diabetic mice [35]. Since recent literature has shown that SGLT2i can increase FGF-21 levels [52], we investigated whether the improved recovery in the T2D-E group was associated with increased serum levels of FGF-21. Before stroke, no difference in FGF-21 levels between non-T2D and T2D mice was recorded (Fig. 3a). At two weeks after stroke, FGF-21 levels were significantly decreased in both non-T2D and T2D-VH mice (Fig. 3a) and remained significantly lower than pre-stroke levels in T2D-VH (p = 0.01) at five weeks post-stroke (Fig. 3a). Interestingly, the post-stroke treatment with Empagliflozin resulted in a significant increase of serum FGF-21 levels, both at two and at five weeks after stroke (Fig. 3b). Taken together, these results indicate that stroke decreases serum FGF-21 levels independently of the metabolic state of the animals, and that Empagliflozin prevents this stroke-induced reduction, in association with improved recovery.

In accordance with existing literature [53], HFD-induced T2D significantly increased serum BHB levels (Fig. 3c). After stroke, there was a trend towards a decrease in serum BHB (p = 0.198 at 2 weeks and p = 0.056 at 5 weeks after stroke) in T2D-VH mice compared to pre-stroke levels (Fig. 3c). We found no difference between T2D-VH and T2D-E mice at either 2 or 5 weeks after stroke (Fig. 3d), indicating that after stroke, SGLT2i-treatment does not upregulate ketone production in T2D mice.

Fig. 2 Effect of Empagliflozin treatment on metabolic parameters and functional recovery after stroke. Effect of 8 months of HFD on weight (a), insulin sensitivity shown as plotted curve (b) and area under the curve (c), serum insulin levels (d) and fasting glycemia (e). Forepaw grip strength after stroke shown as plotted curve (f) and area under the curve (g). Ischemic stroke volume (h). Body weight during stroke recovery (i). Insulin sensitivity, shown as plotted curve (j) and area under the curve (k), serum insulin (l) and fasting glycemia (m) at 2 weeks after stroke. Data are presented as mean ± SD. Statistical significance was calculated using two-way repeated measures ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test for insulin tolerance tests, forepaw grip strength and post-stroke body weight (b, f, i, j), Welch's t-test for weight, area under the curve for ITT, plasma insulin and fasting glycemia before stroke (a, c-e), and Brown-Forsythe and Welch's one-way ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test for post-stroke area under the curve of grip and ITT, stroke volume, plasma insulin and fasting glycemia (g, h, k-m). Results were considered significant if p < 0.05. * denotes a significant difference between non-T2D and T2D-VH, ° denotes a significant difference between non-T2D and T2D-E, $ denotes a significant difference between T2D-VH and T2D-E. * and $ denote p < 0.05, ** and $$ denote p < 0.01, ***, °°° and $$$ denote p < 0.001, **** and °°°° denote p < 0.0001.

Effect of Empagliflozin treatment on post-stroke neurogenesis
Stroke-induced neurogenesis has been associated with improved stroke recovery (reviewed in [39]). Therefore, we next assessed whether improved functional recovery by Empagliflozin after stroke was associated with the regulation of this process. Neural stem cell proliferation and neuroblast formation were analyzed by quantifying Ki67 + cells in the SVZ and DCX + neuroblasts in the striatum, respectively. No differences in Ki67 + cells were recorded between groups in the SVZ (Fig. 4a). In accordance with existing literature [51], stroke induced a significant increase in DCX + cells in the ipsilateral, stroke-damaged striatum in all three groups (Fig. 4b). However, there was no difference in the number of DCX + cells between the groups (Fig.
4b), suggesting that improved stroke recovery in the T2D-E group was not due to increased neurogenesis.and β-hydroxybutyrate (BHB) (c) levels before stroke and at 2 and 5 weeks after stroke in non-diabetic controls (non-T2D) and type-2 diabetic mice (T2D).Serum FGF-21 (b) and BHB (d) levels of T2D mice treated with VH (T2D-VH) and diabetic mice treated daily with 10 mg/kg Empagliflozin (T2D-E) at 2 and 5 weeks after stroke.The grey area indicates the range of pre-stroke levels of Fgf-21 (b) and BHB (d) T2D mice.Data are presented as mean ± SD.Statistical significance was calculated using two-way repeated measures ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test.Results were considered significant if p < 0.05.§ denotes a significant difference between T2D-VH and T2D-E, * denotes a significant difference between non-T2D and T2D, # denotes a significant difference compared to pre-stroke in the same group.*denotes p < 0.05, **, § § and ## denote p < 0.01.Sample size: n = 5-10 per group.At the pre-stroke and intermediate post-stroke timepoint, each data point represents results from serum pooled from 2-3 animals Effect of Empagliflozin treatment on T2D-induced Iba-1 immunoreactivity To evaluate stroke-induced neuroinflammation, we quantified Iba-1 immunoreactivity in ipsilateral, stroke-damaged striatum versus the intact contralateral hemisphere.Stroke induced a significant upregulation of Iba-1 in ipsilateral striatum compared to contralateral in all three groups.Notably, this increase was significantly higher in T2D-VH mice than in non-T2D mice (Fig. 5).However, Empagliflozin treatment did not significantly decrease Iba-1 immunoreactivity compared to T2D-VH animals, although a trend was observed (p = 0.103) (Fig. 5).Moreover, we observed no apparent morphological differences of striatal microglia between groups.In sham-operated animals, T2D upregulated striatal Iba-1 compared to non-T2D controls, but no differences between groups were detected between sham-T2D-VH and sham-T2D-E animals (Additional file 1: Fig. S2).Taken together, these data indicate that, at least at 5 weeks after stroke, improved stroke recovery in Empagliflozin treated animals is likely not associated with attenuated post-stroke Iba-1 immunoreactivity. Effect of Empagliflozin treatment on post-stroke neovascularization To investigate whether Empagliflozin treatment has an impact on the vascular system after stroke, we evaluated cerebral vascular changes in terms of vessel (PDXL + ), total pericyte density (CD13 + ) and coverage (CD13 + / PDXL + ratio), vessel length and branching, markers of pericyte activation (CD13 + /NG2 + ), and BBB leakage by assessing extravascular albumin and fibrinogen. In non-T2D mice subjected to stroke, the injury significantly increased vascular density, total pericyte density, parenchymal pericyte density, pericyte coverage, and density of activated pericytes in the ipsilateral striatum Fig. 
4 Effect of Empagliflozin on neurogenesis after stroke.Number of Ki67 + cells in subventricular zone (SVZ) (a) and number of DCX + cells in striatum (b) of non-diabetic controls (non-T2D), diabetic mice (T2D-VH) and diabetic mice treated with Empagliflozin (T2D-E) after stroke.Data are presented as mean ± SD.Statistical significance was calculated using two-way ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test.Results were considered statistically significant if p < 0.05.# denotes a difference between the contralateral and ipsilateral hemisphere within the same group.## denotes p < 0.01, ### denotes p < 0.001.non-T2D n = 6, T2D-VH n = 7, T2D-E n = 8 Fig. 5 Effect of Empagliflozin on neuroinflammation after stroke.Iba-1 expression in striatum of non-diabetic controls (non-T2D), diabetic mice (T2D-VH) and diabetic mice treated with Empagliflozin (T2D-E) after stroke.Data are presented as mean ± SD.Statistical significance was calculated using two-way ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test.Results were considered statistically significant if p < 0.05.*denotes a significant difference between non-T2D and T2D-VH in the same hemisphere, # denotes a significant difference between the contralateral and ipsilateral hemisphere within the same group.# and *denote p < 0.05, ### denotes p < 0.001 and #### denotes p < 0.0001.non-T2D n = 6, T2D-VH n = 9, T2D-E n = 9 compared to the contralateral, indicating post-stroke angiogenesis (Fig. 6a-e and Additional file 1: Fig. S3).In T2D-VH stroke mice, total and parenchymal pericyte density, coverage and activation were increased in the ipsilateral striatum (Fig. 6a-e, Additional file 1: Fig. S3).In T2D-E animals, stroke upregulated parenchymal pericyte density and coverage in the ipsilateral striatum compared to the contralateral hemisphere (Fig. 6a-e and Additional file 1: Fig. S3). When comparing ipsilateral hemispheres between groups, two-way ANOVA revealed a significant increase in the total pericyte density in the T2D-VH group compared to the non-T2D group, which was normalized by Empagliflozin treatment (Fig. 6b).Interestingly, groups did not differ in vascular density (Fig. 6c) and had similar pericyte coverage of the vessels (Fig. 6d), implicating that the increased overall pericyte density observed in the T2D-VH group is due to pericytes located in the parenchyma.Indeed, we observed a significant increase in parenchymal pericyte density in the T2D-VH group compared to the non-T2D group (Fig. 6e).Moreover, there was a strong trend towards a decreased parenchymal pericyte density in the T2D-E group compared to the T2D-VH group (p = 0.061), and no difference was detected between non-T2D and T2D-E animals (Fig. 6e).No differences were seen between the three experimental groups within the contralateral striatum, with the exception of increased parenchymal pericyte density in the T2D-VH vs. non-T2D groups, indicating that T2D impacts the balance between perivascular and b Fig. 
6 Effect of Empagliflozin on vascularization after stroke.Confocal images (A) showing the expression in the striatum of non-diabetic controls (non-T2D), diabetic mice (T2D-VH) and diabetic mice treated with Empagliflozin (T2D-E) after stroke of CD13 (red) and PDXL (blue) evaluating vessel density (B), pericyte density (C), pericyte coverage (D) and parenchymal pericyte density (E).White arrows indicate the pericytes that are not associated with the vessels.Data are presented as mean ± SD.Statistical significance was calculated using two-way ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test.Results were considered statistically significant if p < 0.05.*denotes a difference between non-T2D and T2D-VH, § denotes a difference between T2D-VH and T2D-E, # denotes a difference between contralateral and ipsilateral hemisphere within the same group.Scale bar = 50 μm.non-T2D n = 5, T2D-VH n = 5, T2D-E n = 6 parenchymal pericytes (Fig. 6a-e).Pericyte activation was similarly activated in all groups after stroke (Additional file 1: Fig. S3).Taken together, these data indicate that T2D alters parenchymal pericyte density, and that Empagliflozin-treatment can normalize this effect. To assess BBB integrity, we examined the presence of plasma proteins in the brain parenchyma analyzing two different molecular sizes, albumin (~ 65 KDa) and fibrinogen (~ 340 KDa).We observed no significant differences in albumin or fibrinogen extravasation between the groups, (Additional file 1: Fig. S4).In sham-operated animals, treatment with Empagliflozin led to a significant increase in total and parenchymal pericyte density and coverage, whereas no differences were found in BBB integrity between groups (Additional file 1: Fig. S5). Empagliflozin does not improve post-stroke functional recovery in non-T2D mice We next determined whether a post-stroke intervention with Empagliflozin could improve recovery independently of its glycemia-regulating properties.Non-T2D mice were treated daily with either VH or Empagliflozin starting from 3 days after stroke until Empagliflozintreated mice fully recovered (Exp.design Fig. 1b).There was no difference in forepaw grip strength between the groups (Fig. 7), indicating that the improved stroke recovery induced by Empagliflozin in the diabetic study was likely mediated by the anti-T2D properties of Empagliflozin. Discussion The aim of this study was to determine whether the SGLT2i Empagliflozin improves post-stroke recovery in T2D when administered chronically in the post-stroke recovery phase.We demonstrated that Empagliflozin significantly improves stroke recovery, and this effect occurs in association with attenuated hyperglycemia, elevated serum FGF-21 levels and normalization in parenchymal pericyte density in the infarct core.Five weeks after stroke, Empagliflozin-treatment did not affect the production of ketone bodies, post-stroke neurogenesis or inflammation.Moreover, we showed that in non-T2D mice, a post-stroke intervention with Empagliflozin had no effect on stroke recovery. Our experimental design was conceived with the idea to prove potential recovery effects mediated by Empagliflozin independently from acute neuroprotection which has recently been demonstrated [29,54].Therefore, we initiated the treatment only 3 days post-stroke.Indeed, Empagliflozin treatment after stroke improved recovery without affecting infarct size, thus excluding acute neuroprotective effects of Empaglifozin in our study. 
Recent studies have demonstrated that SGLT2i can pass the BBB [55][56][57] and can boost neuronal activity [58][59][60][61][62].However, since Empagliflozin-treatment did not improve stroke recovery in the non-T2D study, it is highly likely that the recovery-effects observed in the diabetic study were mediated by the anti-diabetic properties of the drug and were not due to direct brain effects.Indeed, previous work from our group has shown that a prolonged treatment with the GLP1-agonist Exendin-4 [49] and the DPP-4 inhibitor Linagliptin [50] initiated after stroke, improved stroke recovery in association with normalized glucose metabolism.Interestingly, unlike Exendin-4 and Linagliptin that affect both hyperglycemia and insulin resistance, Empagliflozin specifically attenuated hyperglycemia without affecting insulin sensitivity, indicating that sustained glycemic control post-stroke might be sufficient to improve stroke recovery in diabetes. Post-stroke recovery effects might be associated with the regulation of stroke-induced adult neurogenesis [63] and/or neuroinflammation [64].We have shown in previous studies that the DPP-4 inhibitor Linagliptin enhances the number of stroke-induced DCX + neuroblasts in association with improved stroke recovery, even though T2D per se did not affect this cellular process [50,65].However, in the present study, we found no effect of Empagliflozin on DCX + neuroblasts, suggesting that SGLTi and DPP-4 inhibitors exert their beneficial effects on stroke recovery via different mechanisms of action. Stroke-induced neuroinflammation is a complicated and multifaceted, yet vital process for stroke recovery [40,66].Diabetes disrupts the intricate balance between proand anti-inflammatory responses after stroke, thereby hampering stroke recovery [67].We have recently demonstrated exacerbated neuroinflammation in the poststroke recovery phase of T2D mice after prolonged HFD Fig. 7 The effect of Empagliflozin on functional recovery after stroke in non-T2D mice.Forepaw grip strength of non-T2D mice treated with vehicle (SD-VH) or daily treatment with 10 mg/kg Empagliflozin p.o. (SD-E) after stroke shown as plotted curve.Data are presented as mean ± SD.Statistical significance was calculated using two-way repeated measures ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test and results were considered significant when p < 0.05.SD-VH n = 7, SD-E n = 7 feeding [49][50][51] as well as the effect of different T2D drugs to counteract this effect [49,50].Since SGLT2i have been shown to dampen exacerbated neuroinflammation induced by T2D both in vitro and in vivo [68][69][70], we hypothesized that Empagliflozin might dampen the neuroinflammatory process in the recovery phase after stroke.As expected, we found that stroke increased ipsilateral microglia-infiltration which was significantly higher in the T2D-VH group compared to non-T2D controls.However, Empagliflozin treatment did not affect the amount of Iba-1 + microglia in the ipsilateral hemisphere, suggesting that the beneficial effect of Empagliflozin on stroke recovery was not due to attenuated T2D-induced inflammation, at least not at the 5 weeks post-stroke timepoint, when the mice were sacrificed. 
The positive effect of ketone bodies on the brain is well known [37,71,72].Since SGLT2i increase ketone production [36,73,74] and this mechanism has been proposed to play a role in cardiovascular outcome [75], we hypothesized that this mechanism could be also involved in Empagliflozin-improved stroke recovery.However, serum BHB-levels were not increased in the obese/ T2D-E group after stroke.The increased ketone-production upon SGLT2i-treatment is modest [53,76], suggesting that the effects of Empagliflozin on serum BHB might have been masked by the pronounced tMCAO-induced weight loss inherent to our T2D/stroke model.In addition, ketone bodies production following treatment with SGLT-2i are much more pronounced in T2D patients vs impaired fasted glucose individuals, and our HFD animal model resembles more a mild T2D [77].Taken together, this suggests that it is unlikely that the improved poststroke recovery by Empagliflozin occurs via increased ketone production in this study. Here, we show that the improved stroke recovery in the T2D-E group was associated with elevated post-stroke FGF-21 serum levels. FGF-21 is an important regulator of glucose and lipid metabolism, that has also been shown to have beneficial effects on stroke recovery [78][79][80].In accordance with a recent study by Wang et al., we found that FGF-21-levels were reduced after stroke in the non-diabetic mice [81].Moreover, we demonstrated that this FGF-21 reduction is not affected by T2D.Interestingly, Empagliflozin treatment inhibited this stroke-induced decrease both at 2 and 5 weeks after stroke.This is in line with existing literature indicating that SGLT2i treatment increases plasma FGF-21 levels [52,82,83].Interestingly, FGF-21 has been positively associated with improved stroke recovery, both in pre-clinical and clinical studies [34,84].Furthermore, an intervention with recombinant FGF-21, either acutely or in the chronic phase after stroke, significantly improved recovery in diabetic mice [78,79,85].Therefore, although speculative, our results highlight FGF-21 as a potential mechanism for improved stroke recovery mediated by SGLT2i treatment. Efficient post-stroke angiogenesis and vascular remodeling are crucial for effective stroke recovery [86].T2D disrupts these processes, thereby impairing stroke recovery [41], whereas anti-diabetic treatments can revert aberrant vascular remodeling, thus restoring BBB-integrity [87].Moreover, we recently showed that the poststroke administration of the GLP-1R agonist Exendin-4 restored vascular remodeling after stroke, in association with improved recovery [49].Emerging evidence indicates beneficial effects of SGLT2i on vascularization [23,24].Indeed, SGLT2i improve remodeling of the neurovascular unit in T2D [88] and stroke [29].Similar effects were observed in diabetic mice with a post-stroke administration of recombinant FGF-21 [35].Therefore, we investigated the potential role of Empagliflozin on post-stroke vascular remodeling.Our results show that a post-stroke intervention with Empagliflozin normalizes parenchymal pericyte density in the infarct core in T2D mice. 
Following stroke, angiogenesis and vascular remodeling are essential to restore the ischemic tissue with oxygen and nutrients and therefore favor the recovery of the tissue after stroke [89].In general, enhanced tissue perfusion and increased vessel density are beneficial in recovery; but at the same time, extended angiogenesis might be accompanied by BBB leakage [90][91][92].While we observed clear stroke-induced effects when comparing contralateral and ipsilateral hemispheres, diabetes did not determine relevant effects in terms of vascularization, except for an increase in pericyte density which, interestingly, was normalized by Empagliflozin treatment.The changes in pericyte density were not complemented by alterations in vessel density, pericyte coverage or pericyte activation and were in accordance with the fact that BBB leakage was also not detected, perhaps due to the late time point selected for the analysis after ischemic injury.Since T2D was associated with a higher pericyte density which was not reflected in increased vascular coverage, we assessed the density of parenchymal pericytes.Previous studies in literature report that following a stroke, platelet-derivedgrowth-factor beta (PDGFRß) positive cells (a marker of pericytes) within the infarct core migrate away from the blood vessels into the parenchyma [93][94][95].It has been proposed that these parenchymal PDGFRß + cells are involved in the formation of the fibrotic scar following stroke by depositing extracellular matrix proteins [96].T2D increased parenchymal pericytes density compared to non-T2D controls, and treatment with Empagliflozin normalized this effect.Therefore, our data suggest that Empagliflozin treatment might prevent or resolve this T2D-induced shift in the location of the pericytes from a perivascular to a parenchymal location.The functional significance of this phenomenon is unclear, but a relation to the improved functional recovery cannot be ruled out. SGLT2i efficiently attenuate T2D-induced cardiac fibrosis and oxidative stress, thereby improving cardiac function, prompting these drugs to be implemented for heart failure treatment [97][98][99].Recently, diabetesinduced ROS-production and senescence have been proposed as cellular mechanisms behind this impaired cardiac function [100,101].Interestingly, ischemic stroke induces increased ROS-production and senescence in the brain [102][103][104], and interventions to decrease ROS and senescence can improve neurological function after stroke [105][106][107].Since SGLT2i have been shown to attenuate T2D-induced senescence and ROS production [108][109][110], this could be an additional cellular mechanism behind the improved stroke recovery that should be investigated in future studies. 
There are limitations to the present study that need to be acknowledged.First, an additional timepoint to perform IHC studies would have helped to more thoroughly characterize cellular processes involved in stroke recovery such as neuroinflammation and neurogenesis.In addition, although we showed a positive association between Empagliflozin-induced improvement in stroke recovery and increased FGF-21 levels, we did not address whether this is indeed a causative mechanism of improved functional recovery.In this respect, new studies using Empagliflozin in the presence of FGF-21 antagonists [111] will be needed.Finally, our study demonstrates the efficacy of a post-stroke intervention with SGLT2i to improve recovery in T2D.Although these data are encouraging, they do not provide insight in the potential benefit of a pre-stroke intervention with SGLT2i on stroke recovery in T2D.Of interest in this respect was the recent study of Takashima and colleagues, demonstrating improved neurological recovery with a pre-stroke SGLT2i-intervention in non-diabetic mice [29]. Based on the mechanistic action of SGLT2i in enhancing glucose excretion, which is compensated by an increased hepatic glucose production, we are currently establishing a suitable experimental design to test a pre-stroke intervention with SGLT2i in HFD animals.In particular, the catabolic status of the animals during weight loss after stroke, together with a shift in diet after tMCAO that might impact ketone body generation will also need to be taken into account. Conclusions Our study shows that a post-stroke intervention with the SGLT2i Empagliflozin improves stroke recovery in T2D mice.Moreover, it has recently been shown that SGLT2i-treatment, both in normal and hyperglycemic rodent models [54,112,113], acutely after stroke significantly decreased infarct size and ameliorated neurobehavioral outcome after stroke.Taken together, these data demonstrate additional advantage of SGLT2i-based therapies for patients with T2D, not only to treat diabetes and to reduce associated co-morbidities [28], but potentially also to improve stroke recovery.the manuscript.All authors have read and agreed to the published version of the manuscript. Fig. 3 Fig.3 Effect of stroke and Empagliflozin treatment on serum FGF-21 and BHB concentrations.Serum fibroblast growth factor 21 (FGF-21) (a) and β-hydroxybutyrate (BHB) (c) levels before stroke and at 2 and 5 weeks after stroke in non-diabetic controls (non-T2D) and type-2 diabetic mice (T2D).Serum FGF-21 (b) and BHB (d) levels of T2D mice treated with VH (T2D-VH) and diabetic mice treated daily with 10 mg/kg Empagliflozin (T2D-E) at 2 and 5 weeks after stroke.The grey area indicates the range of pre-stroke levels of Fgf-21 (b) and BHB (d) T2D mice.Data are presented as mean ± SD.Statistical significance was calculated using two-way repeated measures ANOVA followed by Benjamini, Krieger and Yekutieli multiple comparisons test.Results were considered significant if p < 0.05.§ denotes a significant difference between T2D-VH and T2D-E, * denotes a significant difference between non-T2D and T2D, # denotes a significant difference compared to pre-stroke in the same group.*denotes p < 0.05, **, § § and ## denote p < 0.01.Sample size: n = 5-10 per group.At the pre-stroke and intermediate post-stroke timepoint, each data point represents results from serum pooled from 2-3 animals
2024-03-02T06:17:35.375Z
2024-02-29T00:00:00.000
{ "year": 2024, "sha1": "6ddbd84e5ca56abe6ed72361ddb410aea7bbcf74", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "1d79e4e50783579b2a0bd9617bda94dbc69226fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236673688
pes2o/s2orc
v3-fos-license
Comparison of vacuum assisted closure (VAC) and standard therapy in compound fractures Background: A bone fracture is a medical condition where the continuity of the bone is broken. Open fractures usually are high-energy injuries. This, along with the exposure of bone and deep tissue to the environment, leads to increased risk of infection, wound complications, and non-union [1, 2]. Antibiotics, surgical debridement, and internal fixation have improved outcomes of open fracture management in important ways, and it includes primary asepsis, adequate debridement, immobilization, and protection of wounds against disturbance and reinfection [3, 4]. Wound healing is a complex and dynamic process that includes an immediate sequence of cell migration leading to repair and closure. This sequence begins with removal of debris, control of infection, clearance of inflammation, angiogenesis, deposition of granulation tissue, contraction, remodeling of the connective tissue matrix and maturation. When wound fails to undergo this sequence of events, a chronic open wound without anatomical or functional integrity results. Vacuum assisted closure (VAC) is relatively a new technique which hastens granulation tissue formation by speeding up all these parameters [5].Materials and Methods: The present study was Hospital based Prospective comparative study carried out from July 2016 to October 2018, on 90 cases satisfying the inclusion criteria following complete assessment. Patients were assessed by efficacy of both procedures was measured by the time taken by wound be optimal for skin grafting/flap, whether slough and discharge present or not, rate of decrease in size of wound (%) and whether flap is needed or avoided by use of VAC dressingResult: Both group are compare on the basis of type of fracture as per Gustillo and Anderson classification, duration of receiving treatment from initial injury, slough was comparable on day 0 and day 4, frequency of discharge, granulation tissue and size of wound.Conclusion: VAC therapy show final cessation of slow earlier than those treated by standard therapy for fracture management. VAC therapy shows earlier control of discharge, earlier appearance granulation tissue and earlier decrease in size of wound compare to standard therapy. Rate of healing is faster in VAC therapy compared to standard therapy. Earlier optimized covering of wound can be obtained by VAC therapy. Requirement of skin grafting is less in subjects treated with VAC therapy. Minimal complications with complete healing possible with VAC therapy in compound fractures of lower limb. Introduction According to the global burden of disease study of 2010, injury accounted for 10% of deaths worldwide and 11.2% of all disability-adjusted life years (DALYs). A bone fracture is a medical condition where the continuity of the bone is broken.Open fractures represent a spectrum of injuries sharing the common feature of fractures that have communication with the environment [1,2] . Open fractures usually are high-energy injuries. This, along with the exposure of bone and deep tissue to the environment, leads to increased risk of infection, wound complications, and non-union [3,4] . 
Antibiotics, surgical débridement, and internal fixation have improved outcomes of open fracture management in important ways, but the underlying principles for treating open fractures have remained the same since World War I: primary asepsis, adequate debridement, immobilization, and protection of wounds against disturbance and reinfection [3,4] . Wound healing is a complex and dynamic process that includes an immediate sequence of cell migration leading to repair and closure. This sequence begins with removal of debris, control of infection, clearance of inflammation, angiogenesis, deposition of granulation tissue, contraction, remodelling of the connective tissue matrix and maturation. When wound fails to undergo this sequence of events, a chronic open wound without anatomical or functional integrity results. Topical negative pressure therapy (TNP) was developed by Fleischmann in 1993 and was popularized in 1995 as Vacuum assisted closure (VAC) system (kinetic Concepts Inc, San Antonio, Texas, USA). 8 It is known by many pseudonyms-TNP (topical negative pressure), SPD (sub-atmospheric pressure), VST (vacuum sealing technique) and SSS (sealed surface wound suction). It is hypothesized that VAC therapy if used in precise indications hastens wound healing by decreasing tissue edema, reducing tissue bacterial levels, increasing blood flow to the wound and hence hastens granulation tissue formation and results in early wound closure 5 .This study is intended to assess the outcome of wound healing in Compound fractures by VAC therapy and Standard wound therapy Methodology 2.2.1 Wound Preparation for VAC. Any dressings from the wound was removed and discarded. A culture swab was taken and wound was thoroughly irrigated with normal saline. All the necrotic tissues wwre surgically debrided and adequate homeostasis was achieved. Periwound skin was thoroughly dried and prepared. Sterile, openpore foam dressing was then gently placed into the wound cavity. The foam was then sealed with an adhesive drape (loban /opsite) covering the foam and the tubing with at least three to five centimeters of surrounding healthy tissue to ensure a seal. Connecting tube was then applied after making a small opening (3-4 mm) on the drape. Controlled pressure was then uniformly applied to all the tissue on the inner surface of the wound through connecting tube connected to the negative pressure central suction delivering an intermittent negative pressure of -125mmHg. The dressing was changed every 4 th day. Wound Preparation for Standard wound therapy Any dressing from the wound was removed and discarded. A culture swab was taken and wound was thoroughly irrigated with normal saline. All the necrotic tissues were surgically removed and surgical debridement will be done. Daily dressings was done by conventional methods, that is, cleaning with Hydrogen peroxide and Betadine Normal saline and then dressing the wound with povidone iodine (5%) or Neosporin ointment. Outcome record: Results were made on the basis of the analysis of the following between the two groups:- The efficacy of both procedures was measured by the time taken by wound be optimal for skin grafting/flap cover of  Whether slough and discharge present or not.  Rate of decrease in size of wound (%).  Whether flap is needed or avoided by use of VAC dressing. Results Comparison of type of fracture (Gustillo and Anderson classification) between study groups was performed between two study groups using chi square test. 
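Before the results of these comparisons are reported below, a minimal sketch of the statistics involved may be useful: a chi-square test on a fracture-grade contingency table, a Student's t test on the delay to therapy, and the percentage reduction in wound size used as the healing-rate measure. All numbers are placeholders, not study data.

# Minimal sketch of the group comparisons used in this study: chi-square for
# categorical outcomes (e.g. Gustilo-Anderson grade, presence of slough or
# discharge), Student's t test for continuous ones, and the percentage
# reduction in wound size. All numbers below are placeholders, not study data.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Rows: VAC group, standard therapy group; columns: grades I, II, IIIA, IIIB
grade_counts = np.array([[6, 14, 15, 10],
                         [5, 16, 14, 10]])
chi2, p_value, dof, _ = chi2_contingency(grade_counts)
print(f"fracture-grade distribution: chi2 = {chi2:.2f}, p = {p_value:.3f}")

# Duration between injury and start of therapy (days), compared by t test
vac_delay = [2, 3, 1, 4, 2, 3]
swt_delay = [3, 2, 2, 4, 3, 2]
print("delay to therapy: p =", round(ttest_ind(vac_delay, swt_delay).pvalue, 3))

def percent_reduction(area_day0, area_day12):
    """Rate of decrease in wound size (%) between two dressing changes."""
    return 100.0 * (area_day0 - area_day12) / area_day0

print("wound-size reduction:", round(percent_reduction(36.0, 21.5), 1), "%")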
No significant difference was detected between two groups indicating that both groups were matched for distribution of type of fracture. Comparison of Duration between Injury and VAC/SWT application was performed using student's t test. No significant difference was observed between mean values of duration indicating that both groups were matched for the parameters. Comparison of slough at different interval from surgery between two study groups was performed using Chi-square test. No significant difference in frequency distribution was present at day 0, and day 4 but slough was found to be significantly higher at say 8 in VAC therapy group. Further frequency of Slough was found to be lower in VAC therapy at 12th day though difference failed to reach statistical significance Comparison of Discharge at different interval from day of surgery between study groups was performed using Chi square test. Frequency of discharge was found to be significantly lower in VAC therapy group at day 0, day 4 but on day 8 and day 12 discharge was found to be significantly lower in standard therapy group. Comparison of granulation tissue at different interval from day of surgery between study groups was performed using Chi square test. While granulation tissue was not significantly different at day 0 between two study groups, significantly higher Frequency of granulation tissue was noted at day 4 but on day 8 and day 12 in VAC group. Comparison of size of wound at different interval from day of surgery between study groups was performed size of wound was found to be significantly smaller in VAC group at day 0, day 4, day 8 and day 12 compared to that in standard therapy group. Discussion Our two study groups VAC and Standard therapy were matched for age distribution as well as gender distribution. These findings indicate that study groups were comparable at baseline as per basic demographic status. Aging affects the inflammatory response during fracture healing through senescence of the immune response and increased systemic pro-inflammatory status. Important cells of the inflammatory response, macrophages, T cells, mesenchymal stem cells, have demonstrated intrinsic agerelated changes that could impact fracture healing. Additionally, vascularization and angiogenesis are impaired in fracture healing of the elderly. Finally, osteochondral cells and their progenitors demonstrate decreased activity and quantity within the callus. Age-related changes affect many of the biologic processes involved in fracture healing. However, the contributions of such changes do not fully explain the poorer healing outcomes and increased morbidity reported in elderly patients [8] . Clinically, gender and stability affect bone defect healing simultaneously. It is unclear whether gender and stability interact in some synergistic or deleterious way. Knowledge of synergistic or independent effects of these factors might suggest a gender-related modification in the stability of clinical fracture devices that could possibly improve bone healing outcome. Clinically, gender and stability of fracture fixation have been reported to independently influence bone regeneration but it remains unclear whether these factors interact in a deleterious or synergistic way. 
Therefore, the purposes of the present pilot study were to generate research questions by comparing bone defect healing between middleaged male and female rats under the influence of variable fixation stability using analysis from (1) mechanical properties of the callus at 6 weeks; (2) bony bridging of the defect; (3) in vivo callus development over time as well as (4) to determine callus mineralization, size, geometry, and microstructure [9] . Though classically known fact that female are at higher risk of low bone density, fractures are seen at higher bone density in males. Further healing process as is dependent on bone mineral density, is slower in females comparably [10] . No significant difference was detected between two groups regarding type of fracture (Gustillo and Anderson classification) between study groups indicating that both groups were matched for distribution of type of fracture. Also No significant difference was observed between mean values of duration for start of therapy indicating that both groups were matched for duration of start of therapy from trauma. Gustilo-Anderson classification that has become the most commonly used system for classifying open fractures. Like many classification systems, the purpose of the Gustilo-Anderson schema is to provide a prognostic framework that guides treatment and facilitates communication among surgeons and clinician-scientists. Decades of research correlating the Gustilo-Anderson type with infection risk have helped refine surgical protocols, change antibiotic recommendations, and determine appropriate timing for interventions including débridement, internal fixation, and soft tissue coverage. As a widely known and relatively straightforward system, which has become the standard of classifying open fractures, the Gustilo-Anderson classification also is useful for education of residents and other trainees in the treatment of patients with orthopaedic trauma. Thus matching of groups with respect to these classes indicate that the bias of initial status of wound with possible varied outcome can be well taken care of. [11] When flow of slough was compared between subjects treated with VAC therapy and standard therapy, no significant difference in frequency distribution was present at day 0, and day 4 but slough was found to be significantly higher at day 8 in VAC therapy group. Further frequency of Slough was found to be lower in VAC therapy at 12th day though difference failed to reach statistical significance. Thus it was observed in our study that though there is momentary increase in slough at one week post op, the slough decreases later on in comparison to standard therapy and slough regression is faster in VAC therapy [12] . Frequency of discharge was found to be significantly lower in VAC therapy group at day 0, day 4 but on day 8 and day 12 discharge was found to be significantly lower in standard therapy group. This indicates that in VAC therapy, discharge is less in initial days as compared to standard therapy. Further the discharge is though rapidly controlled in VAC therapy, final decrease in discharge is found in standard therapy. In our study granulation tissue was not significantly different at day 0 indicating that at baseline these two groups were comparable. Though significantly higher Frequency of granulation tissue was noted at day 4 in standard therapy, on day 8 and day 12 granulation tissue frequency was higher in VAC group though the granulation tissue was not found to be adequate. 
This indicates that initial healing was with appearance of granulation tissue was faster in standard therapy, but VAC therapy shows increased healing potential in later stages at day 8 and day 12. In our study Comparison of size of wound at different interval from day of surgery between study groups was performed size of wound was found to be significantly smaller in VAC group at day 0, day 4, day 8 and day 12 compared to that in standard therapy group. As the baseline size of wound was smaller in VAC groups it is a source of bias which can be said to be following the same expected trends in further progression of wound during treatment. Significantly higher frequency of skin grafting was needed in standard therapy to close the wound compared to wounds treated by VAC therapy, though at least part of this can be attributed to larger mean size of wound in standard therapy group. The time duration taken for formation of healthy granulation tissue was less as claimed by authors. But surprisingly, author have not used any control group in the study [13] . The results of our study was in line with this study showing appearance of granulation tissues earlier in VAC group though granulation tissue were not adequate. At 4 th day though granulation tissue were found to be higher in standard therapy group. Later on in process VAC group took over. The mechanism of VAC therapy is very simple. An open-cell structured foam is cut according to size and shape of the wound and then it is kept on the wound bed, a suction drain with perforations only in the end of the tube is laid on the foam. Then the entire wound is then sealed with an opposite or a transparent membrane which is adhesive then the other end of the suction tube is connected to a vacuum machine, once the wound is sealed and the machine is switched on the fluid from the wound is drawn through the foam into a canister which can be disposed subsequently. By this the edema from the wound is removed, new blood vessels are formed (angiogenesis) & hence leads to formation of healthy granulation bed & all this leads to earlier skin cover procedures of the wounds [14] . Gupta U et al performed a study to analyse and compare the results of vacuum assisted closure therapy and standard wound therapy in management of compound fractures. 30 patients having compound fractures upto grade IIIB (Gustilo and Anderson classification) were randomly treated either using SWT or VAC therapy. After initial wound debridement and provisional fracture fixation, therapy was started and continued till the wound got optimized for coverage either by split skin graft or flap. Author observed that time to optimized coverage in VAC group was shorter also the mean rate of decrease in wound size was higher in VAC therapy compared to standard wound therapy. But author have not mentioned anything about the significance of difference between two groups [15] . Though the significance is not mentioned the trends seen in this study by a large matched to those in our study. Our study is better only, owing to having achieved statistical significance. Our results are again consistent with the findings of study done by Morykwas et al. Sinha et al. and Banwell et al. [16] who also found that VAC therapy helps in reducing the size of wound at much faster rate as compared to SWT when applied to wounds resulting from open fractures. Our results are again somewhat comparable with the findings of study done by Arti et al. 
who also found less time taken by wounds treated with VAC therapy to get optimized for coverage as compared to SWT [17]. Gupta K et al. also carried out a prospective randomised study to compare the rate of infection, primary wound coverage, hospital stay and healing of soft tissue injury associated with open musculoskeletal injuries. Thirty patients with open musculoskeletal injuries were included in this study. They were divided into two groups of 15 each, Group A (VAC) and Group B (sterile dressing group). All these patients had undergone wound debridement and fracture fixation. This was followed by application of Vacuum Assisted Closure (VAC) for Group A and sterile dressings for Group B patients. The infection rate of these two groups was analysed by clinical signs and symptoms. The authors noted that primary wound coverage could be achieved earlier in the VAC group. Also, hospital stay was shortest in the VAC group and wound healing was faster when compared to the group receiving standard therapy [18].

Conclusion
Subjects treated by VAC therapy show final cessation of slough earlier than those treated by standard therapy for fracture management. VAC therapy shows earlier control of discharge compared to standard therapy, though the final decrease in discharge is greater in standard therapy. Granulation tissue appears earlier in standard therapy, but sustained, noticeable granulation tissue is favored by VAC therapy. VAC therapy leads to an earlier decrease in the size of the wound. The rate of healing is faster with VAC therapy compared to standard therapy. Earlier optimized coverage of the wound can be obtained with VAC therapy. The requirement for skin grafting is less in subjects treated with VAC therapy. Minimal complications with complete healing are possible with VAC therapy in compound fractures of the lower limb.
2021-08-03T00:04:12.476Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "8049388abda3f4065c7b415d3b976a5c796c19db", "oa_license": null, "oa_url": "https://www.orthopaper.com/archives/2021/vol7issue2/PartE/7-2-60-408.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f37c7d81cfc993a095e03530a13c637d275e6403", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
237563074
pes2o/s2orc
v3-fos-license
Spherically complete models of Hensel minimal valued fields We prove that Hensel minimal expansions of finitely ramified Henselian valued fields admit spherically complete immediate elementary extensions. More precisely, the version of Hensel minimality we use is $0$-hmix-minimality (which, in equi-characteristic $0$, amounts to $0$-h-minimality). Question 0. If K is definably spherically complete, does it have an (actually) spherically complete elementary extension L? Can L additionally be taken to be an immediate extension of K? If K has equi-characteristic 0 and we work in the pure valued field language (say, the ring language together with a predicate for the valuation ring), then the answer to Question 0 is yes-and-yes: In that case, K is Henselian, and then Ax-Kochen/Ershov implies that its maximal immediate extension is an elementary extension.In particular, given a Henselian valued field K of equi-characteristic 0, a model theorist may assume without loss that K is spherically complete, by passing to an elementary extension.(We use this crucially in [BWH22].) The aim of this note is to prove that Question 0 also has a positive answer for certain expansions of the valued field language on K, namely when the structure on K is 0-h-minimal.Recall that 0-h-minimality is an analogue of o-minimality for valued fields.More precisely, in [CHRK22, Definition 1.2.3], a whole family of such analogues are introduced, called "Hensel minimality" in general, where 0-h-minimality is the weakest of them.Note that Question 0 was already known before to have a positive answer in some (other) cases; see Section 4. We also obtain a positive answer in mixed characteristic under the assumption that K is finitely ramified.More precisely, in mixed characteristic, there are several variants of 0-hminimality; the one we use is 0-h mix -minimality in the sense of [CHRKV21, Definition 2.2.1].Note that among all the variants of Hensel minimality introduced in [CHRKV21] (called ℓh ⋆ -minimality for ℓ ∈ N ∪ {ω} and ⋆ ∈ {mix, coars, ecc}), the only ones not currently known to imply 0-h mix -minimality are 0-h coars -minimality and 0-h ecc -minimality. 1 Note also that 0-h mix -minimality is equivalent to 0-h-minimality in equi-characteristic 0. Assuming finite ramification in the mixed characteristic case seems natural: 0-h mix -minimality does not imply definable spherical completeness in general (see Example 1.5), but it does if we assume finite ramification (see Proposition 1.4). To summarize, our main result is the following: Theorem 1.Let K be a characteristic 0 valued field, considered as a structure in a language expanding the valued field language, such that, either • The residue characteristic of K is 0 and Th(K) is 0-h-minimal; or • K is finitely ramified, of finite residue characteristic, and Th(K) is 0-h mix -minimal.Then K has an elementary extension L ≻ K which is immediate and spherically complete (and hence maximal). By Remark 3.2, passing from K to L preserves saturation, so we also obtain the following: Corollary 2. Given any cardinal κ, any K as in Theorem 1 has a κ-saturated elementary extension that is moreover spherically complete. 
The notion of 0-h-minimality is defined by limiting the complexity of definable subsets of the field (in an analogy to o-minimality).We refer to [CHRK22] for details, particularly Definition 1.2.3 and Section 1.2 for the main definitions, and to [CHRKV21], particularly Section 2, for the mixed characteristic versions.While we do not recall the original definitions of those minimality notions, Lemma 1.1 below may be taken as a definition of 0-h-minimality and Lemma 1.2 may be taken as a definition of 0-h mix -minimality. Many examples of valued fields with 0-h-minimal and 0-h mix -minimal theories are given in [CHRK22, Section 6].Those in particular include all Henselian valued fields of characteristic 0 in the valued field language, expansions by various kinds of analytic functions, and the power bounded T -convex valued fields from [DL95] (see Section 4). Notations and Assumptions We use notation and conventions as in [CHRK22, Section 1.2]: • Given a valued field K, we write |x| for the valuation of an element x ∈ K, we use multiplicative notation for the value group, which is denoted by Γ × K , and we set for the corresponding leading term structure (recall that B <λ (1) is a subgroup of the multiplicative group K × ) and rv λ : K → RV λ,K for the canonical map.In the case λ = |1|, we will also omit the index λ, writing rv : K → RV K . • We will freely consider Γ K as an imaginary sort, and also RV λ,K , when λ is ∅-definable. (Actually, the only λ relevant in this note are of the form |p ν |, where p is the residue characteristic.) The notions of 0-h-minimality (from [CHRK22]) and 0-h mix -minimality (from [CHRKV21]) are defined by imposing conditions on definable subsets of K. Instead of recalling those definitions, we cite some direct consequences as Lemmas 1.1 and 1.2.Those are simply family versions of the definitions of 0-h-minimality and 0-h mix -minimality.(The actual definitions are obtained by restricting those lemmas to the case k = 0.) Lemma 1.1 ([CHRK22, Proposition 2.6.2]).Suppose that K is a valued field of equi-characteristic 0 with 0-h-minimal theory, that A ⊆ K is an arbitrary (parameter) set and that W ⊆ K ×RV k K is (A ∪ RV K )-definable.Then there exists a finite (without loss non-empty) A-definable set C ⊂ K such that, for all x ∈ K, the fiber W x ⊆ RV k K only depends on the tuple (rv(x−c)) c∈C ; i.e., if rv(x − c) = rv(x ′ − c) for all c ∈ C, then W x = W x ′ .Lemma 1.2 ([CHRKV21, Corollary 2.3.2]).Suppose that K is a valued field of mixed characteristic with 0-h mix -minimal theory, that A ⊆ K is an arbitrary (parameter) set, and that )-definable, for some integer m ′ ≥ 1.Then there exists a finite A-definable set C ⊂ K and an integer m ≥ 1 such that, for all x ∈ K, the fiber Note that if in Lemma 1.2, one takes K of equi-characteristic 0, then |m ′ | = |m| = 1, so one just gets back Lemma 1.1.In a similar way, equi-characteristic 0 and mixed characteristic could be treated together in this entire note.However, since the latter is more technical, we will often explain the equi-characteristic 0 case first. 
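To make the leading-term notation concrete, the following illustrative aside (not part of the paper) spells out rv and rv_λ in the special case K = Q_p.

For $K = \mathbb{Q}_p$, write a nonzero $x$ as $x = p^{v}\,(a_0 + a_1 p + a_2 p^2 + \cdots)$ with $a_0 \in \{1, \dots, p-1\}$, so that $|x| = |p|^{v}$. Then
\[
\operatorname{rv}(x) = \operatorname{rv}(y) \iff |x - y| < |x| \iff v(x) = v(y) \text{ and } a_0(x) = a_0(y),
\]
so $\mathrm{RV}_K$ records the valuation together with the leading $p$-adic digit; and for $\lambda = |p^{\ell}|$,
\[
\operatorname{rv}_{\lambda}(x) = \operatorname{rv}_{\lambda}(y) \iff |x - y| < \lambda\,|x| \iff v(x) = v(y) \text{ and } a_i(x) = a_i(y) \text{ for } 0 \le i \le \ell,
\]
so $\mathrm{RV}_{\lambda,K}$ additionally remembers the digits $a_1, \dots, a_\ell$.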
Any field K as in Theorem 1 is definably spherically complete, i.e., every definable (with parameters) chain of balls B q ⊆ K has non-empty intersection, where q runs over an arbitrary (maybe imaginary) definable set Q.In equi-characteristic 0, this is just [CHRK22, Lemma 2.7.1], and in general, it follows a posteriori from Theorem 1.We nevertheless give a quick separate proof in the mixed characteristic case: Proposition 1.4.If K is a finitely ramified valued field of mixed characteristic with 0-h mixminimal theory, then it is definably spherically complete. <γ of open radius γ that contains some ball B q , for some q ∈ Q, if it exists; let Q ′ ⊆ Γ K be the set of those γ for which B ′ <γ exists.Since the chains (B q ) q∈Q and (B ′ <γ ) γ∈Q ′ have the same intersection, by working with the latter, we can, and do, assume without loss that Q ⊆ Γ K and that every B q is an open ball of radius q. Pick a finite set C ⊂ K (using Lemma 1.2) and an integer m ≥ 1 such that for x ∈ K, the set Suppose now that there exists a q ∈ Q such that B q ∩ C = ∅.(Otherwise, C contains an element of the intersection of all B q and we are done.)We claim that every q ′ ∈ Q satisfies q ′ ≥ q • |m|.Since K is finitely ramified, this claim implies that below B q , the chain is finite and hence has a minimum (which is then equal to the intersection of the entire chain). To prove this claim, suppose for a contradiction that q ′ < q • |m|.Pick any x ′ ∈ B q ′ and any for all c ∈ C, contradicting our choice of C (and the fact that x ′ ∈ B q ′ and x / ∈ B q ′ ). Note that in mixed characteristic, the assumption of finite ramification is really necessary to obtain definable spherical completeness, as the following example shows. Example 1.5.Let K be the algebraic closure of Q, considered as a valued field with the p-adic valuation.Fix any elements a n ∈ K (n ∈ N ≥1 ) with |a n | = λ n := |p| 1−1/n and set a I := i∈I a i for I ⊂ N ≥1 finite.The balls B ≤λn (a I ) (n ∈ N ≥1 , I ⊂ {1, . . ., n}) form an infinite binary tree (with B ≤λn (a I ) containing B ≤λ n+1 (a I ) and B ≤λ n+1 (a I∪{n} )), so since K is countable, there exists a chain B n = B ≤λn (b n ) (where b n = a In for suitable I n ) which has empty intersection.We fix such a chain. Since K is henselian, it is ω-h ecc -minimal by [CHRK22, Corollary 6.2.7] and hence in particular 0-h mix -minimal (see the footnote on p. 2).Since each B n has a radius between 1 and |p|, it is the preimage of a subset of the residue ring O K /B <|p| (0) ⊂ RV |p| , so we can turn the chain (B n ) n∈N ≥1 into a definable family by expanding the language by a predicate on RV 2 |p| (e.g., {(rv |p| (c), θ(n)) | n ∈ N ≥1 , c ∈ B n }, for an arbitrary fixed injective map θ : N ≥1 → RV |p| ).By [CHRKV21, Proposition 2.6.5],K stays 0-h mix -minimal when expanding the language by this predicate.However, it is not definably spherically complete in this expansion, as witnessed by the (now definable) chain (B n ) n∈N ≥1 . Lemmas First we recall briefly a basic well-known lemma concerning the relationship between rv λ and res λ , where res λ : Hence, as res λ is a ring homomorphism, we have res λ (a) = res λ (a ′ ). Lemma 2.2.Suppose K is a valued field of characteristic 0. Let A ⊂ K be a finite subset of K. 
• In case K has finite residue characteristic p, let ℓ be any natural number such that #A < p ℓ and set λ := |p ℓ |; • Otherwise, when K has residue characteristic 0, set λ := 1.In either case, for any a, a ′ ∈ A, Proof.In case A is empty or a singleton, the lemma is trivial.So we continue under the assumption that #A ≥ 2. To each a ∈ A, associate the finite subset D a ⊆ RV λ defined by for λ as in the statement of the lemma.Suppose, towards a contradiction to the lemma, that there are a 1 = a 2 in A such that D a 1 = D a 2 .Without loss of generality, we assume that a 1 − a 2 = 1.This assumption can be made without loss as rv λ is multiplicative; so dividing every element of A by a 1 − a 2 allows us to reduce to the case where a 1 − a 2 = 1. Claim.Assuming the above, in particular that a 1 = a 2 but D a 1 = D a 2 , we find, for every natural number i ≥ 3, an element a i in A, satisfying (1) (Note that the condition already holds for i = 1, 2.) Proof of the claim.Assume inductively that for some i ∈ N we have found such an a i ∈ A. By the definition of D a 1 and the assumption that Therefore there exists some a i+1 ∈ A such that rv λ (a 2 − a i+1 ) = rv λ (a 1 − a i ).First note that this implies |a 2 − a i+1 | = |a 1 − a i | ≤ 1; the last inequality by property in part 1 of the claim for a i .Then, applying the ultrametric inequality, we calculate This establishes that a i+1 satisfies part 1 of the claim.Now by Lemma 2.1, the fact that rv λ (a 2 − a i+1 ) = rv λ (a 1 − a i ) implies that res λ (a 2 − a i+1 ) = res λ (a 1 − a i ).By the inductive assumption on a i we have res λ (a 1 − a This establishes that a i+1 also satisfies part 2 of the claim.Hence, by induction, we have proved the claim. (Claim) To finish the proof, it remains to verify, for some N > #A, that all the elements res λ (0), . . ., res λ (N − 1) are distinct.Since then, part 2 of the claim implies that a 1 , . . ., a N are distinct, too, contradicting N > #A; the contradiction going back to the assumption that If K is of residue characteristic 0 then we have set λ = 1, and clearly the elements of N ⊆ K have different residues in the residue field.In case K has finite residue characteristic p, recall that we chose ℓ so that #A < p ℓ =: N and that we have set λ := |p ℓ |.In particular, we have B <λ (0) ⊆ p ℓ O K , so 0, 1, ..., p ℓ − 1 have different residues in the residue ring O K /B <λ (0).This finishes the proof of the lemma. (Lemma 2.2) Definition 2.3.Suppose that K ⊆ M are valued fields.An element α ∈ M is called a pcl over K (which stands for "pseudo Cauchy limit") if for all a ∈ K there exists an Note that if α is a pcl over K, then we can find an a ′ ∈ K with |α − a ′ | < |α − 0| and hence rv(α) = rv(a ′ ).In particular, we have rv(α) ∈ RV K . Lemma 2.4.Assume K is an expansion of a valued field of characteristic 0 and either: • K has residue characteristic 0 and Th(K) is 0-h-minimal; or • K is finitely ramified with finite residue characteristic and Th(K) is 0-h mix -minimal.For K of residue characteristic 0, set λ := 1; in case K has finite residue characteristic p, let ℓ be any non-negative integer and set λ := |p ℓ |.Suppose that • M is an elementary extension of K, • α ∈ M is a pcl over K, and x∈M is a ∅-definable family of subsets of RV n λ,M (for some n).Then there exists some a ∈ K such that W α = W a .In particular, if the language contains constants for all elements of K, then W α is ∅-definable. 
Proof.First assume that K is of equi-characteristic 0 and Th(K) = Th(M ) is 0-h-minimal.By applying Lemma 1.1 in the valued field M with the parameter set A = ∅, we obtain a finite ∅-definable subset C of M such that the fiber W x (for x ∈ M ) only depends on the tuple (rv(x − c)) c∈C .As C is finite and ∅-definable, and as K is an elementary submodel of M , we have C ⊂ K. As C ⊂ K and α is a pcl over K, for every c ∈ C, there exists a ′ ∈ K such that |α − a ′ | < |α − c|.Hence as C is finite, there exists some a ∈ K such that for all c ∈ C we have rv(a − c) = rv(α − c) .Hence by our choice of C, having the defining property given in Lemma 1.1, we have that W a = W α as required. Suppose now that K has finite residue characteristic p and Th(K) = Th(M ) is 0-h mixminimal.Let λ := |p ℓ | for some integer ℓ ≥ 0 as in the statement of the lemma.Now apply Lemma 1.2 in the valued field M to the ∅-definable family W x .This provides a finite ∅-definable set C ⊂ M and the existence of an integer m ≥ 1 such that the fiber W x (where x ∈ M ) only depends on the tuple (rv |m| (x − c)) c∈C ; so fix such an integer m.Again, as C is finite and ∅-definable, and as K is an elementary submodel of M , we have C ⊂ K. Take any c ∈ C. As α is a pcl over K, for any positive integer s, there is a finite sequence u 1 , ..., u s of elements of (Lemma 2.4) Spherically complete models In this section, we prove the main result, Theorem 1.Throughout this section, we fix the valued field K and work in an elementary extension M ≻ K (which we will at some point assume to be sufficiently saturated).Note that a field extension L of K is an immediate extension if and only if RV L = RV K . We will deduce Theorem 1 from the following proposition. Proposition 3.1.Suppose that K ≺ M are as in Theorem 1 (of equi-characteristic 0 and 0-h-minimal, or of mixed characteristic, finitely ramified, and 0-h mix -minimal).Let α ∈ M be a pcl over K (see Definition 2.3) and set L := acl VF (K, α).Then we have L ≻ K and Here and in the following, acl VF means the field-sort part of the (relative model theoretic) algebraic closure inside the fixed extension M ; we also use dcl VF in a similar way. Proof of Proposition 3.1.In the following, we work in the language with all elements from K added as constants, so dcl VF (∅) = K and L = acl VF (α).Note that adding constants from the valued field to the language preserves 0-h-minimality and 0-h mix -minimality, by [CHRK22, Theorem 4.1.19]and [CHRKV21, Lemma 2.3.1].(Alternatively, since we claimed that we only use Lemmas 1.1 and 1.2 as definitions, note that the statements of these lemmas permit adding constants.) We will prove the following (in this order): Then every subset of (RV µ,M ) n (for any n) that is definable with parameters from L is ∅-definable. Then Claim B implies that RV L = RV K , since any element ξ ∈ RV L \ RV K would provide a subset {ξ} of RV 1,M that is definable with parameters from L but is not ∅-definable.Claim C implies that K ≺ L. Thus upon proving the claims above, we will be done. Proof of Claim A. Suppose that β ∈ L and that the algebraicity of β over {α} is witnessed by the formula φ(α, y).Let A be the finite set φ(α, M ), which is a subset of L by the choice of L. 
Depending on the residue characteristic of K, fix λ as in the statement of Lemma 2.2 (applied to this set A).Given any b ∈ A, we consider the (finite) set of "RV λ differences" By Lemma 2.2, different b satisfying φ(α, b) yield different finite sets D b ⊆ RV λ,L .So β can be defined by a formula (a priori over L) stating that φ(α, y) ∧ D y = D β , which uses the (finitely many) elements of D β ⊆ RV λ,L as additional parameters.To prove Claim A, it therefore suffices to check that D β ⊆ RV λ,K after all; from which it follows that β can be defined over K ∪ {α}. Subclaim.D β ⊆ RV λ,K .Now, with x running over M , consider the ∅-definable family of sets Each such W x is a subset of RV λ,M .Note that D β ⊆ W α , which is clear from their respective definitions.Now we apply Lemma 2.4 to the ∅-definable family W x to obtain that W α is ∅-definable.As acl VF (∅) = K (and W α is finite), we therefore have that W α ⊆ RV λ,K .So in particular D β ⊆ RV λ,K , which establishes the subclaim. (Claim A) Proof of Claim B. By Claim A, any set definable over L is in fact {α}-definable.So take any subset W α of (RV µ,M ) n that is definable over L and let ψ(α, y) be a definition for it, where ψ(x, y) is a formula over ∅.Now consider the ∅-definable family of sets W x := ψ(x, M ) in M .Applying Lemma 2.4 to this family yields that, for some a ∈ K, we have W α = W a , and hence is ∅-definable, as required. (Claim B) Proof of Claim C. We need to verify that every non-empty subset Y ⊆ M that is definable with parameters from L already contains a point of L. From the 0-h-minimality (or 0-h mix -minimality assumption), we obtain that there exists a finite L-definable set C = {c 1 , . . ., c r } ⊂ M such that Y is a union of fibers of the map for a suitable µ.Indeed, if K is of equi-characteristic 0 and Th(K) = Th(M ) is 0-h-minimal, then Lemma 1.1 (applied to Y considered as a subset of M × RV 0 M ) provides such a C with µ = 1.If K is of mixed characteristic and Th(K) = Th(M ) is 0-h mix -minimal, then we use Lemma 1.2 instead, which yields the analogous statement with µ = |p ν | for some integer ν ≥ 1. Fix C and µ for the rest of the proof; also fix an enumeration of C and the corresponding map ρ as above.Since L = acl VF (L), we have C ⊆ L, so ρ is L-definable, and so is the image ρ(Y ) ⊆ (RV µ,M ) r of Y .Hence by Claim B, that image is ∅-definable.By assumption Y = ∅, so we also have ρ(Y ) = ∅; as K is an elementary substructure of M , ρ(Y ) ∩ (RV µ,K ) r is non-empty, too. Choose any ξ = (ξ 1 , . . ., ξ r ) ∈ ρ(Y ) ∩ (RV µ,K ) r in this intersection.Since Y is a union of fibers of ρ, to show that L ∩ Y is non-empty, it suffices to prove that the preimage of ξ in M , has non-empty intersection with L. Each of the finitely many sets B i := c i + rv −1 µ (ξ i ) is a ball and the intersection of all of them is non-empty, since ξ ∈ ρ(Y ) ⊆ ρ(M ).By the ultrametric inequality, this intersection is equal to one of those balls, say B j .Since ξ j ∈ RV µ,K , there exists a ∈ rv −1 µ (ξ i ) ∩ K. Thus we obtain the desired element As explained after the statements of the claims, the proposition follows. (Proposition 3.1) We now conclude with the proof of Theorem 1. Proof of Theorem 1. 
Fix some (#K) + -saturated elementary extension M ≻ K.By Zorn's Lemma, there exist maximal elementary extensions N ≻ K satisfying N ≺ M and such that RV N = RV K .Fix N to be one of them.The rest of the proof consists in showing that such an N is spherically complete (using Proposition 3.1).Suppose, towards a contradiction, that (B i ) i∈I is a nested family of closed balls in N such that i∈I B i = ∅.Being nested, this family of balls defines a partial type over N .Without loss, we assume that all the B i have different radii, so there are at most as many balls as the cardinality of the value group of N .As N is an immediate extension of K, the value group of N is the same as the value group of K. Therefore M is sufficiently saturated to contain a realisation α ∈ M of this partial type.This α is a pcl over N , since for every a ∈ N there exists an i ∈ I with a / ∈ B i ; hence for any a ′ ∈ B i , we have |α − a ′ | < |α − a|.Now Proposition 3.1 (applied to N ≺ M ) implies that L := acl VF (N, α) is a proper elementary extension of N in M satisfying RV L = RV N = RV K , contradicting the maximality of N .We conclude instead that L := N is spherically complete. (Theorem 1) As a further model theoretic addendum, we add that the spherically complete elementary extension L obtained above is also at least as saturated as K. Remark 3.2.Suppose K satisfies the assumptions of Theorem 1 and is κ-saturated.Then any immediate spherically complete elementary extension N ≻ K is also κ-saturated. Proof.We write the proof in the mixed-characteristic case, which is easily simplified to the equi-characteristic 0 context.For ease of notation we write RV • := n∈N RV |n| .Note that by our assumption of finite ramification, RV K = RV N implies RV •,K = RV •,N .Indeed, this follows from the existence of natural short exact sequences of multiplicative groups k × ֒→ (RV λ ′ ,K \{0}) ։ (RV λ,K \{0}) when λ ′ is a predecessor of λ in the value group, and where k is the residue field of K. Fix a κ + -saturated and strongly κ + -homogeneous M ≻ N .Suppose we are given a type p over a set E ⊆ N of parameters of cardinality less than κ.Without loss, E = acl VF (E).Choose a realization ε ∈ M of p. Fix some enumeration of E × N and let ξ := rv |n| (ε − e) e∈E,n∈N ∈ RV κ ′ •,M (for some κ ′ < κ) be the sequence of leading term differences from ε to E. Let q := tp(ξ/E) be its type.By stable embeddedness of RV • , each formula ψ ∈ q is equivalent to a formula ψ ′ with parameters from RV •,N = RV •,K , so we find a set E ′ ⊂ RV •,K of cardinality ≤ #E < κ such that q ′ := tp(ξ/E ′ ) implies tp(ξ/E).Since K is κ-saturated, q ′ is realized in RV •,K .Pick such a realization ξ ′ , let σ ∈ Aut(M/E) be an automorphism sending ξ to ξ ′ , and set ε ′ := σ(ε).Note that we have rv |n| (ε ′ − e) ∈ RV •,K for every e ∈ E and every n ∈ N. Now by 0-h mix -minimality, our type p = tp(ε ′ /E) is implied by {rv |n| (x − e) = rv |n| (ε ′ − e) : e ∈ E ∧ n ∈ N}.Indeed, for each formula in p, the set C provided by Lemma 1.2 lies in E = acl VF (E).The formulas rv |n| (x − e) = rv |n| (ε ′ − e) define a chain of nested balls, so they have non-empty intersection in N .Any point in this intersection is a realization of p. Related questions Any power-bounded T -convex valued field K in the sense of L. van den Dries and A. H. Lewenberg [DL95] is 1-h-minimal by [CHRK22,Theorem 6.3.4] and hence also 0-h-minimal, so Theorem 1 applies.However, this case is already given by E. 
Kaplan [Kap21, Corollary 1.12], and Kaplan's result establishes uniqueness (up to the relevant notion of isomorphism over K) of the spherically complete elementary extension L in that case.We can ask whether uniqueness holds more generally: Question 4.1.Assuming equi-characteristic 0, or mixed characteristic and finitely ramified, is the spherically complete immediate extension L of K constructed in Theorem 1 unique as a structure in the expanded language (up to isomorphism over K)? It is then natural to consider what happens in the T -convex case when T is not powerbounded.(Such K are not 0-h-minimal; see [CHRK22, Remark 6.3.5].)By [KKS97, Theorem 2], any valued real closed field with an exponential does not have any spherically complete elementary extension ([KKS97, Remark 7]).So, as explained in [Kap21, Remark 1.13], if K is a T -convex valued field arising from a non power-bounded o-minimal T , then no spherically complete elementary extension (or even model) exists.One might suspect that such K are not even definably spherically complete, but we do not know whether this is the case. Recall that in mixed characteristic, the assumption of finite ramification seemed to be natural since then, 0-h mix -minimality implies definable spherical completeness.However, one can still ask slightly more generally: Question 4.2.If K is 0-h mix -minimal and definably spherically complete, does it have an (immediate) spherically complete elementary extension?Question 0 also has a positive answer in some non-0-h-minimal cases.Concretely, in [Kap21, Theorem 6.3], conditions are given for certain T -convex valued fields expanded by certain derivations to have spherically complete immediate elementary extensions.Note that the field of constants of a non-trivial derivation on K is a definable subset for which Lemma 1.1 does not hold, so these expansions are typically not 0-h-minimal. and a ⊂ A, then one can apply the original version of [CHRKV21, Corollary 2.3.2] to the subset of K × RV k+ℓ |m ′ |,K defined by φ(x, y, a). Now since M is finitely ramified, by taking s to be a large enough positive integer and ν such that|p ν | ≤ |m|, we obtain that |α − u s | < |p ν | • |α − c| ≤ |m| • |α − c|.This implies that rv |m| (α − c) = rv |m| (u s − c).Since this works for each of the finitely many c in C, we can find an a ∈ K such that rv |m| (α − c) = rv |m| (a − c) for all c ∈ C. It then follows from the defining property of C given by Lemma 1.2 that W a = W α , as required.
Potential for native hydrocarbon-degrading bacteria to remediate highly weathered oil-polluted soils in Qatar through self-purification and bioaugmentation in biopiles Highlights • Highly adapted hydrocarbon-degrading bacteria were isolated from weathered soils.• High diversity of bacterial metabolism was shown although from the same soil.• Biostimulation improves removal of weathered hydrocarbons by indigenous bacteria.• Combination of stimulation to augmentation requires selection of indigenous strain.• Without selection of strains, negative effect may be registered on bioremediation. Introduction Crude oil contains thousands of different molecules with various properties. Qatar, an important producer of oil and gas, is in an arid area with harsh soils and weather. Oil is discharged to the environment as waste during anthropogenic activities and through spillages, and oil in the environment is subjected to weathering processes, particularly under extreme conditions especially in the summer season (mean temperature 34.9 C, relative humidity of 58 %, daily sunshine of 10.6 h, solar radiation of 584.7 mW h/sq.cm and wind speed of 26 km/h) [1,2]. In a recent study it was found that oil pollution has occurred in limited specific areas in Qatar but extensive weathering caused by the harsh weather in Qatar has led to oil in these areas being at different stages of oxidation [3]. Microbial remediation is an attractive strategy for decreasing the effects of high degrees of pollution with mixtures of complex molecules [4,5]. Soil bacteria can adapt and develop mechanisms for totally or partly degrading oil molecules to generate energy and allow the bacteria to grow. Other bacteria can use oil molecules that have already been transformed in some way [5][6][7]. Hydrocarbon-degrading bacteria can develop chemotaxis, a signaling system, to help guide access to hydrocarbons [8]. Degradation of organic compounds in oil requires bacterial cells to interact with the oil [9][10][11]. Some bacteria that have adapted to oil-contaminated soil can transfer pollutants through their hydrophobic surfaces [7,12]. Some hydrocarbon-degrading bacteria produce surfactants to help increase the solubility and accessibility of the hydrocarbons [13,14]. The susceptibility of a hydrocarbon to microbial degradation depends on the hydrocarbon structure, and branched-chain alkanes, cyclic alkanes, linear alkanes, and small aromatic compounds have different susceptibilities to microbial degradation [15,16]. Weathering processes are particularly severe in arid regions such as the area around the Persian Gulf. Biodegradation of weathered compounds is a more complex process than biodegradation of unweathered compounds [17][18][19][20]. Only microorganisms with structures and functions welladapted to the conditions can grow in harsh environments like the environment in Qatar. Such microorganisms are interesting because they have particular abilities to degrade or transform unconventional mixtures of organic compounds, as was found in studies performed by Al Disi et al. [19] and Attar et al. [21]. Natural microbial remediation in Qatar is therefore an appropriate model for the degradation of weathered hydrocarbons in harsh soils. It has been found in previous studies that methods for bioremediating hydrocarbons should be designed taking into account the diversity and metabolic adaptations of endogenous bacteria, particularly in areas with harsh conditions [19,22]. 
It has been found that a long adaptation period with endogenous bacteria is required if unadapted bacteria are used in bioaugmentation processes to remediate weathered oil-contaminated soil [23,24]. However, the fate of weathered oil in soil in arid areas like Qatar is not well investigated [25]. In situ microbial bioremediation by endogenous bacteria involving augmentation/stimulation processes is particularly poorly understood. Most bioremediation failures are caused by the bacteria not being adapted to harsh weather and soil [25]. Introducing exogenous bacteria cannot lead automatically to the success of the bioremediation of the soil. Interactions between the endogenous and the exogenous bacteria as well as their symbiosis in term of cooperation or inhibition are complex to predict in the case of weathered oil. Indeed, the complexity of such oil components requires, in general, potent bacteria able to tolerate their toxicity and cooperate in their removal by commensalism and/or co-metabolism. Not any of the endogenous bacteria can support the self-purification or may be seeded to accelerate the remediation of the wethered oily-soil. In order to demonstrate such hypothesis, wethered oil pollution of soil was investigated at two sites in Qatar. The study sites were in areas with unique conditions, characterized with dry soil, harsh weather conditions and weathered oil left for self-purification for more than three years [3]. They are the coastal area in AlZubara beach and the oilwaste dumping site in the industrial area of Dukhan [3]. They are considered an appropriate and a characterized model for this type of study. Here, to achieve the goal, communities representing only the highly adapted bacterial strains, tolerating high toxicity and exhibiting high activity of oil range organics, pristine and phytane were isolated, identified and screened to select candidates for seeding the soils in biopiles. Comparison between enhanced selfpurification and bioaugmentation was performed, in order to establish the appropriate approach of the microbial bioremediation of weathered oily soils. The study linked the microbial, ecological and applied aspects of the issue related to the weathered oil bioremediation. Sampling soil polluted with weathered oil AlZubara beach is 12 km long and is in north-west Qatar. The beach has continually received oil pollution caused by oil transportation and related activities in the Arabian Gulf for a long time. A total of 13 sampling points along the beach were selected, and three samples of soil were collected at each of the 13 sampling points, meaning 39 samples were collected. The hydrocarbon contents and weathering statuses of all of the sampling sites were investigated in a previous study [3]. The sampling points at the AlZubara beach site are shown in Fig.1. The Dukhan dump area is a well-controlled site that is used to store oil waste. Storing oil waste in the area has not been negatively affecting by the environment. Soil samples collected over a three-year period in which the soil was exposed to air and weathering processes have been analyzed previously [3]. Samples were collected systematically at the AlZubara and Dukhan sites. Samples were collected from nine sampling points in a square area. Each sampling point was 20 m from the neighboring sampling points. At each sampling point, a surface soil sample and a sample from a depth of 20 cm were collected, meaning a total of 36 samples were collected from both sites (18 from each site). 
Each soil sample was collected using a sterile spatula and was stored in a sterilized glass bottle. Each bottle was sealed, labeled, and wrapped in foil to protect the sample from light and prevent further reactions occurring. Each bottle was sealed, labeled, wrapped in foil to protect the sample from light and prevent further oxidation reactions and stored at 2À4 C. The samples were collected in spring, and the ambient soil temperatures were 25-26 C. Preparation of soil samples for analysis A 50 g aliquot of a soil sample was mixed with 50 mL of water in a 250 mL stoppered conical flask, and the mixture was shaken using a shaking device for 30 min at 25 C. The sample was then allowed to stand for 30 min and then passed through a Whatman no. 42 filter paper (GE Healthcare Bio-Sciences, Pittsburgh, PA, USA). The filtrate was then centrifuged at 3000 rpm for 5 min. Determining the major element concentrations by ion chromatography The concentrations of various ions in the sample extracts were determined by ion chromatography using an Dionex IC 5000 system (Thermo Fisher Scientific, Waltham, MA, USA). The concentrations of chloride and sulfate (anions) and calcium, magnesium, sodium, and potassium (cations) were determined. The ion chromatography system had an isocratic pump, an injection valve with a 250 mL sample loop, a Dionex conductivity detector, and an automated sampler. A specific pH gradient was used for each analyte ion. The anions were separated using a Dionex Ionpac AS4A-SC analytical column (250 mm long, 4 mm inner diameter; Thermo Fisher Scientific) and a Dionex AG4A-SC guard column (50 mm long, 4 mm inner diameter; Thermo Fisher Scientific), and an Anion Self-Regenerating Suppressor-1 system was used. The eluent contained 1.8 mM Na 2 CO 3 and 0.8 mM NaHCO 3 , and the flow rate was 0.25 mL/min. The cations were separated using a Dionex Ionpac CS12 analytical column (250 mm long, 4 mm inner diameter; Thermo Fisher Scientific) and a Dionex CG12 guard column (50 mm long, 4 mm inner diameter; Thermo Fisher Scientific), and a Cation Self Regenerating Suppressor system was used. The eluent contained 0.020 M methanesulfonic acid, and the flow rate was 0.25 mL/min. Oil hydrocarbon analysis by gas chromatography mass spectrometry Total Petroleum Hydrocarbons-Oil Range Organics (TPH-ORO: n-C10-n-C35) in the soils were analyzed by gas mass spectrometry according to AlKaabi et al. [3]. Accordingly, TPH were first extracted from homogenized soils, subjected to analysis, in the ASE DINOX SE 500 evaporator for extraction, using Methylene chloride/acetone (1:1, v/v) as reported [3]. The TP-ORO analysis of the extracts was performed by first dissolving an aliquot in dichloromethane to give an extract concentration of 2500 mg/mL. The solution was then analyzed using an Agilent 7890A/5975C gas chromatograph mass spectrometer (Agilent Technologies, Santa Clara, CA, USA) equipped with a 60 m ZB-5 capillary column. The carrier gas was helium, and the flow rate was 1.1 mL/min. A 1 mL aliquot of the oil solution was injected in pulsated splitless mode, and the injector temperature was 315 C. The oven temperature program started at 40 C, which was held for 2 min, increased at 25 C/min to 100 C, then increased at 5 C/min to 315 C, which was held for 13.4 min. The mass spectrometer was operated in selected ion monitoring mode, and m/z 55 was monitored [26]. 
Analysis of hydrocarbons in diesel by gas chromatography flame ionization detection Hydrocarbons in diesel were determined using an Agilent 6890 N gas chromatograph with a flame ionization detector (Agilent Technologies). This type of analysis is usually performed to investigate diesel degradation. Separation was achieved using an Agilent HP-1 gas chromatography column (30 m long, 0.25 mm inner diameter, 0.10 mm film thickness, on a 7-inch cage; Agilent Technologies). The oven temperature program started at 100 C, then increased at 15 C/min to 280 C, which was held for 5 min. The carrier gas was nitrogen, and the pressure was 7.85 psi, giving a starting flow rate of 6.1 mL/min. Once a chromatogram had been acquired, the concentration of each analyte was determined from the peak area using US Environmental Protection Agency method #8015. Enrichment of cultures for isolating hydrocarbon-degrading bacteria Cultures containing hydrocarbon-degrading bacteria were prepared from samples of surface soil (the upper soil layers) and soil from 20 cm deep (the lower soil layers) from each sampling point and samples of oil waste. A 1 g aliquot of a sample was suspended in 20 mL of Luria Broth (LB). The mixture was then incubated at 30 C for 72 h on a shaker set to shake at 300 rpm. A 2 mL aliquot of the liquid was then added to 20 mL of mineral salt medium (MSM) supplemented with 1 mL of diesel or crude oil as a carbon source. This adaptation step was repeated three times to ensure that the medium was enriched with bacteria capable of growing using hydrocarbons in crude oil or diesel [19,21,22]. A 100 mL aliquot of an enriched LB liquid culture was then spread on MSM agar, then 100 mL of crude oil or diesel was sprayed onto the agar. Isolates with distinct morphologies were transferred to new LB agar plates and purified by successively sub-culturing the isolated colonies six times. The MSM contained 4.0 g/L NH 4 NO 3 , 2.0 g/L Na 2 HPO 4 , 0.53 g/L KH 2 PO 4 , 0.17 g/L K 2 SO 4 , 0.10 g/L MgSO 4 .7H 2 O, 1 g/L ethylenediaminetetraacetic acid, 0.42 g/L ZnSO 4 , 1.78 g/L MnSO 4 , 0.5 g/L H 3 BO 3 , and 1 g/L NiCl 2 . MSM solid medium was prepared by adding 20 g of agar to 1 L of MSM. Molecular identification of the hydrocarbon-degrading bacterial isolates The DNA from cells grown overnight on a LB plates was removed using a polymerase chain reaction (PCR) protocol. The cells were suspended in 0.2 mL of distilled water and stored at À80 C for 20 min. The cells were then placed in a water bath at 100 C for 10 min. The mixture was then centrifuged for 10 min at 13,000 rpm, and the supernatant was transferred to a new tube and subjected to the next part of the PCR protocol. The 16S rDNA was amplified using the universal PCR primers RibS73sp (AGAGTTT-GATCCTGGCTCAG) and RibS74sp (AAGGAGGTGATCCAGCCGCA) [27]. The PCR protocol was performed using 25 mL of PCR buffer containing 1.5 mM MgCl 2 , 0.8 mM deoxyribonucleotide triphosphate, 1.35 mM forward and reverse primers, 10-20 ng of isolated genomic DNA, and 0.5 IU of Taq DNA. The PCR reaction started with denaturation at 94 C for 3 min, then had 35 cycles of 45 s denaturation at 94 C, 45 s annealing at 50 C, and 45 s elongation at 72 C, and then had a final 2 min extension step at 72 C. DNA purification was performed using a QIAquick gel extraction kit. Bacterial 16S rRNA amplicons were sequenced after the amplicons had been purified. Sequencing data were generated by the sequencing unit. Sequences lengths were between 900 bp and 970 bp. 
The 16S rDNA sequence for each isolate was used to identify related DNA sequences in the Gene Bank database using the Blast server at the US National Center for Biotechnology Information. Self-purification and bioaugmentation in soil biopiles The method used to investigate self-purification and bioaugmentation was previously published by Oualha et al. [28]. Biopiles were performed in glass containers (20 Â 10 Â 15) cm containing 685 g soil each. The C/N/P ratios mentioned with results were adjusted using ammonium nitrate containing 34 % (w/w) nitrogen and potassium phosphate containing 22 % (w/w) phosphorous as reported by Oualha et al. [28]. Water content of all biopiles was adjusted to the indicated level, using dilled water [28]. The biopiles were covered with aluminium foil preventing light oxydation of hydrocarbons and incubated at in an incubator set at the desired temperature. All biopiles were mixed, manually, twice a week during the period of incubation. For biopiles performed with seeding, a 5 mL suspension of the pellet of the selected isolates was prepared from a 4-day culture in 20 mL-liquid MSM medium supplemented with 10 % diesel and washed twice with MSM. The suspension was then mixed in the corresponding biopile-soil. The bacterial cells in the solution and soils were then counted and the number of colony-forming units (CFU) was determined. Determining bacterial cell counts in the liquid and soil samples The bacterial cell count for a sample was determined by performing serial dilutions and then spreading a diluted sample on LB medium and incubating the plate at 30 C for 48 h. For a liquid, a 1 mL aliquot was serially diluted and then, for each dilution, 100 mL of the solution was spread on a solid LB plate. The dilution factor was considered when calculating the cell densities. For soil, 1 g of soil was suspended in 1 mL of liquid MSM, then the mixture was serially diluted with MSM and treated as described above for a liquid sample. Statistical analysis Each experiment was performed in triplicate and the mean result was calculated. Each mean is presented below with the standard deviation. The means and standard deviations were calculated using Microsoft Excel 2013. The significances of differences between sets of results were determined by performing one-way analyses of variance using the 95 % confidence level (p > 0.05). Chemical and physical characterization of the AlZubara and Dukhan soil samples The samples from AlZubara beach were collected from a site that was found to contain weathered oil in a previous study [3]. Soil from Dukhan that had been weathered for 3 years was used as a source of adapted bacteria. AlZubara beach was continuously under pollution by Oil as exposed to oil industry in the Gulf sea. AlKaabi et al. [3] showed that such pollution is continuous and traces of petroleum dated from the Gulfa War in 1991. A composite sample from each site (AlZubara and Dukhan) was prepared, by mixing 5 kg of soil sample from each site, meaning that 45 kg from all the surfaces and 45 kg from 20 depth were homogenously prepared. These samples served to analyze the major composition in each site. The chemical and physical characteristics of the samples from AlZubara and Dukhan were determined to allow the environmental conditions suitable for the indigenous microbial communities found at the sites to be evaluated, and the results are shown in Table 1. 
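To make the dilution arithmetic behind the cell-counting procedure described above explicit, the following minimal Python sketch converts plate counts into CFU per mL (liquid cultures) or CFU per g (soil suspensions). The plated volume of 0.1 mL and the example colony count are illustrative assumptions, not values reported in this study.

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    """CFU/mL of the original liquid: colonies counted on one plate,
    scaled by the dilution factor and by the volume spread on the plate."""
    return colonies * dilution_factor / plated_volume_ml

def cfu_per_g(colonies: int, dilution_factor: float,
              soil_mass_g: float = 1.0, suspension_volume_ml: float = 1.0,
              plated_volume_ml: float = 0.1) -> float:
    """CFU/g of soil: 1 g of soil suspended in 1 mL of MSM, then treated
    like a liquid sample and corrected for the soil mass."""
    cfu_in_suspension = cfu_per_ml(colonies, dilution_factor, plated_volume_ml) * suspension_volume_ml
    return cfu_in_suspension / soil_mass_g

# Illustrative (hypothetical) example: 85 colonies counted on the 10^-5 dilution plate
print(f"{cfu_per_ml(85, 1e5):.2e} CFU/mL")  # ~8.5e7 CFU/mL
print(f"{cfu_per_g(85, 1e5):.2e} CFU/g")
```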
The overall chemical and physical characteristics of the surface layer and 20 cm below the surface of the soil from the AlZubara or Dukhan sites were not significantly different (one-way analysis of variance, p > 0.05). The total petroleum hydrocarbon (TPH) oil range organic (ORO) contents of the upper and lower soil samples were around 280 mg/kg, and the TPH diesel range organic (DRO) contents were <1 mg/kg. The upper and lower soil samples were all slightly alkaline ($pH 7.20). The sulfate contents of the upper and lower soil samples were around 2.04 mg/g, the nitrate and ammonia contents were low ($0.140 mg/g and $0.007 mg/g, respectively), and the phosphate contents were moderate ($0.350 mg/g). There were also no significant differences between the characteristics of the upper and lower soil samples from the Dukhan site. However, most of the mineral contents were significantly lower in the Dukhan soil samples than the AlZubara soil samples, but the calcium contents were twice as high in the Dukhan soil samples as in the AlZubara soil samples. The salinities were low in the samples from both sites. They are of 2.4 ppt and 6 ppt in Dukhan and AlZubara sites, respectively, indicating that the samples may have not contained halophilic and halotolerant microorganisms [29]. The Dukhan soil samples were slightly acidic (pH 6.75). The pH values of the soil samples from both sites were likely to be favorable for most microorganisms that could degrade hydrocarbons. The TPH-DRO contents of the upper and lower Dukhan soil samples were 6250 and 6480 mg/kg, respectively. The TPH-ORO contents of the upper and lower Dukhan soil samples were $4000 mg/kg. This indicated that soil at the Dukhan site was much more polluted with TPH than was soil at the AlZubara site. This could be because solid and liquid oil waste from oil extraction activities are collected at specific sites in the Dukhan area. When a specific dump area is full it is left open to the air for years to allow self-remediation to occur. The Dukhan site is strictly controlled to prevent spreading of oil pollution or the transfer of oil to groundwater or watercourses. However, the site is subjected to weathering processes. Isolation of potential hydrocarbon-degrading bacteria from AlZubara and Dukhan Eight bacterial isolates were prepared from the upper and lower soil samples from the nine sampling points in the AlZubara area. Because of the isolation procedure that was used, the isolates were expected to contain hydrocarbon-degrading bacteria. The isolates were identified by ribotyping based on 16S rDNA sequencing, and similarities between the sequences and sequences in the Blast database were identified. The results are shown in Table 2. It can be seen that few isolates were isolated. There are two possible reasons for this. First, the soil samples had low contents of organic compounds required for cell growth and maintenance. Therefore, few cells would have been able to adapt and subsist. Second, the aim of the isolation strategy was to isolate hydrocarbon-degrading bacteria by enriching cultures in MSM containing 10 % diesel. This culture medium was very toxic to bacteria because the hydrocarbon compound concentration was 75 g/L (the total hydrocarbon concentration in the diesel was 750 g/L, and the diesel concentration in the culture medium was 10 % v/v). The aim of the isolation procedure was to isolate and purify hydrocarbondegrading bacteria with strong potentials to degrade and tolerate diesel and crude oil. 
The isolation procedure was previously published by Al Disi et al. [19], who demonstrated this approach for isolating highly potent hydrocarbon-degrading bacteria. The isolated bacteria were therefore highly adapted. However, it was clear that these bacterial isolates were not homogeneously distributed across all of the samples and both layers. The samples from some sampling points (numbers 1, 2, and 5) did not contain any adapted isolates. For other sampling points, only the upper or lower samples contained adapted bacterial isolates. Of the eight isolates, three bacterial strains belonged to the Bacillus genus but were of three different species (Bacillus subtilis, Bacillus licheniformis, and Bacillus circulans). Two strains of Providencia rettgeri were isolated. Both strains were found in surface soil samples. Two isolates belonged to the Virgibacillus genus. One was the species Virgibacillus halodenitrificans and the other was the species Virgibacillus marismortui. Both Virgibacillus species were found only in lower soil samples. One Morganella morganii strain was isolated from the samples from the AlZubara site. Bacillus licheniformis has previously been found to degrade hydrocarbons [19]. Some strains of Bacillus subtilis also degrade hydrocarbons [29]. Three strains of Providencia rettgeri have been found within oil-degrading bacterial populations and were responsible for the degradation of oil organic compounds [30]. Some Bacillus strains have previously been isolated from soil from polluted sites in Qatar and were found to degrade hydrocarbons [19,28]. The other genera of bacteria have not previously been found in the environment in Qatar or other areas around the Arabian Gulf.

Table 2 Bacteria isolated from the AlZubara soil samples. The sampling point numbers are the numbers shown in Fig. 1. Each code is for one purified species/strain. The similarities and accession numbers were obtained after the DNA sequences had been deposited in the Gene Bank database using the Blast server at the US National Center for Biotechnology Information.

A total of 16 isolates were obtained from the upper and lower soil samples from the nine sampling points at the Dukhan site. The isolates are described in Table 3, which also shows the sampling point at which each isolate was found and the identity of each isolate as determined by ribotyping (based on 16S rDNA sequencing). As for the AlZubara site, few isolates were found at the Dukhan site because the aim of the isolation procedure was to isolate hydrocarbon-degrading bacteria with strong potentials to degrade and tolerate diesel and crude oil. In terms of biodiversity at the Dukhan site, the genus Pseudomonas was represented by nine strains of three species. The Bacillus genus was represented by six strains of several different species isolated from both the upper and lower soil samples. These Bacillus and Pseudomonas species were found in both upper and lower soil samples from different sampling points. Pantoea calida was found only in the lower soil sample from sampling point 4. Some Bacillus species, e.g., Bacillus licheniformis and Bacillus subtilis [31,32], have previously been found to degrade hydrocarbons. Pseudomonas luteola has previously been described [33]. Pseudomonas aeruginosa strongly degrades hydrocarbons [34], as does Pseudomonas stutzeri [35]. Pantoea calida has not previously been found to degrade hydrocarbons.

Table 3 Bacterial strains isolated from the Dukhan soil samples. The point numbers are the numbers of the systematic sampling points. Each code is for one purified species/strain. The similarities and accession numbers were obtained after the DNA sequences had been deposited in the Gene Bank database using the Blast server at the US National Center for Biotechnology Information.
Some of the hydrocarbon-degrading Bacillus and Pseudomonas bacteria isolated from the Dukhan soil samples may have been very similar strains because only a short sequence of 16S rDNA was amplified when the ribotyping identification procedure was performed. Potentials for the isolated bacteria to degrade diesel hydrocarbons Differences between the isolated strains were determined from the different biological activities of the strains in MSM containing 10 % v/v diesel (the only source of carbon). The diesel concentration of 10 % corresponded to a hydrocarbon concentration of 75 g/L (the hydrocarbon concentration in the diesel was 750 g/L). The potentials of each strain to grow using hydrocarbons as an energy source and to tolerate high hydrocarbon concentrations were also investigated. The potential of each strain to adapt to the presence of hydrocarbons by synthesizing biosurfactants to increase hydrocarbon bioavailability, by removing hydrocarbons with low molecular weights (LMWs), medium molecular weights (MMWs), and high molecular weights (HMWs), and by removing TPH was also investigated. The n-heptadecane (n-C17) to pristane ratio and n-octadecane (n-C18) to phytane ratio for the medium containing each strain were determined to allow the potential for that strain to biodegrade oil to be assessed. The results for all of the strains isolated from the soil samples from the AlZubara and Dukhan sites are shown in Table 4. The results indicated that all of the strains isolated from the soil samples from the AlZubara site were able to grow and increase the cell biomass under the experimental conditions that were used. However, different amounts of biomass were produced by the different strains. The TPH removal efficiencies were clearly not proportional to the amounts of biomass produced. For example, strain Z3S1 gave a final CFU of 0.15 Â 10 7 per mL and removed 38 % of the TPH, but Z4D1 gave a final CFU of 1.33 Â 10 7 per mL and removed 19 % of the TPH (half of the percentage removed by Z3S1). These results reflected the different metabolic pathways used by the different bacteria caused by metabolic diversity and the adaptation processes the bacteria used. The TPH removal efficiency was used as a criterion to differentiate between the different isolates. Five strains (the largest group of strains) removed 19 %-23 % of the TPH, and two strains (the second largest group) removed 27 %-29 % of the TPH. Interestingly, Z3S1 removed 38 % of the TPH, which was a higher percentage than was removed by any other isolate. Most of the isolates removed 13 %-18 % of the LMW (n-C12-n-C16) hydrocarbons, but Z7D1 removed 27 % of the LMW hydrocarbons. Z7D1 also removed 29 % of the HMW (n-C21-n-C25) hydrocarbons, which was a higher percentage than was removed by any other isolate. There were also differences between the percentages of the MMW (n-C17-n-C20) hydrocarbons by the different isolates. Z3S1 gave the highest MMW and HMW hydrocarbon removal efficiencies, of almost 35 %. The biosurfactant activities in the culture broths were determined to evaluate the abilities of the bacteria strains to enhance diesel biodegradation. Biosurfactants are essential to bioremediation because they emulsify and solubilize hydrophobic compounds. The biosurfactant activities in the culture broths of all of the bacteria strains were weak. 
As mentioned above, biosurfactant activity is essential for bacteria to interact with hydrocarbons, so these results may be explained by the biosurfactants that were produced having been fully engaged with the hydrocarbons, and therefore removed with the organic phase before the analysis was performed, or by the biosurfactants being attached to the cell walls. Intracellular biosurfactants in natural systems tend to become attached to cell walls or excreted [36]. A bacterial cell has a membrane made up of lipids, and the transportation of insoluble substrates through the membrane is facilitated by intracellular biosurfactants that can pass through the membrane. Complex lipids, proteins, and carbohydrates are extracellular biosurfactants that facilitate the solubilization of substrates that are potentially useful to bacteria [37]. The main difference between an intracellular and an extracellular biosurfactant is the chemical nature of the hydrophilic head [36]. The isolate Z3S1 had a higher emulsification activity (9.9 EU/mL, where EU is emulsification units) than the other isolates, and Z6S1 gave the next highest (6.4 EU/mL). All of the isolates had solubilization activities, although these were poor. It was therefore clear that the isolates had a wide range of hydrocarbon-degradation activities. Some isolates were expected to have complementary activities. The n-C17/pristane ratio and n-C18/phytane ratio were used to indicate biodegradation, assuming that the isoprenoid hydrocarbons pristane (n-C19) and phytane (n-C20) had similar volatilities to n-C17 and n-C18 and that, if they disappeared at different rates, this would be caused by a mechanism (e.g., biodegradation) other than evaporation [24]. Isoprenoids are less susceptible to microbial degradation than n-alkanes of similar molecular weights. The rates at which isoprenoids evaporate and are degraded tend to decrease as the degree of alkylation increases [37]. For example, the isolate Z8D1 had the highest n-C17/pristane ratio of 15 %, but Z4D1 and Z9D1 had n-C17/pristane ratios of only 9 %. The other isolates had n-C17/pristane ratios of 2.3 %-7 %. The isolate Z9D1 had the highest n-C18/phytane ratio of 23 %, and Z6S1 and Z7S1 had n-C18/phytane ratios of 15 % and 10 %, respectively, but the other isolates had n-C18/phytane ratios of 2.5 %-8 %. All of the isolates from the soil samples from Dukhan were able to grow under the experimental conditions and produce new biomass. However, the isolates, even species within the same genus, had very different potentials for producing new cell biomass. The amounts of biomass produced were not proportional to the TPH removal efficiencies. For example, D1D1 gave a final CFU of 0.47 × 10^7 per mL and removed 31 % of the TPH, but D1D2 gave a final CFU of 2.67 × 10^7 per mL and removed 48 % of the TPH. These results reflected the metabolic diversity and adaptation processes of the isolates. Interestingly, D1D2 (Bacillus licheniformis) and D5D1 (Pseudomonas aeruginosa) removed 48 % and 42 % of the TPH, respectively, under the experimental conditions. The TPH removal efficiencies for most of the isolates were 16 %-28 %.
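The following minimal Python sketch illustrates, with purely hypothetical peak areas, how the screening metrics discussed above could be computed from gas chromatography data: removal relative to the non-inoculated control, and the n-C17/pristane and n-C18/phytane ratios. The exact normalization behind the percentage values reported in Table 4 is not specified here, so the sketch only shows the underlying arithmetic.

```python
def removal_percent(control_area: float, treated_area: float) -> float:
    """Percentage of a hydrocarbon fraction removed relative to the non-inoculated control."""
    return 100.0 * (control_area - treated_area) / control_area

def isoprenoid_ratio(n_alkane_area: float, isoprenoid_area: float) -> float:
    """n-alkane/isoprenoid ratio (e.g. n-C17/pristane or n-C18/phytane).
    Because the isoprenoid is the more recalcitrant of the pair, a change in this
    ratio relative to the control points to biodegradation rather than evaporation."""
    return n_alkane_area / isoprenoid_area

# Hypothetical GC peak areas (arbitrary units) for one isolate vs. the control
control = {"TPH": 1000.0, "nC17": 40.0, "pristane": 20.0, "nC18": 38.0, "phytane": 19.0}
treated = {"TPH": 620.0, "nC17": 14.0, "pristane": 18.5, "nC18": 12.0, "phytane": 18.0}

print(f"TPH removal: {removal_percent(control['TPH'], treated['TPH']):.0f} %")
print(f"n-C17/pristane: control {isoprenoid_ratio(control['nC17'], control['pristane']):.2f}, "
      f"treated {isoprenoid_ratio(treated['nC17'], treated['pristane']):.2f}")
print(f"n-C18/phytane:  control {isoprenoid_ratio(control['nC18'], control['phytane']):.2f}, "
      f"treated {isoprenoid_ratio(treated['nC18'], treated['phytane']):.2f}")
```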
Most of the isolates, including D5D1, removed <30 % of the LMW (n-C12-n-C16) hydrocarbons, but D1D2 removed 72 %. Most of the isolates degraded MMW and HMW hydrocarbons moderately, giving removal efficiencies <30 %, but D1D2 degraded 60 % of the MMW hydrocarbons and 80 % of the HMW hydrocarbons, and D5D1 removed 38 %-43 % of the MMW and HMW hydrocarbons. Very small amounts of biosurfactants were found in the culture supernatants. We therefore drew similar conclusions about the biosurfactants produced by the Dukhan samples as we drew for the AlZubara samples. The biosurfactants produced by the isolated bacteria would have been intracellular if they promoted the transportation of insoluble substrates through the membranes [38][39][40]. Isolate D1D2 had the highest emulsification activity (303 EU/mL), followed by strain D5D1 (278 EU/mL); isolates D9D1, D7S1, and D5S1 had emulsification activities of 178, 116, and 113 EU/mL, respectively. It is clear from these results that the 16 isolates from the Dukhan soil samples had very different hydrocarbon-degradation activities from the isolates from the AlZubara beach samples. The isolates tolerated hydrocarbon toxicity well and degraded hydrocarbons at high concentrations. Strong selection pressure would have led to bacteria most appropriate for bioremediating oil hydrocarbons being isolated. It would be expected that some isolates would have had complementary activities. Isolate D1D2 had the highest n-C17/pristane ratio (58 %), and the D5D1 n-C17/pristane ratio was 38.3 %.

Table 4 Screening results for the bacterial strains isolated from the soil samples from the AlZubara and Dukhan sites by culturing the bacteria in mineral salt medium containing 10 % diesel (SA: solubilization activity, TPH: Total Petroleum Hydrocarbons, EA: emulsification activity, LMW: low molecular weight (n-C12-n-C16) hydrocarbons, MMW: medium molecular weight (n-C17-n-C20) hydrocarbons, HMW: high molecular weight (n-C21-n-C25) hydrocarbons). The values were calculated statistically and are the average of three separate determinations. The control is the non-inoculated culture.
The bioaugmentation approach was tested using isolates D1D2 (Bacillus licheniformis) and D5D1 (Pseudomonas aeruginosa). A biopile system was used to perform ex situ bioremediation tests under laboratory conditions. Each biopile contained 685 g of homogeneous soil (a mixture of the upper and lower soil samples) that had been passed through a 2 mm sieve to remove particles with diameters >2 mm. The C/N/P ratio is very important for bacterial growth, so the carbon, nitrogen, and phosphorus contents of the weathered Dukhan soil samples were determined. The results are shown in Table 5. As expected, because the soil was heavily polluted, the carbon and hydrogen contents of the soil were high, and contributed around 23 % of the dry matter content. The total nitrogen and phosphorus contents of the soil were low, at 0.5 and 0.09 mg/kg respectively, which was also expected. The C/N/P ratios for the homogeneous soil were 238/0.5/0.09 (which could be expressed as 100/0.21/0.038), which was not appropriate for microbial growth in the bioremediation system. The optimal C/N/P ratios for the bioremediation of oil by bacteria are between 100/10/0.5 and 100/ 20/1 [41]. The C/N/P ratios were therefore adjusted to the ratios shown in Table 6 for use in the different bioremediation tests. The ratios were adjusted by adding ammonium nitrate and potassium phosphate, as mentioned in the "Materials and methods" section. The pH of the soil was close to 7. The TPH-DRO content was 6.4 mg/ kg, and the TPH-ORO content was 40,270 mg/kg. The moisture contents of the soil used in the different tests were adjusted to 10 % or 13.5 % by adding distilled water. In fact, preliminary results (not shown) showed growth of all isolated bacteria in soils containing 10 % or 13.5 % moisture. Knowing that the soil initially contained 6% moisture and considering the dry weather in the region for long periods of the year, these two moisture contents were selected for the study. The biopiles were incubated at room temperature (23 C) or at 30 C. The growth of the endogenous bacteria (as the total CFU) and the TPH contents were periodically determined. The values found after 90 d are shown in Table 6. As expected, bioremediation of the oil-contaminated soil required the chemical composition of the soil to be adjusted to provide the optimal nutritional requirements of the hydrocarbon-degrading microorganisms [41]. Ammonium nitrate was found to be an appropriate source of nitrogen for the endogenous hydrocarbon-degrading bacteria in the Dukhan soil. Adjusting the moisture content to either 10 % or 13.5 % was necessary. Increasing the temperature from room temperature to 30 C increased the bacterial growth efficiency and increased the percentage of TPH removed. The TPH-DRO and TPH-ORO removal efficiencies through self-purification were 30 % and 20 %, respectively, after 90 d at 30 C at a moisture content of 13.5 %. Interestingly, the soil pH remained close to 7, meaning incubating the soil for longer could have given a better self-purification performance. Bioaugmentation using D1D2 (Bacillus licheniformis) and D5D1 (Pseudomonas aeruginosa) increased the TPH-DRO and TPH-ORO removal efficiencies to 53 % and 30 %, ORO respectively. Bioaugmentation with D1D2 and D5D1 did not appear to cause inhibition of growth of either the endogenous bacteria or the bioaugmented bacteria themselves, although there would have been very complex interactions between all of the bacteria in the soil. 
One-way ANOVA at the 95 % confidence level (p < 0.05) showed that the total biomass and the TPH removal were significantly higher in the biopiles bioaugmented with D1D2 or D5D1 at C/N/P ratios of 100/10/0.5 and 100/10/1 at 30 °C than in all of the biostimulated piles. The C/N/P ratio of 100/10/1 was significantly more suitable both for growth and for TPH removal. D1D2 was also significantly more appropriate than D5D1. Interestingly, both strains D1D2 and D5D1 significantly improved the removal of the oil-range organics (n-C10-n-C35), which include the most difficult hydrocarbons, those exceeding n-C25. The relationship between growth in terms of CFU and TPH removal is an indicator of the adaptation of the bacteria to the available substrate for growth. However, since many bioconversions can also occur and generate energy for the cells without net removal of the substrate, it is difficult to attribute the growth only to degradation of the substrate. Conclusions The results indicated that hydrocarbon-degrading bacteria can adapt to soil contaminated with highly weathered oil, particularly in the harsh Arabian Gulf environment. Eight and 16 bacterial species were isolated by enriching cultures from soil collected in the polluted AlZubara and Dukhan areas, respectively. The bacteria were identified using the 16S rDNA (i.e., by ribotyping). Bacillus and Virgibacillus were found to be dominant and highly tolerant to 10 % diesel (75 g/L hydrocarbons) in the AlZubara soil, and Bacillus and Pseudomonas were found to be dominant in the Dukhan soil. The main objective of this enrichment-culture isolation procedure was to evaluate the potential of the indigenous bacteria to grow on, and remove, weathered hydrocarbons from soils exposed to harsh conditions for long periods. The isolated strains represent only a fraction of the indigenous population. The high toxicity pressure of the isolation program led to enrichment of the cultures with strains that have potential for in situ or ex situ application if properly bioaugmented into the soil in which they originated and to which they are adapted. The isolated bacterial species had strong activities against all types of TPH, but Bacillus licheniformis from the AlZubara soil and Pseudomonas aeruginosa from the Dukhan soil had particularly strong potentials for degrading oil, indicated by the increases in the n-C17/pristane and n-C18/phytane ratios in cultures of these species. Self-purification by endogenous bacteria was found to be possible after the C/N/P ratios for the soil had been adjusted to 100/10/1. Self-purification removed 30 % of the TPH-DRO and 20 % of the TPH-ORO. More interestingly, Bacillus licheniformis and Pseudomonas aeruginosa removed >50 % of the TPH-DRO and 30 % of the TPH-ORO in 90 d. Interestingly, these results show that the introduced bacteria cooperated with the rest of the endogenous bacteria through co-metabolic activities or commensalism [39]. They did not inhibit other active bacterial strains of importance to the removal of TPH, nor did they produce intermediates that interfered with them. These findings are important because they indicate that in situ bioremediation of sites polluted with strongly weathered hydrocarbons is possible, provided that highly adapted endogenous bacteria, isolated through an appropriate isolation and screening program based on high toxicity pressure, are bioaugmented.
A similar conclusion, that any spillage problem should be considered separately, and a sustainable strategy based on suitable technology should be developed, was drawn by Ivshina et al. [42]. Author statement Nasser AlKaabi: Designed the research, performed the experiments, analyzed the data, and drafted the manuscript. Nabil Zouari: Senior Author, supervisor of PhD Thesis of Nasser AlKaabi. Designed the research, analyzed the data, provided equipment and infrastructure, wrote/edited the manuscript. Mohammad AlGhouti: Co-supervisor of PhD Thesis of Nasser AlKaabi. Designed the research, analyzed the data, provided equipment and infrastructure, contributed in writing/editing of the manuscript. Samir Jaoua: Designed the experiments of identification of the bacteria, analyzed the corresponding data, provided equipment and infrastructure, contributed in writing/editing of the manuscript Funding The publication of this article was funded by the Qatar National Library. Declaration of Competing Interest The authors report no declarations of interest.
2020-10-28T19:11:39.996Z
2020-10-12T00:00:00.000
{ "year": 2020, "sha1": "9b84082ab6c345cf34419212a622301bdcce3d88", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.btre.2020.e00543", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "76afe89740ab5d4b40d328887dae790913148df8", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
196618300
pes2o/s2orc
v3-fos-license
RETRACTED ARTICLE: Loss of exosomal MALAT1 from ox-LDL-treated vascular endothelial cells induces maturation of dendritic cells in atherosclerosis development Statement of Retraction We, the Editors and Publisher of the journal Cell Cycle, have retracted the following article: Hongqi Li, Xiang Zhu, Liqun Hu, Qing Li, Jian Ma, and Ji Yan. Loss of exosomal MALAT1 from ox-LDL-treated vascular endothelial cells induces maturation of dendritic cells in atherosclerosis development. Cell Cycle. 2019;18(18):2225-2267. doi: 10.1080/15384101.2019.1642068. Since publication, significant concerns have been raised about the integrity of the data and reported results in the article. When approached for an explanation, the authors did not provide their original data or any necessary supporting information. As verifying the validity of published work is core to the integrity of the scholarly record, we are therefore retracting the article. The corresponding author listed in this publication has been informed. We have been informed in our decision-making by our policy on publishing ethics and integrity and the COPE guidelines on retractions. The retracted article will remain online to maintain the scholarly record, but it will be digitally watermarked on each page as ‘Retracted.’ Introduction Atherosclerosis (AS) is a chronic inflammatory and autoimmune disease with increased morbidity and mortality globally [1,2]. Dendritic cells (DCs) are the most potent antigen-presenting cells in the immune system and are hyperactive in atherosclerotic plaques [1,3]. DCs are present in immature forms in the arterial wall under physiological conditions and become activated following capturing antigens during atherogenesis [3,4]. DCs contribute to atherogenesis and have been identified as a major target for the control of this harmful immune response in AS [3,5]. The nuclear factor erythroid 2-related factor (NRF2) has antioxidant and anti-inflammatory effects in AS. Recent data revealed that NRF2 deficiency promotes features of plaque instability in hypercholesterolemic mice [6]. Furthermore, NRF2 activation exerts anti-atherosclerosis effects [7] and attenuates oxidized low-density lipoprotein (oxLDL)-induced endothelial cell injury [8]. In addition, NRF2 is involved in the regulation of the activation [9], maturation [10], and immune tolerance of DCs [11]. Moreover, inhibition of NRF2 in DCs in glioma-exposed microenvironment enhances DCs maturation and the subsequent T cells activation [12]. However, it has not been reported whether the NFR2 signaling pathway participates in the development and progression of AS by mediating DCs immune tolerance. Long non-coding RNA (lncRNA) are important regulators of gene expression and are crucial mediators in various diseases, including AS [13][14][15]. One prominent lncRNA known as metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) has been widely shown to be involved in various cancers [16][17][18]. For AS, it was demonstrated that MALAT1 knockdown promotes AS progression [19]. Recent data has shown that MALAT1 overexpression induces tolerogenic DCs and immune tolerance in heart transplantation and autoimmune disease [20]. However, whether MALAT1 affects the immune tolerance of DCs in the setting of AS is still uncertain. 
Exosomes are small vesicles delivered by many cells of the organism and have recently been recognized as important mediators of intercellular communication by transmitting and exchanging donor cell-specific proteins, mRNA, small noncoding RNA including lncRNA, and so on [21]. It has been widely demonstrated that ox-LDL is involved in the AS development by inducing oxidative stress and endothelial dysfunction [15,22]. In addition, MALAT1 has been shown to activate NRF2 signaling in HUVECs [23]. Accordingly, this study explored the role of MALAT1 expressed in exosomes from oxLDL-treated vascular endothelial cells (VECs) in regulating DCs maturation in the context of AS. Furthermore, we investigated whether the underlying mechanisms involved NRF2 signaling. Human sample collection This study was conducted in accordance with the protocol approved by the Clinical Research Ethics Committee of Affiliated Anhui Provincial Hospital, Anhui Medical University. AS patients (AS group, n = 25, mean age 65.3 ± 8.8 years, 14 male) and healthy participants (Normal group, n = 20, mean age 55.6 ± 10.1 years, 12 male) who underwent physical examinations during the same period were enrolled in this study. AS was diagnosed if brachial-ankle pulse wave velocity (baPWV) >1400 cm/s. All subjects with other complications were excluded, including valvular heart disease, severe arrhythmia, diabetes, malignant tumor, and severe liver and kidney dysfunction. Whole blood from each participant was exsanguinated, cooled at 4ºC for 1 h, and then centrifuged at 3000 rpm for 10 min. The resulting supernatant was sera that were stored at −80ºC for subsequent experiments. Cell culture and oxLDL treatment Human umbilical vein endothelial cells (HUVECs) and mouse VECs were purchased from Procell (Wuhan, China). HUVECs and mouse VECs were cultured in VECs-specific complete medium (Procell). oxLDL (50 mg/L) was added into HUVECs and mouse VECs for 24 h of incubation. Isolation and identification of serum-or VECs-derived exosomes Isolation of serum-derived exosomes was performed using miRCURY Exosome Serum/Plasma Kit according to the manufacturer's instructions. Isolation of VECs-derived exosomes was performed using miRCURY Exosome Cell/Urine/CSF Kit (QIAGEN, Germany) according to the manufacturer's instructions. Briefly, samples were centrifuged at 300 × g for 10 min and the resulting cell supernatant was then centrifuged again at 2,000 × g for 10 min to discard dead cells. The supernatant was subject to additional centrifugation at 10,000 × g for 30 min to discard cell debris. Afterward, the supernatant was centrifuged again at 100,000 × g for 70 min. The resultant exosome pellets were resuspended in PBS and prepared for subsequent analysis. For identification, total protein was extracted from exosomes using Total exosome RNA and protein isolation kit (Invitrogen, USA). The protein expression of exosomal surface markers TSG101 and CD63 were examined by western blot. RNA extraction and qRT-PCR analysis Total RNA from exosomes was extracted using Total exosome RNA and protein isolation kit (Invitrogen). Total RNA from DCs was extracted using TRIzol reagent (Invitrogen). RNA was reverse transcribed to cDNA using the PrimeScript RT reagent Kit (Takara Bio Company, Shiga, Japan). Relative MALAT1 expression was detected using an SYBR Green Kit Endocytosis was measured as the cellular uptake of FITC-dextran. 
Briefly, FITC-Dextran (0.5 mg/ mL) was added into DCs (approximately 3 × 10 5 cells per sample) for 2 h of incubation at 4°C and 37°C, respectively. Afterwards, cells were washed with cold (4°C) PBS three times to remove excess dextran and subjected to FCM analysis. The quantitative uptake of FITC-dextran by the cells was determined using FCM analysis. Inactive intake in each group was excluded by subtracting the fluorescence intensity at 4°C. Values are presented as fold induction (median intensity values) relative to uptake by untreated cells. Detection of reactive oxygen species (ROS) content ROS content in DCs was determined using the Reactive Oxygen Species Assay Kit (YEASEN, Shanghai, China) according to the manufacturer's instructions. ROS content in mouse sera was determined using the Mouse ROS ELISA Kit (Wuhan EIAab Science Co. Ltd, China) according to the manufacturer's instructions. Enzyme-linked immunosorbent assay (ELISA) The levels of IL-12, IL-6, IL-10, and TGF-β in mouse sera were measured using their commercial ELISA kits (R&D Systems) according to the manufacturer's instructions. RNA pull-down assay The interaction between MALAT1 and NRF2 protein was determined by RNA pull-down assay. Briefly, the DNA probe complementary to MALAT1 was synthesized and biotinylated by GenePharma Co., Ltd (Shanghai, China). RNA pull-down assay was performed using the Pierce™ Magnetic RNA-Protein Pull-Down Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. The RNA-binding protein complexes were washed and eluted and subjected to western blot analysis. RNA immunoprecipitation (RIP) RIP was conducted to verify the binding between MALAT1 and NRF2. RIP was performed using the RNA-Binding Protein Immunoprecipitation Kit (Millipore) according to the manufacturer's instructions. The cells were lysed and the cell lysis solutions were incubated with NRF2 antibody or isotype control IgG. RNA-protein complexes were immunoprecipitated with protein A agarose beads and RNA was extracted by using TRIzol (Invitrogen). qRT-PCR was performed to quantify the MALAT1. Western blot Cell lysates were prepared in protein extraction reagent (Pierce Biotechnology, IL) containing protease inhibitor (Pierce Biotechnology). Proteins were then separated by 10% SDS-PAGE and transferred to PVDF membranes (Bio-Rad, USA). After being blocked with 5% nonfat dry milk, the membrane was then incubated with the primary antibody against NRF2, HO-1, and NQO1 (all from Santa Cruz Biotechnology, USA), at 4°C overnight, and incubated with horseradish peroxidase-conjugated secondary antibodies at room temperature for 1 h. Blots were developed using an enhanced chemiluminescence kit (ECL kit, Pierce Biotechnology, IL) and band intensity was quantified with Quantity One software. GAPDH or tubulin served as the loading control. Nuclear NRF2 detection Nuclear and cytosolic proteins were extracted using the Nuclear and Cytoplasmic Protein Extraction Kit (Beyotime) according to the manufacturer's instructions. Detection for NRF2 protein expression in nuclear lysates was performed by western blot as described above. Lamin B1 served as the nuclear loading control. Animals Specific pathogen-free (SPF) ApoE knockout (ApoE −/-) mice were purchased from Changzhou Cavans Experimental Animal Co., Ltd. (Changzhou, China). All mice were kept under constant temperature and humidity with 12 h light-dark cycles, and had free access to food and water at a temperature of 25°C ± 1°C and humidity of 50%. 
The animal experiment was approved by the Ethics Committee of the Affiliated Anhui Provincial Hospital, Anhui Medical University. Animal experiments Mice were randomly divided into five groups (n = 10/each group): Control, AS, AS+PBS, AS +VECs-Exos, and AS+ox-LDL-VECs-Exos. The ApoE −/mice were fed with a high-fat diet containing 21% fat and 0.15% cholesterol for 12 weeks to establish a mouse model of AS. The mice in the control group received an ordinary diet instead. One week before the completion of AS modeling, mice in the AS+PBS, AS+VECs-Exos, and AS+ox-LDL-VECs-Exos group received an intravenous injection of either PBS (control), exosomes from mouse VECs (VECs-Exo; 1.2 μg/g), or exosomes from ox-LDL-treated mouse VECs (ox-LDL-VECs -Exos; 1.2 μg/g), respectively, twice for a week. At the end of the twelfth week, when Exos had been injected for one week, these animals were sacrificed and their serum samples were prepared for detection of MDA, ROS, IL-10, IL-12, IL-6, and TGF-β. The aortas were cut into sections for histological examination. Histology Oil red O staining was performed in the aorta to analyze vascular lipid deposition and plaque area. Hematoxylin and eosin (HE) staining in aortic arch was used to analyze the gross morphology of tissue cells. Briefly, the aorta sections were fixed in 4% buffered paraformaldehyde, embedded in paraffin, and then sectioned at 4 μm thickness. The resulting sections were prepared for HE and Oil red O staining according to standard protocols. All sections were evaluated using a light microscope (Olympus BH-2; Olympus Corporation, Japan). Statistical analysis All statistical analyses were performed using SPSS version 16.0 (SPSS, Inc., Chicago, USA). Values are presented as the mean ± standard deviation (SD) from three independent experiments. p < 0.05 was considered to indicate a statistically significant difference. The unpaired Student's t-test was used to analyze differences between the two groups. Oneway analysis of variance (ANOVA) was used to analyze differences among two or three groups. MALAT1 expression is decreased in exosomes from AS-sera and ox-LDL-HUVECs Exosomes were isolated from sera from normal and AS humans, also from HUVECs treated with PBS or ox-LDL. Western blot analysis confirmed enrichment of the exosomal surface markers TSG101 and CD63 (Figure 1(a)). Importantly, the qRT-PCR analysis showed that MALAT1 expression was significantly decreased in AS-exosomes when compared with the exosomes from normal humans (Figure 1(b)). Furthermore, we also observed a notable lower MALAT1 expression in exosomes derived from ox-LDL-treated HUVECs than that in the exosomes from PBS-treated HUVECs (Figure 1(c)). Exogenous overexpression of MALAT1 from ox-LDL-HUVECs-Exos inhibits DCs maturation The iDCs were treated with LPS to induce oxidative stress injury. Data revealed that LPS treatment significantly decreased cell endocytosis activity evidenced by lower cellular uptake of FITC-dextran in iDCs co-cultured with LPS (Figure 2(e)). Furthermore, LPS treatment notably increased expression of DCs markers CD80, CD86, and HLA-DR (Figure 2(f)). Reduction of internalization ability is an early signal of DC maturation. Thus, these data indicated that LPS promoted DCs maturation. Importantly, HUVECs-Exos treatment significantly increased cellular uptake of FITC-dextran in iDCs (Figure 2(e)) and decreased expression of DCs markers CD80, CD86, and HLA-DR (Figure 2(f)), suggesting that HUVECs-Exos attenuated the LPS-induced DCs maturation. 
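The endocytosis readout behind Figure 2(e) reduces to the normalisation described in the methods: uptake at 37 °C corrected for the 4 °C background and expressed relative to untreated cells. A minimal sketch, using invented median fluorescence intensities rather than study data, is:

```python
# Illustrative only: the fold-induction calculation behind the FITC-dextran
# uptake values (Figure 2(e)), as described in the endocytosis methods.
# Median fluorescence intensities (MFI) below are invented, not study data.

def fold_induction(mfi_37, mfi_4, ctrl_mfi_37, ctrl_mfi_4):
    """Uptake at 37 °C corrected for passive binding at 4 °C, expressed
    relative to untreated control cells."""
    specific = mfi_37 - mfi_4
    ctrl_specific = ctrl_mfi_37 - ctrl_mfi_4
    return specific / ctrl_specific

# Hypothetical MFIs for LPS-treated iDCs versus untreated iDCs
print(round(fold_induction(mfi_37=820, mfi_4=140, ctrl_mfi_37=1650, ctrl_mfi_4=150), 2))
# A value < 1 reflects the reduced endocytosis that accompanies DC maturation.
```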
Furthermore, ox-LDL-HUVECs-Exos showed weaker anti-DCs maturation effects when compared with the HUVECs-Exos group (Figure 2(e,f)). Exos are important mediators of intercellular communication by transmitting donor cell-specific proteins and RNA. Notably, consistent with the decreased MALAT1 in ox-LDL-HUVECs-Exos (Figure 1(c)), MALAT1 expression was also downregulated in the iDCs co-cultured with ox-LDL-HUVECs-Exos when compared with iDCs co-cultured with HUVECs-Exos (Figure 2(d)). These data indicated that iDCs co-cultured with ox-LDL-HUVECs-Exos absorbed less MALAT1 than the iDCs co-cultured with HUVECs-Exos. Thus, we may suggest that the mechanism underlying the ox-LDL-HUVECs-Exos-mediated weaker inhibitory effect on DCs maturation might be associated with lower MALAT1 expression. MALAT1 interacts with NRF2 and activates NRF2 signaling in DCs Next, we explored the mechanisms underlying the MALAT1-mediated activation of NRF2 signaling. Results of the RNA pull-down assay showed that NRF2 was abundantly detected in the pull-down complex of MALAT1 (Figure 4(a)). Furthermore, results of the RIP assay further confirmed the binding between MALAT1 and NRF2, as indicated by abundantly expressed MALAT1 when using the NRF2 antibody as compared to using the nonspecific antibody (IgG control) (Figure 4(b)). To verify how MALAT1 expression in DCs regulated NRF2, we overexpressed and silenced MALAT1 in DCs to examine the effect of MALAT1 expression on NRF2 signaling. Data revealed that MALAT1 overexpression significantly upregulated protein levels of NRF2, HO-1, and NQO1 (Figure 4(c)) and increased NRF2 nuclear translocation (Figure 4(d)). In contrast, MALAT1 knockdown exerted the opposite effects (Figure 4(e,f)). These findings indicated that MALAT1 upregulation in DCs activated NRF2 signaling, whereas MALAT1 downregulation in DCs inhibited NRF2 signaling. MALAT1 expression in mouse VECs-Exos is associated with AS Finally, we verified the in vivo role of mouse VECs-Exos treatment in AS progression in AS mice. As shown in Figure 5(a), the AS mice displayed obvious formation of atheromatous plaques in comparison with the control mice. Furthermore, the aorta of AS mice showed obvious atherosclerotic plaque, a large amount of porridge-like amorphous substance in the lipid pool, a loose and structurally disordered smooth muscle layer of the plaque, and inflammatory cell infiltration (Figure 5(b)). Moreover, serum levels of oxidative stress indexes including MDA content and ROS content (Figure 5(d)) and pro-inflammatory cytokines (IL-12 and IL-6) (Figure 5(e)) were significantly higher in the AS group compared with the control group. In contrast, serum levels of anti-inflammatory cytokines (IL-10 and TGF-β) were lower in the AS group than in the control group (Figure 5(e)). These data indicated that the mouse model of AS was successfully established. We also found that mouse VECs-Exos treatment alleviated AS progression, as evidenced by fewer atheromatous plaques and less inflammatory cell infiltration (Figure 5(a,b)), decreased serum levels of oxidative stress indexes (Figure 5(d)) and pro-inflammatory cytokines (IL-12 and IL-6) (Figure 5(e)), as well as increased anti-inflammatory cytokines (IL-10 and TGF-β) (Figure 5(e)).
Furthermore, compared with the AS+VECs-Exos group, the mice in the AS+ox-LDL-VECs-Exos group showed more atheromatous plaques and inflammatory cell infiltration (Figure 5(a,b)), as well as increased serum levels of oxidative stress indexes (Figure 5(d)) and pro-inflammatory cytokines (IL-12 and IL-6) (Figure 5(e)), as well as increased anti-inflammatory cytokines (IL-10 and TGF-β) (Figure 5(e)). Figure 4 legend: The interaction between MALAT1 and NRF2 protein was further validated by RIP assay. Effect of MALAT1 overexpression on the protein expression of NRF2, HO-1, and NQO1 in total cell lysates (c) as well as nuclear NRF2 in the nuclear fraction lysates (d) was evaluated by western blot. Effect of MALAT1 knockdown on the protein expression of NRF2, HO-1, and NQO1 in total cell lysates (e) as well as nuclear NRF2 in the nuclear fraction lysates (f) was evaluated by western blot. *p < 0.05 vs. IgG (b) or Vector (c), or si-Ctrl (e). As shown in Figure 5(c), MALAT1 expression was significantly decreased in the AS group compared with the control group, which was consistent with the human data (Figure 1(b)). Furthermore, serum MALAT1 expression was significantly higher in the AS+VECs-Exos group than in the AS+PBS group. Moreover, serum MALAT1 expression was decreased in the AS+ox-LDL-VECs-Exos group when compared with the AS+VECs-Exos group. Taken together, these results indicated that a decrease in MALAT1 content from mouse VECs-Exos was associated with AS progression. Discussion DCs maturation contributes to atherogenesis [3,5]. DCs have functional differences between their immature and mature status. Compared with mature DCs (mDCs), iDCs possess higher phagocytic capacity and are weaker in antigen presentation and feeble in immunostimulation [3,4]. Furthermore, it is well accepted that iDCs possess tolerogenic and anti-inflammatory properties [3]. Our previous study has demonstrated that captopril treatment inhibits DCs maturation and maintains their tolerogenic property, which is closely associated with their anti-atherosclerosis activity [3]. In this study, our results revealed that exogenous overexpression of MALAT1 from ox-LDL-HUVECs-Exos inhibited DCs maturation, suggesting a potential anti-atherogenesis effect of MALAT1. MALAT1 has been reported to be less expressed in atherosclerotic plaques [25]. Furthermore, MALAT1 knockdown promotes AS progression in MALAT1-deficient ApoE−/− mice compared with MALAT1-wild-type ApoE−/− mice [19]. These findings indicated the potential protective role of MALAT1 in AS. Several studies have shown that MALAT1 plays different roles through exosomes as a medium of transmission. For example, exosomal MALAT1 from human adipose-derived stem cells promoted ischemic wound healing [26] and traumatic brain injury recovery [27]. Exosomal MALAT1 derived from oxLDL-treated HUVECs promoted M2 macrophage polarization [28]. Delivery of MALAT1 mediated by breast cancer cell-secreted exosomes induced cell proliferation in breast cancer [21]. Our in vivo assay showed that MALAT1 expression from AS mouse sera-derived exosomes showed an opposite trend to AS progression, indicating that a decrease in MALAT1 content from mouse VECs-Exos was associated with AS progression. Thus, the above-mentioned findings support our notion that MALAT1 has a potential anti-atherogenesis effect in AS. We next investigated the underlying mechanism by which increased MALAT1 expression from ox-LDL-HUVECs inhibited DCs maturation.
As one of the master regulators of anti-oxidative responses, NRF2 plays critical roles in the regulation of activation [9], maturation [10], and immune tolerance [11] of DCs. Furthermore, NRF2 activation exerts anti-atherosclerosis effects [7] and attenuates ox-LDL-induced endothelial cell injury [8]. Our results showed that exogenous overexpression of MALAT1 from ox-LDL-HUVECs-Exos interacted with NRF2 and activated NRF2 signaling in DCs, and thereby inhibited ROS accumulation. Recent evidence indicates that ROS production promotes maturation and activation of DCs [24]. Hence, we may suggest that exogenous overexpression of MALAT1 from ox-LDL-HUVECs inhibited DCs maturation by interacting with NRF2 and activating NRF2 signaling. Although Chen et al. [29] found that MALAT1 interacted with NRF2 and inhibited NRF2 downstream gene expression, studies revealing positive regulation of NRF2 by MALAT1 have been reported. For example, Zeng et al. [23] demonstrated that MALAT1 downregulated the NRF2-negative regulator KEAP1 to activate NRF2 signaling in HUVECs. Recent data also revealed that antagonism of MALAT1 downregulated NRF2 in multiple myeloma cells [30]. Consistent with this, our results showed that MALAT1 interacted with NRF2 and activated NRF2 signaling in DCs. Evidence indicates that endothelial cell-derived microvesicles or exosomes can regulate DCs maturation in the vascular wall [31]. DCs are present in their immature forms in non-diseased arteries and become activated during atherogenesis. Some DCs cluster with T cells directly within atherosclerotic lesions, while others migrate to lymphoid organs to activate T cells [3,4]. The interaction between endothelial cell-derived microvesicles/exosomes and DCs is complicated and requires further investigation [31][32][33]. In the present study, our in vitro results showed that exogenous overexpression of MALAT1 from ox-LDL-HUVECs-Exos inhibited DCs maturation. Further assays in AS model mice demonstrated that mouse VECs-Exos treatment alleviated AS progression. In addition, a decrease in MALAT1 content in mouse VECs-Exos might be associated with mouse AS progression. However, whether the mechanism underlying the protective effect of VECs-Exos on AS was associated with MALAT1-mediated regulation of DCs maturation remains to be further studied. Conclusion In conclusion, loss of exosomal MALAT1 derived from ox-LDL-treated VECs represses the NRF2 signaling pathway, thus failing to effectively eliminate oxidative stress, which results in DCs maturation in AS. Disclosure statement No potential conflict of interest was reported by the authors.
2019-07-16T14:31:23.310Z
2019-07-29T00:00:00.000
{ "year": 2019, "sha1": "6eb0645dc24a70784a3728734476028d1682c3f9", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15384101.2019.1642068?needAccess=true", "oa_status": "BRONZE", "pdf_src": "TaylorAndFrancis", "pdf_hash": "9309439e6e5352dd9d785df43127690a7ec6c758", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269973950
pes2o/s2orc
v3-fos-license
The clinical relevance of fixation failure after pubic symphysis plating for anterior pelvic ring injuries: an observational cohort study with long-term follow-up Background Open reduction and plate fixation is a standard procedure for treating traumatic symphyseal disruptions, but has a high incidence of implant failure. Several studies have attempted to identify predictors for implant failure and discussed its impact on functional outcome presenting conflicting results. Therefore, this study aimed to identify predictors of implant failure and to investigate the impact of implant failure on pain and functional outcome. Methods In a single-center, retrospective, observational non-controlled cohort study in a level-1 trauma center from January 1, 2006, to December 31, 2017, 42 patients with a plate fixation of a traumatic symphyseal disruption aged ≥ 18 years with a minimum follow-up of 12 months were included. The following parameters were examined in terms of effect on occurrence of implant failure: age, body mass index (BMI), injury severity score (ISS), polytrauma, time to definitive treatment, postoperative weight-bearing, the occurrence of a surgical site infection, fracture severity, type of posterior injury, anterior and posterior fixation. A total of 25/42 patients consented to attend the follow- up examination, where pain was assessed using the Numerical Rating Scale and functional outcome using the Majeed Pelvic Score. Results Sixteen patients had an anterior implant failure (16/42; 37%). None of the parameters studied were predictive for implant failure. The median follow-up time was six years and 8/25 patients had implant failure. There was no difference in the Numerical Rating Scale, but the work-adjusted Majeed Pelvic Score showed a better outcome for patients with implant failure. Conclusion implant failure after symphyseal disruptions is not predictable, but appears to be clinically irrelevant. Therefore, an additional sacroiliac screw to prevent implant failure should be critically discussed and plate removal should be avoided in asymptomatic patients. The clinical relevance of fixation failure after pubic symphysis plating for anterior pelvic ring injuries: an observational cohort study with long-term follow-up The aim of the present study was to identify potential predictors of implant failure following plate fixation of traumatic symphyseal disruptions.Secondarily, the impact of implant failure on functional outcome and pain was investigated. Study design A single-center, retrospective, observational non-controlled cohort study was performed in a level-1 trauma center.All patients were consecutively enrolled and included if they were treated with a plate fixation of a traumatic symphyseal disruption between January 1, 2006, and December 31, 2017, were ≥ 18 years of age, and had a minimum follow-up of 12 months.Patients with pathological fracture, a lethal injury, acetabulum fracture, AO type A fracture, Young and Burgess lateral compression injury or posterior implant failure were excluded.Of the 42 patients identified, 37 patients could be reached by telephone, 25 patients consented to participate in the study.Patient selection is shown in Fig. 1. Outcome measures Implant failure was defined according to the criteria published by Collinge et al. 
(interval backout, lysis halo around the screw threads, breakage of plate or screws or separation between screw head and plate) [20].If implant failure occurred more than once in a patient, the implant failure was counted only once. The following parameters influencing the occurrence of implant failure were evaluated: age, body mass index (BMI), injury severity score (ISS), presence of polytrauma, time to definitive treatment, postoperative weight bearing, occurrence of a surgical site infection.Fracture severity was also analyzed.All pelvic injuries were classified according to the AO classification of 2018 and Young and Burgess classification.For analyses regarding the impact of the posterior injury, sacral fractures were compared to injuries of sacroiliac joint.If patients had a sacral fracture and a sacroiliac joint injury, they were classified as having a sacroiliac joint injuries.To assess surgical predictors of implant failure, the type of anterior fixation, plate type and type of posterior stabilization was examined. The impact of implant failure on pain was assessed using the Numerical Rating Scale and functional outcome was investigated using Majeed Pelvic Score.Since 3 patients did not have a regular job at the time of injury, additionally the relative Majeed Pelvic Score was assessed in order to compare all patients.It was defined as the percentage of the maximum score that could be achieved. Statistical analysis The data processing and statistical analysis was carried out using IBM SPSS Statistics 27® (IBM Corporation Armonk, NY, USA) and Microsoft Office Excel 2021® (Microsoft Corporation, Redmond, WA, USA).Mean ± standard deviation was given for Gaussian distributed data.For non-Gaussian distributed data, median [interquartile range (IQR) 25% ; IQR 75% ] was given.Group comparisons of nominal data were carried out using crosstabs and chi-square tests.Gaussian distributed data were analyzed using the t-test and non-Gaussian distributed data by the Wilcoxon / Mann Whitney U test.The level of statistical significance was defined at a p-value < 0.05. Results A total of 47 patients with a traumatic symphyseal disruption aged ≥ 18 years were identified.Of these, two were excluded due to non-operative treatment and three related to solely posterior implant failure.Thus, a total of 42 patients were included and analyzed. The distribution of fractures is shown in Table 2.When comparing type B and C fractures, there were no significant differences regarding implant failure in either anterior-only (p = 0.18) or anterior-posterior (p = 0.20) treated patients.When comparing anterior-posterior compression injuries (APC) II to > APC II injuries, there were no differences.There was no difference in APC II vs. > APCII, for either anterior-only (p > 0.99) or anterior-posterior (p = 0.55) treated patients. 
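The work-adjusted (relative) Majeed Pelvic Score and the nonparametric group comparisons described in the outcome and statistics sections above can be illustrated with a short sketch. The scores below are invented, and the 100-point/80-point maxima for working versus non-working patients follow the usual Majeed convention, which is an assumption rather than something restated in this excerpt.

```python
# Illustrative sketch (invented scores, not study data) of the relative
# Majeed Pelvic Score and the nonparametric comparison used in the study.
from scipy.stats import mannwhitneyu

def relative_majeed(raw_score, worked_before_injury):
    """Score as a percentage of the maximum achievable.
    The 100/80-point maxima follow the common Majeed convention (working vs.
    non-working patients); this is an assumption, not stated in the excerpt."""
    max_score = 100.0 if worked_before_injury else 80.0
    return 100.0 * raw_score / max_score

# Hypothetical cohorts: patients with and without implant failure
implant_failure = [relative_majeed(s, w) for s, w in [(92, True), (74, False), (96, True)]]
no_failure = [relative_majeed(s, w) for s, w in [(85, True), (60, True), (78, True), (55, False)]]

stat, p = mannwhitneyu(implant_failure, no_failure, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```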
Surgery-related predictors A total of 40/42 patients were treated with a single anterior plate, and 2/42 with a double plate (1/2 with implant failure). In 3/42 patients, reconstruction plates with 3.5 mm screws were used for anterior stabilization. The remaining 39/42 patients were treated with a 4.5 mm dynamic compression plate. Of the patients with a single plate, a four-hole plate was used in 36/40 cases (22/36 without and 14/36 with implant failure). The remaining four patients were treated with a five-, six-, ten- or 12-hole plate. Of these, only the patient treated with the five-hole plate had an implant failure. The plate choice did not influence the occurrence of implant failure (p > 0.99). Of the patients with implant failure, 3/16 (7.14%) required revision surgery, one each treated by double plate fixation, a longer plate with spinopelvic fixation, and single plate exchange. Pain and functional outcome The median time to follow-up was 6 years (IQR 2.5; 7. ). The Majeed Pelvic Score was 82.8 ± 18.39 for all patients, 90.13 ± 8.37 for patients with implant failure and 79.35 ± 20.91 for patients without implant failure. There was no significant difference (p = 0.177) between patients with and without implant failure. Three patients (two with implant failure) had no regular work before their pelvic injury. The relative Majeed Pelvic Score was 84.77% ± 17.86% for all patients, 95% ± 4.14% for patients with implant failure and 79.96% ± 19.85% for patients without implant failure, revealing a better outcome for patients with implant failure (p = 0.047). Analyzing the categories of the Majeed Pelvic Score by comparing the most favorable outcome to the remaining answers presented no significant differences in any category between patients with and without implant failure (p > 0.05). Table 2 Fracture distribution according to the AO and Young and Burgess classification for all patients and split regarding implant failure. Patients were further subdivided regarding the presence (+PF) or absence (-PF) of a posterior fixation. The type of posterior fixation is presented: sacroiliac screws (SIS), SIS combined with a spinopelvic fixation (TSPF). Patients treated with an iliac plate and SIS are marked by *. One case treated with an iliac plate is not included in the table. Discussion No significant associations between patient characteristics (e.g. age, BMI, ISS) or treatment-specific factors (e.g. time to surgery, post-operative weight-bearing protocol) and the occurrence of implant failure were observed. Factors such as fracture severity, additional posterior stabilization, and the specific type of posterior injury did not influence implant failure rates. The Majeed Pelvic Score was higher in the implant failure group after adjusting it for the patients' previous work status. The implant failure rate of 37% is comparable to previous reports [5][6][7][12,15,18,19]. The inability to predict implant failure was previously reported in a more heterogeneous group of pelvic ring injuries [21]. As in the study by Frietman et al., no demographic predictors of implant failure could be detected [15]. Tseng et al. reported that males suffer more often from implant failure [19]. Due to the gender inhomogeneity of the cohort presented here, with 93% male patients, this finding could neither be proven nor disproven.
Conflicting reports exist, regarding the effect of fracture severity according to the AO classification, and it is poorly documented for the Young and Burgess classification [5,6,15,20].The advantage of our study is the use of both the Young and Burgess and AO classification, particularly because of the conflicting recommendations for comparable injuries associated with the use of different classification systems.Performing a global survey yielded a predominant use of stand-alone anterior plating especially in Europe for AO type B1.1 injuries [22].In contrast, a survey in the UK revealed a favored treatment using an anterior plate with an additional SI screw for APC II injuries [10].Different recommendations may result from to a more heterogeneous injury pattern and displacement within similar classified injuries as known from lateral compression fractures [23].This hypothesis is supported by the recommendation of Gill et al. performing an individual assessment of stability and required stabilization even in similarly classified injuries [10].The fracture classification was not predictive of implant failure in the present study. While the choice of a two-vs.a four-hole plate affects the occurrence of implant failure [17], the choice of longer plates or double plating does not affect implant failure [6,15,18,19]. Besides fracture classification, the type of posterior injury may affect implant failure.Eastman et al. determined implant failure predominantly in patients suffering from sacroiliac joint injuries [7].This may be due to the underestimation of instability or micro-instability caused by these injuries, or the lack of ability to detect them on static imaging [7,18].Such instabilities could be addressed with an additional posterior fixation resulting in a reduction of implant failure [12].However, the present study as well as previous studies were unable to support these finding [5,6,15,18,19]. In addition to different classifications, different weight bearing recommendations for the same injury pattern can affect implant failure [10].The present study could not support this thesis, which can be explained by a possible incompliance of the patients with partial weight bearing which could not be excluded [7]. The impact of implant failure on functional outcome is still a matter of debate [15,17].Frietman et al. supported the view, that implant failure could be the result of healing and the return of mobility within the pelvic ring and therefore should not be considered as a complication [15].Pain levels did not differ in this study comparable to previous reports [17]. Compared to previous studies, the Majeed Pelvic Score was higher in the present study [15,18,19,24,25].However, there are differing opinions on the impact of implant failure on the functional outcome as followed: no impact [19,26], a tendency for better outcome without significance for intact implants [5,17] or implant failure [15].In the present study, the implant failure group showed a significantly better outcome adjusting the Majeed Pelvic Score to the work category. The present study was limited by the retrospective design, the predominance of male patients, and the small number of patients, which reduced the power.Functional outcome could be estimated in only 60% (25/42) of the cohort. 
In conclusion, implant failure is a common radiologic phenomenon with little or no relevance to revision indication or functional outcome [20]. In particular, screw loosening should not be overemphasized and, as previously suggested, radiologic analysis may not necessarily predict functional outcome [15,27]. Therefore, plate removal in asymptomatic patients is not recommended and the addition of a sacroiliac screw should be critically discussed. Conclusion Anterior implant failure after symphyseal disruption is common and there are currently no factors that predict the occurrence of implant failure. Of note, the group without implant failure is not superior to patients with implant failure in terms of functional outcome, challenging the general recommendation of additional sacroiliac screws to prevent implant failure and the consideration of plate removal in asymptomatic patients. … days and in 5/16 during initial hospitalization. Implant failure occurred in 4/16 patients during the first 30 days after surgery. Implant failure occurred twice in 2/16 patients. According to the criteria of Collinge et al., screw loosening occurred in 13/16 patients, screw breakage in 2/16 and plate breakage in 1/16. Table 3 Distributions of answers (n) given to the categories of the Majeed Pelvic Score for all (A), patients with implant failure (IF) and patients without implant failure (NIF). The answers are followed by the number of points in brackets assigned to the answer. Furthermore, the median time to implant failure of approximately 10 weeks is comparable to Rojas et al. (seven weeks), Eastman et al. (13 weeks) and Avilucea et al. (16 weeks), but earlier than reported by Morris et al. (one year).
2024-05-24T06:17:12.836Z
2024-05-22T00:00:00.000
{ "year": 2024, "sha1": "72449dee2956dc3ebdb5438e5dd1444cf68cd672", "oa_license": "CCBY", "oa_url": "https://pssjournal.biomedcentral.com/counter/pdf/10.1186/s13037-024-00401-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e46ffe82dfeff3f4cf01c912b7e5db9eafd2c764", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51720407
pes2o/s2orc
v3-fos-license
Is Variation in Resident-Centered Care and Quality Performance Related to Health System Factors in Veterans Health Administration Nursing Homes? The purpose of this research was to explore and compare common health system factors for 5 Community Living Centers (ie Veterans Health Administration nursing homes) with high performance on both resident-centered care and clinical quality and for 5 Community Living Centers (CLC) with low performance on both resident-centered care and quality. In particular, we were interested in “how” and “why” some Community Living Centers were able to deliver high levels of resident-centered care and high quality of care, whereas others did not demonstrate this ability. Sites were identified based on their rankings on a composite quality measure calculated from 28 Minimum Data Set version 2.0 quality indicators and a resident-centered care summary score calculated from 6 domains of the Artifacts of Culture Change Tool. Data were from fiscal years 2009-2012. We selected high- and low-performing sites on quality and resident-centered care and conducted 12 in-person site visits in 2014-2015. We used systematic content analysis to code interview transcripts for a priori and emergent health system factor domains. We then assessed variations in these domains across high and low performers using cross-site summaries and matrixes. Our final sample included 108 staff members at 10 Veterans Health Administration CLCs. Staff members included senior leaders, middle managers, and frontline employees. Of the health system factors identified, high and low performers varied in 5 domains, including leadership support, organizational culture, teamwork and communication, resident-centered care recognition and awards, and resident-centered care training. Organizations must recognize that making improvements in the factors identified in this article will require dedicated resources from leaders and support from staff throughout the organization. performance. For example, RCC could be a means of improving resident quality of life, which in turn can lead to improvements in clinical care quality via having more information about resident status or medical issues. 4 Studies in the private sector on RCC on quality outcomes such as those measured through the Minimum Data Set (MDS) or other survey deficiencies (eg, The Joint Commission citations) have found no effect or mixed effects. [4][5][6][7][8] In the Veterans Health Administration (VA), cross-sectional research from 130 Community Living Centers (CLCs) (ie, VA-owned and VA-operated nursing homes) showed a relationship between the use of RCC and a composite measure of quality based on the MDS, 9 but more recent longitudinal research (fiscal years [FY] [2009][2010][2011][2012] found no relationship. 10,11 Sullivan et al noted there were a few CLCs that were consistently high and low performers on care quality and RCC measures (ie, "high"-and "low"-performing facilities). 15 In the high-performing facilities, improvements in RCC were associated with increases in quality, whereas in the low-performing facilities, declines in RCC were associated with decreases in quality. These findings suggest that there may be system factors that distinguish between high-performing facilities (ie those that perform well on both quality and RCC) and lowperforming facilities (ie those that perform poorly on both quality and RCC). To date, research exploring clinical care quality and RCC has primarily focused on the relationship between the 2 paradigms. 
Far less is known about how and why some facilities are able to provide high levels of both RCC and care quality, whereas others are not. Furthermore, qualitative research in NHs has centered on examining factors associated with either care quality or RCC. The literature regarding factors affecting NH performance are influenced by 2 facets: (1) structural characteristics, including patient mix, [12][13][14] staffing levels, 15,16 administrator tenure, [17][18][19] and monetary resources, 20,21 and (2) organizational infrastructure, including organizational culture, [22][23][24] utilization of quality improvement/systems redesign infrastructure, 21,[25][26][27] alignment/coordination, 20,22,[28][29][30][31] teamwork and communication, [32][33][34] and supportive leadership. 25,[35][36][37][38] To address gaps in the literature, the objective of this article is to examine factors that distinguish between facilities providing high and low levels of both care quality and RCC utilizing data from a larger mixed methods study focused on care quality, RCC, and costs in VA CLCs. 10 The VA is an ideal setting to conduct this research because it is an integrated national network with a large number of CLCs. Similar to the private sector, the VA requires CLCs to monitor quality by undertaking assessments of long-stay residents (those staying over 90 days) using the Resident Assessment Instrument (RAI) MDS. The RCC paradigm was implemented in VA about 10 years ago and data on RCC have been collected using the Artifacts of Culture Change Tool to assess progress. In 2017, VA also began incorporating a new RCC measure-resident/staff engagement. 39 Methods This study used a sequential explanatory mixed methods design. 40,41 Figure 1 displays a visual model. First, we identified high and low performers on clinical care quality and RCC using previously published methods. 10,15 Next, we selected CLCs based on these data and conducted primary data collection by means of staff interviews at 12 CLCs and systematic content analysis of the transcripts. The purpose of the qualitative in-person site visits was to investigate the "how" and "why" some CLCs are able to deliver high levels of RCC and high MDS-based quality of care. After analyzing the qualitative data, we integrated the quantitative and qualitative data to identify the qualitative health system factors related to performance on Quality and RCC dimensions (high versus low on both dimensions). We received Institutional Review Board (IRB) approval from VA's Central IRB to conduct this study. Site Selection In total, 130 CLCs were identified based on their rankings on a clinical care quality composite measure calculated from 28 MDS version 2.0 quality indicators 10 and an RCC summary Note. RCC = resident-centered care; CLC = community living center. score calculated from 6 domains of the Artifacts of Culture Change Tool. We were unable to use Artifacts data from 12 CLCs due to lack of matching MDS quality indicator data. Data were from FY 2009-2012. The Supplementary Appendix contains information on how we calculated the quality and RCC measures. We assessed both the most recent score from FY2012 Quarter 3 and change in the score over an 18-month period prior to FY12 Quarter 3. We ranked the CLCs on both quality and RCC and selected 4 sites ranked highest on both quality and RCC domains, 4 sites ranked lowest on both quality and RCC domains, and 4 sites with mixed performance (eg, high on quality and low on RCC). 
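The high/low/mixed selection logic just described can be sketched in a few lines. The composite scores below are invented, the rank-sum shortcut is an assumption about how "highest on both dimensions" might be operationalized, and none of the study's actual MDS or Artifacts scoring is reproduced.

```python
# Minimal sketch of the high/low/mixed site-selection logic described above.
# Composite quality and RCC scores are invented; the study's scoring of the
# 28 MDS indicators and 6 Artifacts domains is not reproduced here.

clcs = {
    "CLC_A": {"quality": 0.91, "rcc": 0.88},
    "CLC_B": {"quality": 0.35, "rcc": 0.30},
    "CLC_C": {"quality": 0.87, "rcc": 0.25},   # mixed: high quality, low RCC
    "CLC_D": {"quality": 0.55, "rcc": 0.60},
    # ... one entry per CLC in the sampling frame
}

def rank(metric):
    """CLC names ordered best-to-worst on one dimension."""
    return sorted(clcs, key=lambda c: clcs[c][metric], reverse=True)

q_rank, r_rank = rank("quality"), rank("rcc")
# Sum of ranks across both dimensions: low totals = strong on both,
# high totals = weak on both, large rank gaps = mixed performers.
combined = {c: q_rank.index(c) + r_rank.index(c) for c in clcs}
high_on_both = sorted(combined, key=combined.get)[:4]
low_on_both = sorted(combined, key=combined.get, reverse=True)[:4]
print(high_on_both, low_on_both)
```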
Overall, this site selection process is driven by what Yin describes as "replication logic" for qualitative research involving multiple cases: Cases should be chosen such that some are expected to be similar on theoretically relevant dimensions, whereas others will be expected to differ. 42 As previously reported, due to changes in performance status between initial site selection and qualitative data collection, our final data set included 5 high-and 5 low-performing sites. 15 Because available quality and RCC data were 2 years older than when the site visits occurred, we triangulated the qualitative data we collected with site visitor impressions of performance on RCC and quality at the time of data collection. Comparison of the data with the impressions resulted in the recategorization of 2 mixed sites. In particular, 1 site became a high performer and 1 site became a low performer. We omitted 2 mixed sites from our analysis because both the qualitative data and site impressions suggested their performance status no longer met our inclusion criteria (ie, either high performance on both quality and RCC or low performance on both quality and RCC). Sample of CLC Staff We recruited a diverse group of staff members from the CLCs in our sample, including executive leaders, middle managers, and frontline CLC staff, to provide viewpoints from staff at all levels at the CLC. Prior to scheduling the site visits, VA Central Office sent a notice to the selected sites indicating support for the study. Thereafter, we contacted each CLC director (or their administrative support staff), who provided us with the names of potential respondents (eg, CLC staff members) via e-mail group lists. We then contacted staff members and set up in-person site visits. To avoid coercion, the CLC director and the employee's actual supervisor were not involved or told which employees had agreed to be interviewed. Data collection Semi-structured interview guide. We developed our interview questions based on the literature including structural characteristics and organizational infrastructure. We created an interview guide consisting of 33 semi-structured questions corresponding to these domains. We also asked questions regarding clinical care and RCC processes to understand how care was provided at the CLC. We pilottested the guide with a CLC staff member at our home site and found the respondent understood the questions and did not feel additional questions were need. We then created 3 versions of the interview guide-one each for frontline staff, middle managers, and leaders. The consolidated interview questions can be found in the Supplementary Appendix. Site visits. We collected data through 2-day in-person site visits. Teams consisted of 2 experienced health care researchers with specific knowledge of CLC care and VA structures and processes. All team members were blinded to the performance status of the CLCs they visited in an effort to minimize the selective attention that could be induced by preconceptions about conditions at "highly ranked" and "poorly ranked" CLCs. Site visit team pairings were rotated to mitigate any bias that might arise if 2 people always worked together. Individual interviews were approximately 60 minutes in length and were audio-recorded. Data analysis. Figure 2 displays our data analysis workflow for this study. Verbatim interview transcripts were the primary source for data analysis. A team of 5 analysts (J.L.S., R.L.E., D.T., M.K.A., K.G.) 
coded the transcripts in NVivo qualitative software for evidence of a priori domains based on the literature. We used an inductive approach to identify additional emergent domains relevant to the study's goals. The domains and definitions can be found in the Supplementary Appendix. Inter-rater reliability of 75% was established using a "check-coding" process where all coders independently coded the same interview transcript, and initial reliability estimates between all coders were computed. Coders then met to compare their coding, discuss areas of disagreement, and reach consensus. This process was repeated until a stable level of agreement of 75% was achieved across all coders. 43 Within-case site summaries for each site were created from the coded transcripts and were organized by a priori and emergent domains. Quotes included in the site summary represented views from multiple levels within the organization (frontline, middle manager, and senior leader). To determine data sufficiency within a CLC, the analysts assessed the level of agreement among informants within the organization (eg, if 2 or more informants described a phenomenon, then it would be considered for inclusion). We not only captured views of the majority of staff within a site but also included verbatim quotes when there was an alternate viewpoint from at least 2 staff members. All team members participating in analysis were blinded to hospitals' performance status until all single-site summaries were complete. We then created cross-site summaries based on performance categories. Guided by the analytic approach Miles and Huberman, 43 the coding team assessed whether there were similarities for sites in comparable performance categories. We took several steps to stay reflexive during this study. As mentioned above, we made use of site visitor impressions. Site visitors would write their impressions about the site and interviews at the end of each day. At the end of the site visit, visitors were also asked to rate the level of quality and RCC at the site. Team members were then expected to debrief on how the visit was going and what their impressions were. We used a semi-structured interview guide which helped mitigate some of our internal biases as the same questions were asked of the respondents. In addition, our team met regularly to discuss questions and potential biases with regard to the data collection and analysis. Finally, withincase site summaries were prepared by a team member who was not on-site for the site visit and unaware of the site teams' site impressions regarding status of quality and RCC at that site. Thus, these practices helped us be mindful of how we conducted the research. Results The purpose of this study was to identify health system factors present in CLCs that ranked highly on both quality and RCC dimensions in comparison with CLCs that ranked poorly on both care quality and RCC. Table 1 shows site characteristics and quality/RCC rankings by performance category. In particular, the rankings were used to select the sites for participation in our study. The selected CLCs were spread out throughout the United States and had average 18-month long-stay resident census that ranged from 14.8 to 86.6. Table 2 displays the distribution of our sample across site and by performance. In total, we interviewed 108 respondents distributed equally among high-performing CLCs (n = 51) and low-performing CLCs (n = 57). Frontline staff made up about half of our sample. 
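Returning briefly to the check-coding procedure described under Data analysis, the 75% threshold is a simple percent-agreement calculation across coders. The sketch below uses invented coding decisions and a strict all-coders-agree rule, which is one possible operationalization; in the study, coding was done in NVivo and disagreements were resolved by consensus.

```python
# Rough sketch of the percent-agreement calculation behind the 75 %
# "check-coding" reliability threshold. The coding decisions are invented.

def percent_agreement(segments):
    """segments: list of tuples, each tuple holding the code every coder
    assigned to one transcript segment; a segment counts as agreement only
    if all coders chose the same code (an assumed, strict rule)."""
    agree = sum(1 for segment in segments if len(set(segment)) == 1)
    return 100.0 * agree / len(segments)

# Each inner tuple = codes assigned to one transcript segment by 5 coders.
segments = [
    ("leadership", "leadership", "leadership", "leadership", "leadership"),
    ("teamwork", "teamwork", "culture", "teamwork", "teamwork"),
    ("culture", "culture", "culture", "culture", "culture"),
    ("training", "rewards", "training", "training", "training"),
]
print(f"Agreement: {percent_agreement(segments):.0f} %")  # 50 % here; recode and repeat until >= 75 %
```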
Of the health system factors identified, we found variations between high and low performance sites on 5 domains, including leadership support, organizational culture, teamwork and communication, RCC training, and RCC rewards and recognition. Table 3 summarizes the themes. We present each domain in the sections below and include illustrative quotes from high-and low-performing sites in the Supplementary Appendix. We had insufficient data (ie, less than 2 or more informants described a phenomenon at that site) to make comparisons for 5 domains, including safety and medical care protocols, quality champions, personal involvement in quality initiatives, staff awareness of RCC practices, and personal involvement in RCC initiatives. Leadership support. There were many common themes across sites regarding leadership support. In many sites, participant perceptions of senior leader support for RCC were mixed, although the extent to which they were mixed varied. Common elements of senior leader support for RCC included providing resources, providing recognition, and being available (via mechanisms such as leadership rounds or town hall meetings). Senior leader support for quality was reported to be strong in every site, although the forms of that support differed. Almost all high-performing sites mentioned middle management support, while mentions of this support were absent from all but one of the low-performing sites. Middle management support was described in terms of middle managers being available, visible, accessible, listening and recognizing needs, and being more concrete. High-performing sites mentioned updates and open communication in the context of senior leader support for quality, as well as assistance in garnering necessary staffing. In high-performing sites, senior leader support for RCC was generally characterized as including encouragement, provision of necessary resources, and communicating support through large-scale communications such as broadcast messages. To the extent reports were mixed, it was a minority view, and the reported deficiencies related to lack of financial support for RCC. In low-performing sites, the mixed view of senior leader support for RCC was not a minority standpoint and was often linked to leader turnover and lack of continuity. Others reported that senior leadership was slow but ultimately responsive. Others found senior leaders to make decisions without input or to provide inadequate recognition. The low-performing site view of senior leader support for quality was less linked to availability and resources and more illustrated by leadership focus and desire for data. For example, leaders at low-performing sites seemed less focused on prioritizing quality or RCC and often requested seeing data supporting quality/RCC before supporting additional improvement efforts. Organizational culture. One common theme across high and low performance categories was the commitment to veterans as an influential component of the culture and values in the organization, even when faced with the challenges to delivering resident-centered care. Although there was not total agreement on cultural attributes within high-performing sites, there were some recurring themes. These included the sense that staff members were empowered to speak up and make decisions, a culture of continuous quality improvement, focus on veterans' preferences being central to care decisions, and norms of honest, open communication. 
In addition, facilities in this category reported that staff members were open to change and learning about new models of care. Some CLCs had difficulty making changes to negative aspects of past cultures. The focus on continuous culture change was voiced by many. If staff felt that management was listening to their suggestions, culture change was perceived as being much easier. At the low-performing sites, a number of negative cultural attributes were reported, although not all of them extended across all of the facilities in this group. These included a focus on quality to the exclusion of other aims (notably resident-centered care), lack of flexibility, "us and them" dynamics among different shifts and/or disciplines, high turnover (of staff, leadership, or both), and a sense of laboring under negative stereotypes about CLC care and CLC staff. In some of these sites, the last issue was beginning to change, although slowly; in fact, many of the low-performing sites reported recent changes for the better. However, these were often tempered by frustration over repeated changes in leadership perceived to be disruptive to progress. Teamwork and communication. There were common themes across all sites on the topic of teamwork and communication. All sites utilized interdisciplinary team meetings to work together as a team and communicate information between disciplines. These team meetings consisted of staff from all disciplines involved in care of the resident, as well as the resident and/or family when needed. All sites emphasized the need for a nonpunitive culture in order to foster teamwork and communication. All sites employed similar modes of communication, including both formal (eg, electronic medical record, rounds) and informal (eg, one-on-one conversations). Finally, all sites, regardless of whether they described teamwork and communication as positive or negative, mentioned that there was room for improvement. High-performing sites had open communication across various levels and disciplines. Open door policies were cited that encouraged staff to speak up and bring issues to their managers, as well as encouraged staff to communicate with each other about issues that may arise. High-performing sites positively described teamwork and communication between different disciplines to provide care (eg, communication between physicians and nurses). With regard to nonpunitive culture, while all sites recognized the importance of a nonpunitive culture, high-performing sites described communication and teamwork that were friendly and respectful, whereas low-performing sites reported the need to remove barriers that caused interpersonal stress. Low-performing sites had silos or pockets of positive teamwork and communication, but it was not pervasive throughout the CLC. Variation in teamwork and communication was present at the shift, unit, and department level. High-performing sites were less likely to cite this type of variation in teamwork and communication. Finally, low-performing sites consistently cited barriers to positive teamwork and communication. RCC training. Staff members at both high- and low-performing sites were able to discuss in detail the formal and informal training they received. All sites discussed having on-the-job training or coaching from their peers. In addition, all sites mentioned having formal training or classes from either an outside in-service or from the facility. Online trainings were also discussed as a training tool that all sites utilized to educate employees.
Finally, all sites reported that they received education on RCC during orientation when starting their employment at the VA CLC. Staff members at high-performing sites were more likely to report attending training. Only 1 high-performing site had staff members who reported they did not receive training, whereas 5 staff members at low-performing sites reported no training. In addition, staff members at high-performing sites reported attending more formal or national trainings, receiving more education between interdisciplinary staff (ie, huddles, in-services, mini conferences, etc.), having paper materials to refer to, and having the hospital make it a priority (ie, mandatory training through the VA Medical Center (VAMC)). Additional training that high-performing sites reported included RCC training (ie, dementia, cultural transformation), mini conferences, facility-wide training, lunch and learn series, RCC conferences, national training, brochures, and an RCC handbook. On the contrary, staff members at low-performing sites were more likely to obtain RCC training or skill sets from their educational training or prior employment and carried those skills into their current position. Furthermore, low-performing sites were more likely to report online trainings as a method of learning about RCC. Also, 1 low-performing site reported that their medical director only got involved with trainings if there was an issue with funding or if there was a controversy around it. Additional training that low-performing sites mentioned receiving included cultural transformation training, ICARE values, and training or skill sets obtained from prior jobs. RCC rewards and recognition. There were common themes across all sites on the topic of RCC recognition. All sites made use of one or more awards and recognition mechanisms (eg, employee of the month, Integrity, Commitment, Advocacy, Respect, and Excellence (ICARE) values, Caught-in-the-Act) to acknowledge staff who exemplified RCC while providing care or services to veterans. In addition, the notion that there was not enough staff recognition for RCC was also articulated at most, if not all, sites. High-performing sites described active leadership and/or middle management support or promotion of RCC recognition, with leadership or middle managers initiating staff recognition events or activities (eg, award initiation, boasting about the unit, giving out incentives). RCC recognition at high-performing sites was visibly more formal and consistent in nature, with a wide variety of opportunities taken to recognize staff. Examples of formal recognition ranged from incentive awards such as star awards, on-the-spot awards, caught-in-the-act-of-kindness awards, and I-saw-what-you-did awards, to employee of the month, shout-outs, recognition during staff meetings, wall postings, bulletin boards, and newsletters. At low-performing sites, reporting on RCC recognition was less frequent and had limited visibility, taking the form of compliments or positive feedback to staff, performance appraisals, or the reading to staff of veterans' letters praising them. Finally, low-performing sites strongly expressed gaps in RCC recognition on multiple levels, from frontline staff to senior leadership. RCC recognition was limited or nonexistent and often portrayed as being part of the culture, or an expectation, with the acknowledgement that improvements needed to be made to address this issue.
Discussion In this article, we assessed whether health system factors varied for CLCs with high quality and RCC performance in comparison with low quality and RCC performance. We found that high performers reported more leadership support, better teamwork/communication, better fit with organizational culture, and greater use of training and provided more awards and recognition targeted at improving RCC. Our findings regarding leadership support, teamwork and communication, and organizational culture are supported by previous literature. Efforts are more successful when senior leaders recognize quality and RCC as organizational priorities and promote changes to create practices supportive of quality and RCC, create a learning environment by spreading lessons of success and failure, and demonstrate commitment by spending time on activities that support RCC and quality. 25,[35][36][37][38] Interdisciplinary teamwork is crucial in the CLC setting for care provision. [32][33][34] Each team member provides a unique perspective on patients' care needs, and including them in care planning can facilitate improved quality. Communication among team members is critical for patient information to be relayed in a timely fashion to improve care. Training and rewards and recognition are modifiable ways to appreciate staff and improve job satisfaction. 44 To improve CLC performance on both quality and resident-centered care, a site could immediately begin focusing on improving training and policies and on providing awards and recognition. Things such as having active leadership support, quality/RCC fit with organizational culture, and good teamwork/communication can take longer to build. Previous work in the private sector provides some actionable guidance for NHs looking to improve performance. As part of the Centers for Medicare & Medicaid Services' (CMS) National Nursing Home Quality Care Collaborative, a change packet was developed. 45 The change packet presents 7 strategies that are in line with many of our findings. Implementing the change packet resulted in higher levels of quality as measured by a composite measure of quality. 46 We feel that interventions such as this, focused on clinical quality (which also includes tenets of RCC), could help CLCs and private sector nursing homes target ways to focus on the factors that were influential in our study. The characteristics that distinguish our study from past research are (1) the focus on both quality and RCC simultaneously and (2) the assessment of a large number of factors at one time. Much of the literature to date focuses on factors affecting either quality or RCC. There is an inherent tension between quality and RCC, 47,48 for example, controlling a diabetic resident's sugar level while also letting that resident choose the foods they eat, which might not always be healthy options. Sites focused only on quality could find it difficult to implement RCC, or vice versa; sites solely focused on RCC may fall behind on quality expectations (eg, falls, weight). In addition, although many of the factors we report on have been identified in individual studies, our results add to the literature because we assessed these organizational factors together in one study. In terms of implications for practice within the CLC setting, a key insight of this work is that there is no one prescribed strategy to balance resident-centered care with care quality that works for every resident.
In addition, the trade-offs between actions that are resident-centered and those that optimize quality metrics can also vary by resident, and thus the approaches to staff training and staff decision making must be more nuanced than adhering to simple guidelines (such as "always make snacks available"). Despite the apparent emphasis RCC places on personalizing care, within a CLC, there were a limited number of ways by which such personalization could be achieved, suggesting that staff would benefit from a deeper understanding of both RCC goals and the variety of strategies to deploy. Our results may also be of interest to CLCs or private nursing homes trying to implement multiple priorities at once. We find that many of our differentiating factors were ones where organizational supports and resources are necessary, including leadership support, organizational culture, training, and rewards and recognition. The Organizational Transformation Model suggests there are 3 necessary drivers to successful change culture including active leadership, alignment throughout the organization, and implementation of the innovation or new processes. 49 We found there were high-performing CLCs with strong leadership support for RCC in the face of also meeting quality expectations, where organizational cultures were able to be aligned with providing more RCC and where practices for improving RCC and quality could be implemented simultaneously. However, given limited resources, implementation of a new initiative may challenge the systematic, sustained implementation of evidence-based approaches or at least limit the ability to focus on more than one priority at once (as was the case in low performers in this study). Our study has several strengths. We used secondary quality and RCC data on 130 CLCs to identify sites experiencing high and low levels of both quality and RCC. Not every health system has these types of data over time available to draw from. In addition, being able to study both quality and RCC together provides a unique perspective. Most literature focus on one domain or the other when CLCs are to some extent expected to utilize both models, although utilizing standardize care protocols and resident preferences can sometimes conflict. 48 Although VA's patient population has more men and patients often have more functional limitations and mental health issues, we feel our results are applicable to the private sector. Community nursing homes are increasingly part of vertically integrated health care systems, and as more hospitals convert to Accountable Care Organizations, more facilities may resemble VA. In addition, the VA has also become more like the community in recent years in that patient acuity has been changing to be more oriented around short-stay acute needs of patients. 50 Although the patient population would result in focusing on different types of RCC interests and activities, provision of RCC tenets as a whole would be very similar (eg, taking account of resident's preferences and providing residents with a home-like environment). Our study also has limitations. We were only able to collect data on a small number of sites due to budgetary constraints. However, the sites participating in this study differed in size (based on average patient census) and geographic region, suggesting there was some diversity in the sites we visited. The VA CLCs selected may not be representative of all CLCs. For example, our sites were not implementing the small house or greenhouse model of care. 
We were unable to use data from the time of site visits because the MDS converted from version 2.0 to version 3.0 between our data pull and site visits, and the quality indicators also changed. In our previous work, we have found that MDS version 2.0 data are highly correlated year to year. 51 Given budgetary constraints, we were unable to collect additional types of data (eg, observations, statements of practice), which may have provided more insight into our study question. While it is possible that research bias was present even though site visitors were blinded to both quality and RCC status of a site, we did take measures to remain reflexive and mitigate internal bias (eg, semi-structured interview guides, site visitor impressions). Finally, our analyses were conducted at the site level and tried to incorporate viewpoints from staff at all levels; however, we did not do a specific discipline-by-discipline comparison of viewpoints because there were not always enough respondents in each category, and we see this as an area for future research. In summary, our findings suggest there are some distinguishing characteristics between sites high in both quality and RCC and sites low in both quality and RCC. Organizations must recognize this will require dedicated resources from leaders and support from staff throughout the organization. More research is needed on how to allocate limited resources to most efficiently improve CLC quality given multiple priorities. To fully integrate both quality and RCC, adapting current quantitative measures of quality to incorporate RCC may be necessary. In addition, integrating resident perceptions of the extent to which RCC is present and resident satisfaction may be especially useful for residents and caregivers making decisions about CLC placement. More research is necessary to understand the most practical ways to incorporate these data and whether such a measure would distinguish high and low performers.
2018-08-06T13:39:53.994Z
2018-07-26T00:00:00.000
{ "year": 2018, "sha1": "85605005efe657fff97d82a88eda02a0487681f1", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0046958018787031", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85605005efe657fff97d82a88eda02a0487681f1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
236953284
pes2o/s2orc
v3-fos-license
Construction of a circRNA-Related Prognostic Risk Score Model for Predicting the Immune Landscape of Lung Adenocarcinoma The purpose of this study was to construct a circular RNA (circRNA)-related competing endogenous RNA (ceRNA) regulatory network and risk score model for lung adenocarcinoma (LUAD). The relationship of the risk score to immune landscape and sensitivity to chemotherapy and targeted therapy of LUAD was assessed. We downloaded mRNA and miRNA expression data, along with clinical information, from The Cancer Genome Atlas (TCGA) program, and circRNA expression data from the Gene Expression Omnibus (GEO) database and identified differently expressed circRNA (DEcircRNA), miRNA (DEmiRNA), and mRNA (DEmRNA) using R software. We then constructed the circRNA-related network using bioinformatics method. The risk score model was established by LASSO Cox regression analysis based on 10 hub genes. In addition, the risk score model was an independent predictor for overall survival (OS) in both the TCGA and CPTAC datasets. Patients in the high-risk group had shorter OS and disease-free survival (DFS) than those in the low-risk group and were more sensitive to chemotherapy and targeted therapy. The types of tumor-infiltrating immune cells were different in the high- and low-risk groups. Our data revealed that the circRNA-related risk score model is closely associated with the level of immune cell infiltration in the tumor and the effects of adjuvant treatment. This network may be useful in designing personalized treatments for LUAD patients. INTRODUCTION Worldwide, lung cancer is a leading cause of cancer-related deaths, and approximately half of cancers are lung cancers (Imielinski et al., 2012;Bray et al., 2018). Since most lung cancer patients are diagnosed at an advanced stage, the 5-year survival rate is only about 18%, even if diagnosis and treatment were improved . Therefore, exploring the molecular mechanism of lung adenocarcinoma (LUAD) and establishing an effective prognostic model for this cancer are critical in the formulation of effective individualized treatment regimens. Circular RNA (circRNA) derived from gene intron or exon region are a special type of non-coding RNA. They have a closed circular structure and no poly-A tail. Therefore, compared with linear RNA, circRNAs have a more stable structure and are not easily hydrolyzed by exonuclease or RNase (Wang et al., 2016). The competing endogenous RNA (ceRNA) hypothesis holds that circRNAs can compete with mRNA, the downstream targets of microRNAs (miRNAs), to bind miRNA response elements and, in turn, affect mRNA expression levels, thus forming a complex posttranscriptional regulatory mechanism (Salmena et al., 2011). To explore the potential function and mechanism of circRNA in LUAD, we established the circRNArelated ceRNA regulatory network. Based on the identification of downstream mRNAs, we then generated a prognostic risk score model. Previous studies demonstrated that circRNA participates in the regulation of immune cell infiltration in the tumors through the ceRNA mechanism (Song et al., 2020). Therefore, we also explored the relationship between the risk score and the level of immune cell infiltration and assessed the relationship between the risk score and the immunosuppressive molecules. Currently, adjuvant therapy planning after tumor resection is mainly designed according to TNM stage (Amin et al., 2017). Due to the tumor heterogeneity, adjuvant treatment plans based only on TNM stage have certain limitations. 
Therefore, we predicted the sensitivity of LUAD patients to chemotherapy and targeted drugs according to the risk score. In this study, we first constructed a circRNA-related ceRNA network through bioinformatics analysis, then constructed a prognostic risk score model. Finally, we explored the relationship between risk score and the level of infiltrated immune cells in LUAD, genes related to immune checkpoint inhibitors (ICIs), and sensitivity of chemotherapy and targeted therapy. Data Collection and Preprocessing Two circRNA expression datasets GSE101684 and GSE112214 were obtained from GEO database. 1 The normalizeBetweenArrays function in the "Limma" 2 package in R software was used to normalize the expression data of circRNA, and the batch effect was corrected by using ComBat function in "sva" package in R software after merging the two datasets (Leek et al., 2012). Linear fitting was performed on the data by using lmFit function. Finally, the mean expression value of 3,468 circRNAs in LUAD tissues and paracancerous tissues were analyzed by using empirical eBayes in the "Limma" package to determine the differentially expressed circRNAs (DEcircRNAs) based on a screening criteria of false-discovery rate (FDR) <0.05 and | log2 fold change(FC)| >1. However, we did not consider the paired nature of the circRNA data when we analyzed the differentially expressed genes. The "pheatmap" 3 package was used to visualize the DEcircRNAs, whose expression value had been normalized. Clinical information of 522 LUAD patients and the expression data of miRNA (513 tumor and 46 paracancerous samples) and mRNA(513 tumor and 59 paracancerous samples) were acquired from The Cancer Genome Atlas (TCGA). 4 Fifty LUAD patients were excluded from this research, because of unknown age (10 patients), no or less than 30 days of survival time (23 patients), no tumor stage (eight patients), and no mRNA expression data (nine patients). Finally, 472 LUAD patients with complete clinical information were included in our study. Clinical Proteomic Tumor Analysis Consortium (CPTAC) 5 datasets containing clinical information and RNA sequencing data of 102 LUAD patients were obtained for external validation of the risk score model. Low-expressing mRNAs with an average read counts of <5 and low expressing miRNAs with an average read counts of <1 were filtered out. The 17,143 mRNAs and 817 miRNAs meeting the above requirements were included in this analysis. For the raw read counts of mRNA and miRNA, the calcNormFactors function in the "edgeR" package (Robinson et al., 2010) in R software was used to calculate the normalization factors in each sample to normalize the gene expression data. The exactTest function was used to identify the differentially expressed genes based on the screening criteria of FDR <0.05 and |log2 fold change (FC)| >1. For miRNA and mRNA correlation analyses, we transform the read count matrix of miRNA and mRNA into a matrix of transcripts per million (TPM) values. Constructing the ceRNA Network The target DEmRNAs of the DEmiRNAs were predicted using the miRTarBase and TargetScan databases (Hsu et al., 2011;Agarwal et al., 2015). To improve the reliability, the coexpression relationship of the DEmiRNA and DEmRNA from the DEmiRNA/DEmRNA pairs predicted by two database were further analyzed by Spearman's correlation analysis screened according to a criteria of the Spearman's correlation coefficient (ρ) < -0.2, FDR <0.05, and the standard deviation (sd) >0.5. 
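The preprocessing described above was carried out in R with the limma, sva, and edgeR packages. Purely as an illustrative sketch, and not a reproduction of the authors' pipeline, the Python snippet below shows the two generic steps that can be read off the text: conversion of raw read counts to TPM, and the threshold-based screens (FDR < 0.05 and |log2FC| > 1 for differential expression; Spearman ρ < -0.2 for candidate miRNA/mRNA pairs). The gene-length column, table layouts, and column names are assumptions introduced only for this example.

```python
import pandas as pd
from scipy.stats import spearmanr

def counts_to_tpm(counts: pd.DataFrame, gene_length_kb: pd.Series) -> pd.DataFrame:
    """Convert a genes x samples raw-count matrix to TPM.
    gene_length_kb: effective gene length in kilobases, indexed like `counts` (assumed)."""
    rpk = counts.div(gene_length_kb, axis=0)      # reads per kilobase
    per_sample = rpk.sum(axis=0) / 1e6            # per-sample scaling factor
    return rpk.div(per_sample, axis=1)

def de_filter(de_table: pd.DataFrame, fdr_cut: float = 0.05, lfc_cut: float = 1.0) -> pd.DataFrame:
    """Keep genes meeting the paper's criteria: FDR < 0.05 and |log2FC| > 1."""
    keep = (de_table["FDR"] < fdr_cut) & (de_table["log2FC"].abs() > lfc_cut)
    return de_table[keep]

def negative_mirna_mrna_pairs(mirna_tpm: pd.DataFrame, mrna_tpm: pd.DataFrame,
                              rho_cut: float = -0.2) -> pd.DataFrame:
    """Screen candidate miRNA/mRNA pairs with Spearman rho < -0.2.
    (The paper's additional FDR and sd > 0.5 filters are omitted in this sketch.)"""
    rows = []
    for mi in mirna_tpm.index:
        for gene in mrna_tpm.index:
            rho, p = spearmanr(mirna_tpm.loc[mi], mrna_tpm.loc[gene])
            if rho < rho_cut:
                rows.append({"miRNA": mi, "mRNA": gene, "rho": rho, "p": p})
    return pd.DataFrame(rows)
```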
We named these gene pairs NC-DEmiRNAs/DEmRNAs pairs. The target miRNAs of DEcircRNA were predicted using the circBank database, 6 then we took the intersection of these targeted miRNA and DEmiRNAs from the NC-DEmiRNAs/DEmRNAs pairs. The expression patterns between circRNA and miRNA of a circRNA/miRNA pairs must be opposite, that is, if a circRNA expression is upregulated, the corresponding miRNA must be downregulated, and vice versa. According to the above result, we utilized Cytoscape (version 3.7.2) to construct a circRNA/miRNA/mRNA network. Functional Enrichment Analysis We performed Gene Ontology (GO) function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses on these 122 DEmRNAs in this network to evaluate their enrichment for biological processes (BP), molecular function (MF), and cellular component (CC) and to annotate their signal pathways. The "clusterProfiler" 7 package was used to perform GO and KEGG analysis based on the screening criteria of adjusted p (q-value) <0.05. Protein-Protein Interaction Network Using the Search Tool for the Retrieval of Interacting Genes (STRING) database, 8 the interaction of these DEmRNAs from the circRNA/miRNA/mRNA networks were explored. Interactions among proteins with a comprehensive score >0.7 were thought to be statistically significant. Then, we established a proteinprotein interaction (PPI) network for these DEmRNAs using STRING and visualized it with Cytoscape. The CytoHubba application was used to extract hub genes from the PPI network according to the degree method. Survival Prediction Model of Hub Genes Using univariate Cox regression analysis, we explored the relationship between hub gene expression levels and overall survival (OS) in LUAD patients. A prognostic signature was constructed using the least absolute shrinkage and selection operator (LASSO) Cox regression analysis for 10 prognosticrelated hub genes and the coefficient of each hub gene was calculated in the TCGA cohort. The optimal penalty parameter that was calculated by 10-fold cross validation was used to filter out signatures. Risk score = sum of coefficients * TPM value of hub genes. The formula was used to calculate a risk score for each LUAD patient in the TCGA cohort and CPTAC cohort, and patients were divided into low-and high-risk groups based on the median of the risk score from TCGA cohort. The "survival" package 9 was utilized to carry out Kaplan-Meier (K-M) survival analysis for the two groups. The receiver operating characteristic (ROC) curve was generated using the "survivalROC" package. 10 Exploration of Immune-Infiltrating Cells To explore the relationship between the level of immune cells and risk score, we calculated the immune cell status of each tumor sample in the LUAD dataset from the TCGA database using seven currently accepted methods [including XCELL (Aran, 2020), TIMER , MCPCOUNTER (Dienstmann et al., 2019), QUANTISEQ (Plattner et al., 2020), EPIC (Racle et al., 2017), CIBERSORT (Chen B. et al., 2018), and CIBERSORT-ABS (Tamminga et al., 2020)]. Wilcoxon signed-rank test was performed to analyze immune cell differences between highand low-risk groups as calculated by these seven methods. The correlation between the level of immune cells in tumors and risk score was analyzed using Spearman's correlation analysis; the results are shown as a lollipop chart. The ggplot2 package 11 was used to this procedure. p-Value <0.05 was considered statistically significant. 
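The survival analysis in the "Survival Prediction Model of Hub Genes" subsection above was performed with the R "survival" and "survivalROC" packages. As a rough Python illustration of only the median-split and Kaplan-Meier/log-rank comparison step, the sketch below uses the lifelines library; the LASSO Cox fit that yields the gene coefficients is not shown, and the column names (risk_score, os_months, os_event) are placeholders rather than fields from TCGA.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_risk_groups(df: pd.DataFrame,
                        risk_col: str = "risk_score",
                        time_col: str = "os_months",
                        event_col: str = "os_event"):
    """Split patients at the median risk score and compare survival between groups."""
    cutoff = df[risk_col].median()
    high = df[df[risk_col] > cutoff]
    low = df[df[risk_col] <= cutoff]

    kmf = KaplanMeierFitter()
    for label, grp in [("high risk", high), ("low risk", low)]:
        kmf.fit(grp[time_col], grp[event_col], label=label)
        # kmf.plot_survival_function()  # uncomment to draw the K-M curves

    test = logrank_test(high[time_col], low[time_col],
                        event_observed_A=high[event_col],
                        event_observed_B=low[event_col])
    return cutoff, test.p_value
```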
Investigation of the Relationship Between ICI-Related Genes and Risk Score To analyze the relationship between ICI-related immunosuppressor genes and risk score, we used the "ggstatsplot" 12 package to visualize the above results. Evaluation of the Significance of Risk Score Model in Chemotherapy and Targeted Therapy To evaluate the clinical significance of the risk score for LUAD chemotherapy and targeted therapy, we converted the TCGA gene expression matrix into a half inhibitory centration (IC 50 ) data matrix of the corresponding antitumor drugs with the "pRRophetic" package (Geeleher et al., 2014), then analyzed the IC 50 difference between the high-and low-risk groups by the Wilcoxon signed-rank test. Results were depicted by bar chart. Functional Enrichment Analysis To explore the biological functions of the identified circRNAs, we carried out GO function and KEGG signaling pathway enrichment analyses for the 122 downstream DEmRNAs regulated by circRNAs. The top 10 GO terms of BP, CC, and MF are shown in Figure 2A. The BP terms were mainly enriched in "positive regulation of cell cycle" involved in cell cycle regulation, CCs were mainly enriched in "chromosomal region" and "chromosome and centromeric region, " and MFs were mainly enriched in "protein C-terminus binding, " "SMAD binding, " and "histone deacetylase binding." Finally, in the KEGG signaling pathway, "MicroRNAs in cancer" was the common signaling pathways for these genes. The all KEGG pathway enrichment results are shown in Figure 2B. Construction of the PPI Network Using the STRING online tool, we established a PPI network for the 122 DEmRNAs to further examine their interactions ( Figure 2C). This PPI network contained 54 nodes and 117 edges after removing isolated nodes. According to the degree method, the top 10 hub genes (UBE2C, BIRC5, TOP2A, RRM2, CDCA8, HJURP, OIP5, RACGAP1, GINS2, and CDT1) in the PPI network were extracted by the cytoHubba plugin ( Figure 2D). Construction and Validation of Risk Scoring Model We next explored the relationship between 10 hub genes and OS by univariate Cox regression analysis; the 10 hub genes were identified as those with p-value <0.05 ( Figure 3A). We then analyzed these hub genes using LASSO Cox regression analysis (Figures 3B,C). According to the minimum standard, three hub genes (HJURP, RRM2, and OIP5) were selected to build a risk score based on the risk coefficient and TPM value of genes. The risk score was calculated as follows: risk score = (0.0599 * HJURP expression) + (0.1113 * RRM2 expression) + (0.0652 * OIP5 expression). K-M survival analysis also indicated that highly expressed OIP5, HJURP, and RRM2 had a lower OS in the TCGA cohort (Supplementary Figure 1). To determine whether the risk score model was an independent risk predictor for OS, we analyzed age, gender, TNM stage, and risk score by univariate and multivariate Cox regression analyses in the TCGA cohort. In the univariate Cox regression analysis model, there was a significant correlation between risk score and OS ( Figure 3D). Moreover, the risk score was an independent risk predictor for OS after adjusting for other confounding factors in the multivariate Cox regression analysis model ( Figure 3E). The heatmap and survival status plots showed that the risk score was closely related to the expression levels of the three genes, and the number of deaths in the high-risk group was significantly higher than that in the low-risk group (Figures 3F-H). 
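Given the coefficients reported above, the risk score of an individual patient follows directly from the TPM values of the three hub genes. The short sketch below simply restates the published formula; the median cutoff used to assign high- and low-risk groups comes from the TCGA cohort, and the expression matrix named here is a placeholder rather than the authors' actual object.

```python
import pandas as pd

# LASSO Cox coefficients reported for the TCGA cohort
COEFFICIENTS = {"HJURP": 0.0599, "RRM2": 0.1113, "OIP5": 0.0652}

def risk_score(tpm: pd.DataFrame) -> pd.Series:
    """tpm: genes x samples TPM matrix containing the three hub genes.
    Returns one risk score per sample."""
    return sum(coef * tpm.loc[gene] for gene, coef in COEFFICIENTS.items())

# Example use (tcga_tpm is an assumed name for the TCGA expression matrix):
# scores = risk_score(tcga_tpm)
# high_risk = scores > scores.median()   # median of the TCGA cohort, as in the paper
```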
To evaluate the applicability of the risk score model constructed from the TCGA dataset, the cases from the CPTAC program were also divided into low-and high-risk groups by the risk score median from the TCGA cohort. As with the TCGA results, the risk score model was an independent risk predictor in the CPTAC cohort (Figures 3I,J). And, the expression levels of these three genes and distribution of survival state in the CPTAC cohort were similar to those in the TCGA cohort in the high-and low-risk groups (Figures 3K-M). Kaplan-Meier survival analysis indicated that the OS (Figures 4A,C) and DFS (Figure 4B) in the high-risk score group were lower than those in the low-risk score group in both TCGA cohort and CPTAC cohort. Finally, we established a ROC curve of a risk score to examine its prediction power for OS. The area under the curve (AUC) of the 3-year survival data was 0.660, showing moderate accuracy and specificity in the TCGA cohort ( Figure 4D). The AUC of risk score was 0.784 at 3 years in the CPTAC cohort ( Figure 4E). Regulatory Networks for Risk Score Models To visualized the upstream genes that regulate the risk score model, we extracted ceRNA subnetwork from the total ceRNA network. This subnetwork contained three prognostic hub genes, three miRNAs (miR-101-3p, miR-218-5p, and miR-6720-3p), and three circRNAs (hsa_circ_0077607, hsa_circ_0005699, and hsa_circ_0092283) (Figure 5A). In addition, the expression level of the three circRNAs and three hub genes were upregulated in LUAD samples, while the three miRNAs were downregulated ( Figure 5B). Moreover, there was a negative correlation (r < −0.2) between three hub genes and three miRNA expressions ( Figure 5C). Immune Landscapes Affected by Risk Score Model To evaluate the associations between risk score and responses of LUAD patient to immunotherapy, we analyzed whether the risk score was associated with the types of immune cells present and ICI-related genes. Our results showed that the risk score was negatively correlated with mast cell activated, T cell CD4 + memory resting, myeloid dendritic cell resting, monocyte, T-cell regulatory (Tregs), myeloid dendritic cell activated, and NK cell activated, macrophage M2, whereas they were positively correlated with T cell CD4 + memory activated, mast cell resting, macrophage M0, macrophage M1, T cell follicular helper, and CD8 + T cells (Figure 6A and Supplementary Figure 2A). To clarify the correlation between risk score and the immune cells, we performed Spearman's correlation analysis, and the results were shown as a lollipop chart ( Figure 6B and Supplementary Figure 2B). In the same time, the result also revealed that the expression levels of genes related to ICI-related genes, such as CD274 (PD-L1), PDCD1 (PD-1), LAG3, CTLA4, and HAVCR2 were higher in the high-risk group than low-risk group (Figure 6C), but CTLA4 and HAVCR2 were not statistically different (Supplementary Figure 2C). Analysis of the Relationship Between the Effectiveness of Chemotherapy and Risk Score Model In addition to immunotherapy, we analyzed the relationship between risk score model and the effectiveness of chemotherapy and targeted therapy in the LUAD cohort. Our result revealed that LUAD patients in the high-risk score group were more sensitive to chemotherapies such as cisplatin, docetaxel, gemcitabine, and paclitaxel and targeted drugs such as erlotinib and gefitinib. This suggests that the risk score model is a potential predictor of sensitivity to chemotherapy and targeted therapy (Figure 7). 
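The immune-infiltration estimates summarized above were generated in R with seven deconvolution tools and plotted as a lollipop chart in ggplot2. Only as an illustration of the Spearman correlation step between risk score and estimated cell fractions, the Python sketch below correlates a risk-score vector with a samples-by-cell-type matrix; the variable names are placeholders and no multiple-testing correction is applied.

```python
import pandas as pd
from scipy.stats import spearmanr

def immune_correlations(risk: pd.Series, cell_fractions: pd.DataFrame) -> pd.DataFrame:
    """Correlate the risk score with each estimated immune cell fraction.
    cell_fractions: samples x cell-type matrix (e.g., deconvolution output)
    sharing the same sample index as `risk`."""
    rows = []
    for cell_type in cell_fractions.columns:
        rho, p = spearmanr(risk, cell_fractions[cell_type])
        rows.append({"cell_type": cell_type, "rho": rho, "p": p})
    return pd.DataFrame(rows).sort_values("rho")
```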
DISCUSSION Numerous studies have shown that the circRNA-related ceRNA mechanism plays a critical role in tumor function. The ceRNA mechanism hypothesis states that some non-coding RNAs, such as circRNAs and lncRNAs, share miRNA response elements with mRNAs and therefore compete with miRNAs to regulate the expression of mRNA indirectly, forming a complicated posttranscriptional regulatory network (Salmena et al., 2011). An increasing amount of evidence has shown that circRNAs are involved in several physiological and pathological processes of tumor development and progression (Song and Fu, 2019; Wang et al., 2019; Zhang et al., 2019a,b,c; Bai et al., 2020; Zhang N. et al., 2020; Zhang S. J. et al., 2020). CircRNAs have also been shown to be involved in resistance to immunotherapy, targeted therapy, and chemotherapy (Zhang et al., 2019b; Wen et al., 2020; Li et al., 2021). In this study, we identified abnormal gene expression in LUAD with data from the GEO database. According to the conjoint analysis of two databases, we then constructed a circRNA-related ceRNA regulatory network. Next, we constructed a risk score model using three mRNAs in this network and demonstrated that it is an independent risk factor for the prognosis of LUAD. Because circRNAs are related to resistance to adjuvant therapy drugs, we also analyzed the relationship between the risk score model and the status of tumor-infiltrating immune cells and explored its application value in immunotherapy, chemotherapy, and targeted therapy. In the prognostic circRNA/miRNA/hub gene subnetwork, three circRNAs (hsa_circ_0005699, hsa_circ_0092283, hsa_circ_0077607) acted as "sponges" to adsorb three miRNAs (hsa-miR-101-3p, hsa-miR-218-5p, and hsa-miR-6720-3p), thus indirectly regulating the expression level of three mRNAs (HJURP, OIP5, and RRM2) by sequestering these target miRNAs. A growing body of research has demonstrated that circRNA expression is dysregulated in lung cancer and may be related to lung cancer progression and prognosis. For example, compared with paracancerous tissue, the expression of circFGFR1 in lung cancer tissues was increased, and patients with higher circFGFR1 had a worse prognosis (Zhang et al., 2019b). Similarly, circTP63 and circular RNA100146 are highly expressed in NSCLC cells. Knockdowns of these circRNAs significantly inhibited tumor cell proliferation and invasion and promoted apoptosis. Further studies revealed that circTP63 and circular RNA100146 acted as "sponges" for miR-873-3p and miR-361-3p/miR-615-5p, respectively, to suppress the expression of these miRNAs, increase FOXM1 and SF3 levels, and facilitate the progression of NSCLC (Cheng Z. et al., 2019). However, the exact mechanism of the circRNA-mediated ceRNA regulation network in LUAD is still unknown. (FIGURE 7 | The risk score model serves as a potential predictor of sensitivity to chemotherapy and targeted therapy, as high-risk scores are associated with lower IC 50 of chemotherapy and targeted therapy drugs such as cisplatin, docetaxel, gemcitabine, paclitaxel, erlotinib, and gefitinib; ***p < 0.001.) The roles of the three circRNAs in our ceRNA subnetwork have not been reported yet. Therefore, the roles of these circRNAs need to be further confirmed in future experiments. In the subnetwork, three miRNAs were identified, of which miR-101-3p and miR-218-5p had been previously shown to act as tumor suppressors in lung cancer.
MiR-101-3p can target downstream genes to inhibit cell invasion, viability, and migration in lung cancers (Hou et al., 2017). The expression levels of miR-218 (miR-218-5p) are decreased in NSCLC tissues, and overexpression of this miRNA was shown to suppress the proliferation of NSCLC cells by regulating CDK6 (Shi et al., 2017). However, the function of miR-6720-3p in tumors has not been studied. Therefore, we can speculate that these miRNAs may play critical roles in the progression of lung cancer. According to the results of LASSO Cox regression analysis for the 10 identified hub genes, OIP5, HJURP, and RRM2 were selected to establish a ceRNA subnetwork. LUAD patients with high-risk scores tended to have shorter OS and DFS. In addition, the risk score was an independent risk predictor for OS after correction for age, gender, and TNM stage. Previous work demonstrated that the expression level of OIP5 was elevated in NSCLC and esophageal cancer tissues, and silencing of OIP5 could suppress tumor cell growth (Koinuma et al., 2012). Moreover, the expression level of OIP5 was closely related to the prognosis of NSCLC and esophageal cancer and was an independent prognostic factor for LUAD (Koinuma et al., 2012). In addition to lung cancer, the expression level of OIP5 is also increased in nasopharyngeal carcinoma, and its knockout inhibits the proliferation, migration, and invasion of tumor cells. HJURP expression is increased in NSCLC tissues. HJURP knockdown suppressed the migration and invasion of NSCLC cells via inhibition of the activation of Wnt/β-catenin signaling (Wei et al., 2019). Similarly, ectopic expression of HJURP can promote proliferation, migration, and invasion of other tumor cells (Chen T. et al., 2018, 2019; Kang et al., 2020). Furthermore, high expression of HJURP is associated with poor prognosis in patients with colorectal and ovarian cancer and is an independent prognostic biomarker for those cancers (Li et al., 2018; Kang et al., 2020). Many studies have shown that RRM2 plays an important role in tumorigenesis and tumor progression. RRM2 overexpression, for example, promoted the gastric cancer invasion capacity, while its silencing inhibited the proliferation, invasion, and migration of lung cancer cells and other malignant phenotypes (Morikawa et al., 2010; Yang et al., 2019; Jiang et al., 2021). Moreover, the expression levels of RRM2 in lung cancer tissues are also closely related to the prognosis of patients and the level of tumor-infiltrating CD8 + T cells (Jiang et al., 2021). Gene Ontology function and KEGG signaling pathway enrichment analyses for the 122 genes in the ceRNA network provided insight into the pathogenic mechanism of LUAD. The most enriched BP term was "positive regulation of cell cycle"; tumor tissue is known to contain a higher proportion of dividing cells than normal tissue. KEGG signaling pathway enrichment analysis indicated that "MicroRNAs in cancer" and "Cellular senescence" were significantly enriched. The relationship between miRNAs and tumors has been extensively studied (Hou et al., 2017; Shi et al., 2017; Cheng Z. et al., 2019; Zhang et al., 2019b; Li et al., 2020), and we also found a close relationship between miRNAs and LUAD. More importantly, in the ceRNA network constructed in this paper, all 122 identified DEmRNAs were regulated by the upstream miRNAs. These results confirmed the reliability of our ceRNA network.
To construct a more effective prognostic model for LUAD patients, we carried out Cox regression analyses for age, sex, TNM stage, and risk score. The multivariate Cox regression analysis results showed that the risk score and TNM stage were independent predictors of OS in both the TCGA and CPTAC cohorts. Moreover, the risk score exhibited good prediction power for OS. Beyond predicting OS, we also found that the risk score generated by this regulatory network is closely related to the state of tumor-infiltrating cells and the response to immunotherapy. It has been shown that patients with more infiltrating CD8 + T cells in tumor tissues are more sensitive to pembrolizumab (Garon et al., 2019). In this study, we evaluated the status of LUAD tumor-infiltrating immune cells in the TCGA cohort by seven common methods. These methods each have their own advantages, disadvantages, and complexity, and few studies have compared these algorithms. Through integration analysis, the results showed that in the high-risk score group, the level of CD8 + T-cell infiltration was higher than that in the low-risk group. This implied that the high-risk group may be more sensitive to ICIs. In addition, our analysis results also showed that the expression levels of ICI-related immunosuppressive genes, especially PDCD1 (PD-1) and CD274 (PD-L1), were significantly higher in the high-risk group than in the low-risk group. These results revealed that the risk score model can accurately predict the therapy response to ICIs. This model can not only predict the response of patients to immunotherapy but also effectively predict the response of patients to chemotherapy and targeted therapy. Compared with the low-risk group, the IC 50 values for cisplatin, docetaxel, gemcitabine, paclitaxel, erlotinib, and gefitinib in the high-risk group were lower. This means that patients in the high-risk group are more sensitive to these drugs. This risk score model is constructed using three mRNAs, and these mRNAs are indirectly regulated by three circRNAs. Therefore, these circRNAs can be considered to affect the immune landscape and the patient's response to chemotherapy and targeted therapy by indirectly regulating the transcripts that constitute the risk score. There were some limitations in this study that should be considered. First, the circRNA-related ceRNA regulatory network was established based on databases and bioinformatics algorithms. These predictions need to be validated with experimental results. Second, due to the small sample size of the GEO datasets and the lack of clinical information, we were unable to assess the relationship between circRNA and survival. Lastly, as the circRNA expression data were acquired from the GEO database, we could not combine the circRNA results with miRNA and mRNA results from TCGA for circRNA/miRNA coexpression analysis of ceRNA correlation and connectivity. In summary, the risk score calculated from the circRNA regulatory network can predict the prognosis of patients with LUAD and might be helpful in distinguishing patients who could benefit from adjuvant therapy. However, the conclusions in our study were inferred through bioinformatics analysis and need to be confirmed by further experiments. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS HL performed statistical analyses, analyzed the data, and wrote the manuscript. JW collected the literature and analyzed the data. LZ designed the study and reviewed the manuscript. All authors approved it for publication.
2021-08-09T13:17:34.788Z
2021-08-09T00:00:00.000
{ "year": 2021, "sha1": "b9f111b20f34435fce200f3ba055fc01011b44d7", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.668311/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9f111b20f34435fce200f3ba055fc01011b44d7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237382623
pes2o/s2orc
v3-fos-license
Chemical hazard and healthcare workers: findings from a tertiary health institution in south-western Nigeria Health care workers are prone to various forms of hazard, including chemical hazards, in the course of discharging their duties. The use of chemicals has increased in healthcare settings, particularly tertiary health care facilities, due to advances in diagnostic, curative, and preventive services. The effects of chemical hazard exposure may range from mild to toxic, depending on the job specifications and the nature of services rendered in the various units of the hospital. Exposure could also have acute effects or chronic effects from long-term exposure. Despite the high rate of chemical use among healthcare workers, chemical hazard has generated the least attention in terms of research compared with biological and physical hazards. Common examples of chemical hazards with which healthcare workers routinely have contact include medications like anaesthetic agents, antineoplastic agents, aerosolised medications, and radioactive substances. Other common toxic chemicals used by various cadres of healthcare workers include mercury, methyl methacrylate, xylene, and other organic solvents like formaldehyde; building maintenance materials, such as asbestos; cleaning and sterilizing compounds, such as ethylene oxide, sodium hypochlorite (bleach), glutaraldehyde, and phenol; and gloves used to prevent blood exposure, such as latex. 2,4 Prevention against hazard exposure at the healthcare facility level is mainly through the use of personal protective equipment and administrative controls. The common administrative control measures include limitation of the duration of exposure through the practice of a shift system, provision of standard operating procedures (SOPs), provision of opportunities for leave, and on-the-job training. Since most studies on hazards among health care workers focused mainly on biological hazards, this study aimed to assess chemical hazard exposure and the perceptions of healthcare workers. The study also assessed hazard control with a focus on administrative control and use of personal protective equipment (PPE). A tertiary health facility was chosen for this study because it offers a wide range of services that may not be available at lower levels of health care, thus providing a comprehensive view of chemical hazard exposure among healthcare workers. METHODS This study was conducted among health care workers at Obafemi Awolowo University Teaching Hospital, a tertiary health care facility in Ile-Ife, South-West Nigeria. Obafemi Awolowo University Teaching Hospital offers diagnostic, curative, preventive, and rehabilitative services for various illnesses. The institution's staff can be broadly classified into those providing direct health services and indirect services (administrative staff). Preliminary investigations revealed that there are chemicals that are common in all units of the hospital, while some chemicals are peculiar to certain departments, depending on the nature of services being rendered at the unit. Study design and study population The study was conducted using a descriptive cross-sectional design. The study population included only health care workers in the units that provide health care services directly to patients. The temporary staff of selected units, like healthcare workers under training, were excluded from the study. Sample size and sampling technique A sample size of 107 was calculated using the sample size formula for a single proportion.
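The paper does not report the assumed proportion, precision, or confidence level behind the figure of 107, so the sketch below is only an illustration of the single-proportion formula n = Z²p(1-p)/d², with placeholder inputs; it should not be read as the authors' actual calculation, which may also include a non-response adjustment.

```python
from math import ceil
from scipy.stats import norm

def single_proportion_sample_size(p: float, d: float, confidence: float = 0.95) -> int:
    """Cochran's sample-size formula for estimating a single proportion:
    n = Z^2 * p * (1 - p) / d^2."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# Placeholder example (not the paper's inputs):
# single_proportion_sample_size(p=0.5, d=0.10)  # -> 97 before any non-response adjustment
```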
The sample size was proportionally allocated to various units including the medical and nursing services, laboratory, morgue, theatre and environmental health unit. The cleaners were also involved in the study. The respondents in these subgroups were enrolled until the allocated numbers of respondents were achieved. The enrolment was spread across different shifts to ensure adequate representation of staff in various units that operate based on shift systems. Data collection Data were collected in January 2020. Data were collected using a self-administered questionnaire while intervieweradministered method was used for data collection among cleaners and other workers with lower levels of education. The questionnaire consisted of six sections: section A contained questions on the socio-demographic data while section B assessed the knowledge of workers about chemical hazards and PPE. Section C contained questions on the perception of healthcare workers about chemical hazards and protective measures based on the constructs of the health belief model. 9 This section was rated on a 5point Likert scale where 1 represented "strongly disagree" and 5 represented "strongly agree". Section D assessed the use of PPE while section E assessed common perceived symptoms while at work. Section F assessed the administrative control of hazard exposure. Data analysis Data were analysed using IBM statistical package for the social sciences (SPSS) version 25 for Windows. 10 Categorical variables like socio-demographic variables, level of knowledge of PPE, and exposure to chemical hazards were summarized using frequency and proportion. Associations between sociodemographic variables, knowledge, and perception of chemical hazards with the use of PPE were assessed using Chi-square. Those who use appropriate PPE every time while on duty were classified as using PPE regularly. Determinants of PPE use were further assessed using binary logistic regression. A p value of less than 0.05 was considered to be statistically significant. Ethical consideration Ethical approval was obtained from the research and ethics committee of the Institute of Public Health (IPH), Obafemi Awolowo University, Ile-Ife, Nigeria. Verbal consent was sought from each respondent after an adequate explanation of the objectives of the study. Confidentiality and data security were assured. Participation was made voluntary as each participant was at liberty to opt-out at any point in the study. RESULTS Most of the respondents were Christian (80.4%) while Islam accounted for 18.7% of the respondents. Both genders were almost equally represented, female (50.5%) and male (49.5%). Majority of the respondents have tertiary education 98.1% and were married 81.3%. More than half of the respondents (55.1%) were on shift system, while 29% of the respondents work between 8 am to 4 pm. Distributions of respondents across the units of the hospital and details of socio-demographic characteristics are as shown in Table 1. Table 2. Majority of the respondents used PPE regularly, 87 (81.3%) while others were inconsistent with PPE use. Symptoms mostly reported by the respondents were recurrent catarrh and cough; 44.0% and 32.7% respectively. Other reported symptoms were headache (26.2%), difficulty with breathing (23.4%), damage to the eyes (18.7%), and skin rash (25.2%). The least reported symptom was damage to any internal organs (7.5%). 
Most of the respondents (68.2%) were aware that the health facility had existing SOPs specific to their work areas, but more than half of them (51.4%) said the SOP documents were not available to them. Almost all the respondents (92.5%) were willing to participate in on-the-job training. About three-fifths of the respondents (60.7%) were aware of the existence of a reporting structure for hazard exposure. Most of the respondents (63.6%) were not aware of existing points for reporting hazard exposure. Details are shown in Table 4. DISCUSSION The majority of the respondents had a tertiary level of education. This could be due to the minimum level of education required for employment in most units of the hospitals. Similar patterns were observed in related studies. 5,11,12 More than half of the respondents practiced shift systems, while 3 out of 10 worked for a fixed period of eight hours per day. There is, thus, a limited period of exposure to chemical hazards. More than 8 out of 10 respondents were aware of and had good knowledge of the health risks associated with chemical hazard exposure. This could be due to the higher level of education of respondents and the orientation programs for workers. The finding was similar to what was observed in similar studies conducted among healthcare workers in Lagos. 12 Those studies were, however, not specific to chemical hazards but assessed awareness of occupational hazards generally. Poor knowledge of chemical hazards was, however, observed among health care workers in similar studies conducted in Southeast Nigeria and Turkey. 13,14 This could be due to variation in the level of education, as the highest level of education among the majority of respondents in the study conducted in the south-east of Nigeria was secondary. A high level of knowledge of chemical hazards was observed among darkroom technicians and assistants practicing in the south-east of Nigeria. 14 That study was, however, limited to chemicals used in X-ray processing. In this study, administrative controls and use of PPE were the commonly adopted preventive measures against chemical hazards from the hierarchy of hazard control. Eight out of ten respondents used PPE regularly, while others had poor use of PPE. The proportion that used PPE regularly was higher than findings from similar studies among other healthcare workers in Lagos and Niger states, where only about 4 out of 10 and 6 out of 10 participants, respectively, adopted safety practices at work. 12,15 Also, the use of PPE among healthcare workers at a tertiary healthcare institution was observed to be very low compared with the findings from this study. 16 This could be due to variation in the implementation of hazard control policies of the institution and the availability of PPE, because respondents in both studies demonstrated a high level of awareness of PPE. Gender was a significant determinant of consistent use of PPE, as females had higher odds of being consistent with PPE use compared with males. The finding was similar to the result from the study among healthcare workers in Lagos and Rivers states, where gender was a significant factor affecting the adoption of safety practices. 12,17 This was at variance with the findings from a similar study conducted in North-Western Nigeria, where there was no significant association between gender and the practice of safety measures. 18 That study, however, focused more on biological hazards. Perception of the effects of chemical hazards was also a significant determinant of PPE use among the respondents.
Those with good knowledge of chemical hazards were also more likely to use PPE, though this association was not significant. This was similar to findings from various studies that assessed the association between perception and knowledge of occupational hazards and the use of PPE among healthcare workers. [19][20][21] One study, however, showed no significant association between the level of knowledge of hazards and the intention to use PPE: the intention to use PPE was low among healthcare workers despite a high level of knowledge of the hazards associated with managing patients with tuberculosis. 22 About one-fifth of the respondents had experienced inadvertent exposure to chemical hazards. This was lower than the prevalence of inadvertent chemical hazard exposure among healthcare workers in Ondo state, where about 3 out of 10 were affected. 23 That study was, however, conducted at a secondary level of healthcare, where workers may not have access to the same protective measures available in tertiary health facilities. The low prevalence of inadvertent chemical hazard exposure reported in this study could be due to the high level of knowledge of chemical hazards and PPE use among respondents. It could also be due to administrative control measures such as accessibility of SOPs, the limited period of work for the majority of respondents, and the existence of monitoring and reporting structures for hazard exposure. The symptoms commonly perceived were constitutional symptoms such as recurrent catarrh, cough, and headache, which may not be occupationally related. A few respondents, however, reported skin rash or damage to the eyes or other organs; these could be due to inappropriate and inconsistent use of PPE among the affected respondents. There are a few limitations to this study. Involvement of other levels of care and of private hospitals would have been more appropriate, but this would have been logistically difficult. A tertiary health facility was selected because it provides a fair estimate of experience in healthcare settings, offering more comprehensive services, some of which may not be available at lower-level facilities. The assessment of inadvertent exposure is also prone to recall bias, although the period of assessment was limited to the three months before the study.
CONCLUSION
The majority of the respondents had good levels of awareness and knowledge of chemical hazards in the healthcare facility, and most used PPE consistently and appropriately. However, SOPs were available to only about half of the respondents, and although the majority used PPE regularly, about one-fifth still experienced inadvertent exposure to chemical hazards. There is, therefore, a need to strengthen compliance with existing safety measures such as the correct and consistent use of PPE. There is also a need for management to make the relevant SOPs available to all workers at service points to enable the delivery of services according to the guidelines. This will consequently reduce hazard exposure.
2021-09-01T15:09:26.137Z
2021-06-25T00:00:00.000
{ "year": 2021, "sha1": "0cb48cca6519cb90d02fe272a44604e241c7bf6f", "oa_license": null, "oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/8136/5079", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1316488b3b422388086a32da6782e8a6455ed2b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
258487871
pes2o/s2orc
v3-fos-license
Iron metabolism and ferroptosis in diabetic bone loss: from mechanism to therapy
Osteoporosis, one of the most serious and common complications of diabetes, has affected the quality of life of a large number of people in recent years. Although there are many studies on the mechanism of diabetic osteoporosis, the information is still limited and there is no consensus. Recent work indicates that osteoporosis induced by diabetes mellitus may be connected to abnormal iron metabolism and ferroptosis inside cells under high glucose conditions. However, no comprehensive review of this topic has been reported. Understanding these mechanisms has important implications for the prevention and treatment of diabetic osteoporosis. Therefore, this review elaborates on the changes in bone under high glucose conditions, the consequences of an elevated glucose microenvironment for the associated cells, the impact of high glucose conditions on the iron metabolism of these cells, and the signaling pathways that may contribute to diabetic bone loss in the presence of abnormal iron metabolism. Lastly, we also discuss the therapeutic targets of diabetic bone loss and the relevant medications, which provides some inspiration for its treatment.
Introduction
Glucose-lowering therapeutic measures mainly consist of metformin and sulfonylureas; however, their efficacy could still be improved. Meanwhile, a few therapies targeting diabetes mellitus have been found to increase the risk of fractures, such as thiazolidinediones (TZDs) and possibly sodium-glucose cotransporter-2 (SGLT2) inhibitors (2,7,8). Investigating the mechanisms underlying diabetic osteoporosis (DOP) can contribute to the development of new therapeutic strategies, even though researchers do not yet fully understand these mechanisms. Recent studies have demonstrated that the onset of Type 2 Diabetic Osteoporosis (T2DOP) may be correlated with the buildup of peroxides and reactive oxygen species (ROS) resulting from ferroptosis. It has also been shown that some signaling molecules and pathways, such as NRF2/HO-1/GPX4 and SLC7A11, can ameliorate these changes, providing possible novel therapeutic targets and research directions for T2DOP (9)(10)(11). Iron metabolism is the process by which iron is absorbed, transported, distributed, stored, utilized, transformed, and excreted in living organisms. The metabolism of iron is of great significance for cells. Iron can switch between its ferric (Fe3+) and ferrous (Fe2+) forms, allowing it to accept and donate electrons with relative ease (12). Therefore, iron metabolism is crucial to the regular functioning of several intracellular processes, and disturbance of iron homeostasis can increase the risk of many diseases. For example, iron deficiency is perceived as one of the most prevalent causes of anemia, while iron overload is recognized as one of the main culprits in heart disease, bone disease, and cognition-related diseases (13)(14)(15)(16)(17). Iron metabolism has therefore attracted substantial research interest, particularly regarding its relationship with bone metabolism and the underlying pathways that induce osteoporosis (18)(19)(20). Iron metabolism also affects bone homeostasis through ferroptosis, a form of iron-dependent cell death characterized by an accumulation of lipid peroxides and ROS (21).
Ferroptosis has been found to be associated with the pathophysiology of diverse ailments, including malignant tumors, ischemic diseases, neurodegenerative diseases, and metabolic disorders. Ferroptosis inducers can diminish the activity of glutathione peroxidase 4 (GPX4) via multiple routes, eventually resulting in a significant loss of antioxidant capacity and oxidative cell death. ROS buildup has a significant impact on the generation and survival of osteoblastic cells and their differentiation into osteocytes; thus, oxidative stress might be a major contributor to T2DOP. Wang et al. observed ferroptosis in the bone tissue of T2DOP rats, and treatment with ferroptosis inhibitors dramatically reduced oxidative stress and ameliorated the osteoporosis, although the underlying mechanisms are still far from fully understood (19). Furthermore, published findings indicate that some medications targeting iron metabolism, including melatonin, Qing'e pills (22), and artesunate (ART) (23), can relieve the symptoms of T2DOP to some extent, underscoring the need for deeper research into iron and bone metabolism. Therefore, this review systematically summarizes research progress on iron metabolism and diabetic bone loss, the underlying mechanisms, and clinical therapies, providing comprehensive evidence for further study.
Bone fragility in diabetes
According to the International Diabetes Federation, the alarming number of diabetics worldwide has surpassed 537 million as of 2021. The most striking features of diabetes are chronically higher-than-normal fasting and random blood glucose, resulting either from insulin deficiency [Type 1 diabetes mellitus (T1DM)] due to damage to pancreatic beta cells or from a progressive insulin secretion defect with insulin resistance [Type 2 diabetes mellitus (T2DM)] (24). Diabetic complications significantly increase the patient's risk of morbidity and mortality. Long-term diabetes is known to cause macrovascular and microvascular damage to the heart, brain, nerves, eyes, and kidneys, while significantly less attention has been given to the musculoskeletal system. The high glucose environment (25) produced in both forms of the disease can further affect bone metabolism, leading to bone loss or even osteoporosis (26). Osteoporosis is defined as bone mineral density (BMD) at the femoral neck that is 2.5 standard deviations (SD) or more below the mean for young female adults (T-score ≤ −2.5 SD) (27), as measured by dual-energy X-ray absorptiometry (DXA). Osteoporosis induced by diabetes mellitus, sometimes referred to as diabetic bone disease, is a chronic disease in which decreased bone density and damage to the bone microstructure increase bone fragility and fracture risk (28,29). Research has found that patients with diabetic bone disease are at higher risk of long-term bone pain, motor dysfunction and fractures (30). More than 35% of individuals with Type 2 diabetes display bone loss, with 20% meeting the diagnostic criteria for osteoporosis. Diabetic bone loss is characterized by altered bone density, altered bone turnover, deteriorated bone microarchitecture, and increased fracture risk. Multiple independent studies demonstrate that the BMD of diabetic individuals may be decreased, unchanged, or even increased.
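As a minimal illustration of the DXA-based definition quoted above, the sketch below converts a femoral-neck BMD measurement into a T-score and applies the WHO cut-offs. The young-adult reference mean and SD are placeholder values, since real densitometers use device- and population-specific reference databases.

```python
# Illustrative only: reference values below are placeholders, not clinical constants.
YOUNG_ADULT_MEAN_BMD = 0.942  # g/cm^2, assumed femoral-neck reference mean
YOUNG_ADULT_SD_BMD = 0.122    # g/cm^2, assumed reference standard deviation

def t_score(measured_bmd: float) -> float:
    """Number of SDs the measured BMD lies above/below the young-adult mean."""
    return (measured_bmd - YOUNG_ADULT_MEAN_BMD) / YOUNG_ADULT_SD_BMD

def classify(t: float) -> str:
    """WHO categories: T <= -2.5 osteoporosis, -2.5 < T < -1.0 low bone mass, else normal."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "low bone mass (osteopenia)"
    return "normal"

for bmd in (0.95, 0.78, 0.62):  # example measurements in g/cm^2
    t = t_score(bmd)
    print(f"BMD {bmd:.2f} g/cm^2 -> T-score {t:+.1f} -> {classify(t)}")
```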
The femur and vertebrae are the major sites of elevated BMD in patients with T2DM (31)(32)(33). Generally, excess energy intake and being overweight are the main causes of the rise in BMD in T2DM patients. Adaptive changes in the bone that enable the body to sustain a heavier load may also contribute to the increase in BMD (34, 35). Nonetheless, despite higher mean BMD and T-score values, there is increasing evidence that the T2DM-associated increase in fracture risk is related to decreased bone quality, which may be termed "diabetic osteopathy" (36-38). This apparently contradictory finding reflects changes in bone turnover, deteriorated bone microarchitecture, accumulation of advanced glycation end products (AGEs), muscular weakness, antidiabetic medication and other factors, all of which may increase the fracture risk of T2DM patients (39). Patients with T2DM usually display aberrant bone microstructure, particularly in the cancellous bone, with both a reduction in the number of trabeculae and morphological defects (40); they also have considerably decreased trabecular number and trabecular thickness in the femoral head compared with non-diabetic patients (41). Thinner cortical bone and higher porosity correlate directly with a decreased breaking load. Compared with the general population, individuals with T2DM showed a 3% decrease in radial cortical bone density and a 25% increase in cortical bone porosity (42), as well as, on HR-pQCT, a smaller cross-sectional area, greater cortical porosity, and lower cortical volumetric BMD in the tibia but not the radius (43).
Bone cell biology in high glucose condition
Resorption and creation of bone are the two essential components of bone remodeling, and an imbalance in bone remodeling is a major element in the development of osteoporosis. Bone remodeling, the coordinated activity of bone-resorbing osteoclasts and bone-forming osteoblasts, is required for continuous bone turnover and regeneration. Diabetes may affect all types of bone cells and promote adipose tissue formation in the bone marrow. In this section, we describe the four main cell types of the bone microenvironment separately in the context of high glucose (HG): mesenchymal stem cells, osteoblasts, osteoclasts, and osteocytes (Figure 1).
Mesenchymal stem cells in HG condition
Osteoblasts are derived from multipotent mesenchymal stem cells (MSCs), which can migrate to sites of injury, proliferate, and differentiate (44). MSCs can be isolated from peripheral blood and from nonhematopoietic tissues such as adipose tissue, trabecular bone, dermis, dental pulp, synovium and lung, although bone marrow is considered the primary source of these precursor cells (45). As the MSC population obtained from bone marrow, bone marrow-derived mesenchymal stem cells (BMSCs) play crucial roles in bone tissue regeneration. Different microenvironments, such as high glucose, inflammation, and hypoxia, can change the physiological functioning of stem cells (46). A recent study showed that osteoporosis is associated with an increase in circulating MSCs with low osteogenic potential, highlighting the importance of BMSCs for successful bone remodeling and/or repair in vitro (47). A number of studies have shown that the biological activities of BMSCs are modified by chronic exposure to a diabetic pathogenic environment (48,49).
In addition, the broadly expressed serine/threonine kinase glycogen synthase kinase-3 (GSK-3) has two highly homologous isoforms, GSK-3α and GSK-3β (50). GSK-3β inhibition can increase bone density (51). According to Zhang et al., in high glucose microenvironments GSK-3β activation and Wnt pathway suppression impede BMSC migration and proliferation; however, lithium chloride, a GSK-3β inhibitor, can restore BMSC functionality (46). Moreover, Yu's study demonstrated that GSK-3β is activated in diabetic osteoporosis and impairs the osteogenic capacity of BMSCs in a high glucose milieu by inhibiting the β-catenin/Tcf7/Ccn4 signaling axis, providing new perspectives on diabetic osteopathy (48). Furthermore, as a common denominator of numerous osteogenic signaling pathways, ROS levels must be tightly controlled for MSCs to undergo osteogenic differentiation (52). It has been reported that deferoxamine treatment in vitro abolished the anti-osteogenic effect of superparamagnetic iron oxide nanoparticles, indicating that free iron is important in preventing MSCs from differentiating into osteoblasts (53). Balogh et al. also showed that iron specifically prevents BMSCs from differentiating into osteoblasts without affecting adipogenic or chondrogenic differentiation (54). In summary, the high glucose condition affects mesenchymal stem cells and suppresses their differentiation.
Figure 1. Bone cell biology in high glucose condition.
Osteoblasts in HG condition
Osteoblasts, the bone-forming cells, arise through the sequential activity of transcription factors that commit mesenchymal precursors to osteoprogenitor lineages, and they eventually differentiate into osteocytes. Osteoblasts produce extracellular proteins such as osteocalcin, alkaline phosphatase, and type I collagen, the latter of which accounts for more than 90% of bone matrix protein. The extracellular matrix is initially secreted as unmineralized osteoid and becomes gradually mineralized as calcium phosphate is deposited as hydroxyapatite (55). It has been demonstrated that the high glucose conditions of T2DM severely impair the biological functions of osteoblasts, increasing the density of the mitochondrial bilayer and decreasing the number of mitochondrial cristae, and leading to the accumulation of ROS and lipid peroxides; the resulting excessive oxidative stress and lipid peroxidation accelerate apoptosis and autophagy of osteoblasts. It has been reported that excessive glucose inhibits the proliferation and differentiation of osteoblasts in alveolar bone through the caspase-1/GSDMD/IL-1 pathway, an effect reversed by caspase-1 inhibitors in vivo and in vitro (56). The HG condition can also affect osteoblasts by modulating iron metabolism. Iron overload has been shown to reduce MC3T3 cell viability and cause apoptosis; an excess of iron may partially suppress osteoblast activity and disturb osteoblast differentiation and mineralization (57). Moreover, the pathogenesis of T2DOP is significantly influenced by the osteogenic activity of osteoblasts, which is impaired by the iron overload resulting from increased DMT1 expression in osteoblasts (58,59).
In summary, the HG condition not only suppresses osteoblast differentiation but also strongly impairs osteogenic function.
Osteoclasts in HG condition
Osteoclasts are terminally differentiated multinucleated cells of the monocyte/macrophage lineage with the unique function of resorbing bone matrix (60). Osteoclasts break down bone by secreting acids and proteolytic enzymes such as cathepsin K (CTSK), which degrade matrix components such as collagen during bone resorption (61, 62). It was long known that monocytes could only differentiate into osteoclasts in vitro when co-cultured with stromal cells and osteoblasts (63). Because the bone-resorbing and bone-building activities of osteoclasts and osteoblasts are tightly coupled, adult bone mass is generally stable. However, in many disease states such as osteoporosis, metastatic bone cancer, and inflammatory arthritis, this delicate balance is disturbed by an increase in osteoclast bone resorption activity (60). Various studies have shown that the high glucose condition promotes osteoclast differentiation and strengthens bone resorption (64,65). Clinical studies have shown that osteoclastogenesis is frequently accelerated in diabetes mellitus: elevated blood levels of tartrate-resistant acid phosphatase, a marker of increased osteoclast activity, have been found in patients with T2DM (66, 67). Animal studies further support higher osteoclast activity in diabetes (68,69): compared with normoglycemic controls, osteoclastic bone resorption was increased in T2DM rats (70). TNF-α, macrophage colony-stimulating factor, receptor activator of nuclear factor kappa-B ligand (RANKL), and vascular endothelial growth factor-A were all elevated in diabetic mice, promoting osteoclast differentiation and activation (71)(72)(73). Furthermore, osteoclasts can also be significantly influenced by the iron overload induced by DM. ROS arising from iron overload can activate the MAPK pathway, enhancing osteoclast differentiation and bone resorption capacity (20). There is also evidence that ferritin autophagy occurs when cells are iron-deficient, making them more susceptible to ferroptosis caused by intracellular Fe2+ (74, 75). Additionally, mature osteoclasts require a greater amount of cytoplasmic free iron than other bone cells; hence, osteoclasts are more susceptible to ferroptosis (76,77). In summary, the HG condition can influence osteoclast activity, which may result in aberrant bone metabolism and osteoporosis.
Osteocytes in HG condition
Osteocytes are terminally differentiated osteoblasts that undergo substantial morphological changes as they become embedded in the mineralized bone matrix. They play a key role in bone homeostasis, with a main function of communicating with the surrounding environment (78,79): (a) numerous dendritic processes protrude from the osteocyte soma in all directions and enter the canaliculi, tiny passageways through which osteocytes connect with other osteocytes and with cells in the bone marrow or periosteum; and (b) osteocytes within the lacunar-canalicular structure are in contact with interstitial fluid, which enables these cells to function properly.
Consequently, the osteocyte lacunar-canalicular network provides a vast system that detects changes in bone loading and regulates bone remodeling in the healthy skeleton, in collaboration with the activities of other bone cells (osteoblasts and osteoclasts) (80). Osteocytes may release various signaling substances in response to loading or unloading stimuli via the SOST/DKK/Wnt or the RANKL/osteoprotegerin (OPG) axis. They may either promote bone resorption by producing RANKL and decreasing OPG, or decrease bone resorption by reversing the RANKL/OPG ratio. With respect to bone formation, osteocytes are also the main producers of Dkk1 (an inhibitor of Lrp5/6 Wnt signaling) and sclerostin (the product of the SOST gene) (78,81,82). Interestingly, patients with T1DM and T2DM have higher serum levels of sclerostin (83,84); because sclerostin is largely produced by osteocytes, this suggests that variations in glucose concentration may affect the cells most crucial for maintaining bone health. Moreover, blood glucose levels significantly above or below the normal range of 80-140 mg/dl may have detrimental effects on osteocytes (85). Another study showed that diabetes causes osteocytes to change over time and to upregulate the sclerostin gene, a response that might be mediated by local glucose concentrations and could contribute substantially to the deterioration of bone quality (85)(86)(87). Furthermore, inhibiting the ferroptosis pathway in diabetic mice was reported to prevent DOP and osteocyte death (10). Traditional cell death inhibitors such as Z-VAD-FMK and Nec-1 failed to rescue osteocytes from death induced by high glucose and high fat (HGHF) conditions. The authors concluded that excessive lipid peroxidation may be the primary source of cell damage in the diabetic milieu and that ferroptosis may be central to the molecular process underlying osteocyte death. Altogether, high glucose can induce long-lasting changes in osteocytes by upregulating sclerostin expression and inducing ferroptosis, ultimately unbalancing bone metabolism (10). In summary, the high blood glucose caused by T2DM alters the dynamic equilibrium between bone formation and bone resorption in a normal organism, resulting in a variety of complications such as T2DOP.
Iron-related proteins and bone formation in HG
Studies have confirmed that proteins involved in iron metabolism are closely connected to bone metabolism; these are summarized below, and Table 1 summarizes iron-related proteins and bone metabolism in the high glucose condition.
Ferritin
Zarjou et al. found that the ferroxidase activity of ferritin was responsible for the suppression of osteoblast activities (75). By observing the effects of ceruloplasmin (a protein with ferroxidase activity but no iron sequestration ability) and examining the expression of osteoblast-specific genes, they found that ferritin ferroxidase activity inhibits the production and subsequent activity of alkaline phosphatase (ALP). Thus, ferritin ferroxidase activity not only inhibits the osteoblast-specific product osteocalcin, affecting calcification, but also downregulates osteoblast-specific genes such as core binding factor α-1, alkaline phosphatase and osteocalcin (75). Additionally, it has been demonstrated that mitochondrial ferritin (FtMt) reduces oxidative stress and maintains intracellular iron homeostasis (25).
When FtMt is overexpressed, it lessens the ferroptosis that occurs in osteoblasts under HG conditions, whereas FtMt silencing stimulates mitochondrial autophagy via the ROS/PINK1/Parkin pathway, leading to an increase in osteoblast ferroptosis (74). In T2DOP, FtMt was shown to prevent osteoblast ferroptosis by decreasing the oxidative stress produced by excess ferrous ions, while FtMt deficiency increased mitophagy during T2DOP pathogenesis (74, 89).
HEP
Hepcidin (HEP), produced and secreted by liver cells, regulates iron homeostasis. It binds the transmembrane protein ferroportin (FPN), thereby preventing cellular iron from entering the bloodstream (11,123). FPN is the only known iron export protein in vertebrates (90)(91)(92). If HEP-mediated regulation of FPN is inadequate or inefficient, the organism may experience iron overload and iron deposition in the skeleton, driving ROS production, mitochondrial biogenesis and peroxisome proliferator-activated receptor gamma coactivator-1beta (PGC-1β) expression in osteoclasts, and ultimately resulting in osteoporosis (93). In addition, one study concluded that the BMP/SMAD signaling pathway can regulate HEP expression (94). Xu et al. found that HEP increased osteoblast intracellular Ca2+ in a dose-dependent manner and that this process is facilitated by voltage-dependent L-type calcium channels, indicating a non-negligible effect of HEP on bone metabolism (95).
Tfr2
In mammalian cells, there are two distinct transferrin receptors (Tfrs) (96). Transferrin receptor 1 (Tfr1) is predominantly expressed and binds Fe3+-loaded holo-transferrin (holo-Tf) with high affinity. Under physiological circumstances, plasma iron circulates bound to the transport protein transferrin and is taken up by Tfr1-mediated endocytosis. Tfr1 is regulated post-transcriptionally by intracellular iron status through the iron-regulatory protein system (97), resulting in elevated Tfr1 under low iron conditions and diminished Tfr1 under high iron conditions (98). Bhaba et al. reported that the absence of Tfr1 in osteoclast lineage cells resulted in a >50% drop in total intracellular iron concentration (99). However, Tfr1 deficiency had no impact on iron levels in monocytes and pre-osteoclasts. Mature osteoclasts were found to acquire extracellular iron mostly via Tf and heme (99). This study found that Tfr1-mediated iron uptake is a key iron acquisition route in osteoclast lineage cells and significantly regulates trabecular bone remodeling in the appendicular and axial skeleton in both female and male mouse models (99). The cytoplasmic iron supplied by Tfr1 proved especially essential for mitochondrial energy metabolism and cytoskeletal organization in osteoclasts, but had only a slight impact on osteoclast differentiation (99). Transferrin receptor 2 (Tfr2) is another crucial regulator of hepcidin, proposed to control iron homeostasis. Tfr2 is known for controlling systemic iron levels, but it also promotes healthy erythropoiesis (100-103). Research from Martina Rauner's team recently identified a novel extrahepatic function of Tfr2: direct control of bone mass through osteoblasts (104).
They reported that Tfr2, which is predominantly located in osteoblasts, governs bone formation but has little effect on systemic iron homeostasis. Furthermore, Tfr2 can activate p38 MAPK signaling in osteoblasts, which induces the Wnt inhibitor sclerostin and limits bone formation; hence, Tfr2 functions as a unique regulator of bone mass by modifying the BMP-p38 MAPK-Wnt signaling axis (104).
IRP2
Iron-regulatory protein 2 (IRP2) has also been linked to bone metabolism (106). In mouse bone tissue, deletion of IRP2 altered the expression of genes encoding iron transport proteins (FLT, FPN1, and TFR1) and produced a phenotype characterized by scant trabecular bone, reduced iron concentration, and downregulated expression of bone formation markers (107). Therefore, a lack of IRP2 may impair iron transport, resulting in iron deficiency and disturbed bone metabolism. However, additional research will be required to substantiate this conclusion because the underlying process remains elusive.
METTL3
Methyltransferase-like 3 (METTL3), one of the m6A writers, has been shown to play a role in the pathophysiology and progression of bone-related disorders including osteoporosis, arthritis, and osteosarcoma (108). Nonetheless, there is controversy regarding the link between osteoporosis and METTL3 expression. For instance, one study found that overexpression of METTL3 in bone marrow monocytes protected mice against osteoporosis induced by estrogen deprivation, while disruption of METTL3 in mice impaired bone formation, decreased osteogenic differentiation, and increased marrow adiposity (109). Another study demonstrated a negative regulatory role of METTL3 in osteogenesis through activation of the NF-κB pathway, a significant inhibitor of osteogenic differentiation; METTL3 was found to induce the expression of MYD88, an upstream regulator of the NF-κB pathway, by controlling the m6A methylation status of MYD88 RNA (110). Furthermore, METTL3 may be involved in high glucose and palmitic acid (HGPA)-induced osteoporosis via activation of the ASK1/p38 signaling pathway: METTL3 knockdown prevented HGPA-induced activation of ASK1/p38 signaling (111). The marked repression of the ferroptosis-inhibitory proteins GPX4 and SLC7A11 further indicated that activation of the ASK1/p38 pathway was responsible for the induction of ferroptosis (111).
DMT1
Divalent metal transporter 1 (DMT1) is a 12-transmembrane-domain protein present in various tissues, such as bone, kidney, and duodenum, and transports a wide range of divalent cations (112). It is the main apical transporter responsible for the absorption of intestinal Fe2+ and is also widely expressed in endosomal compartments, where it exports Fe2+ during the transferrin cycle (112,113). As a result, iron overload and DMT1 expression are closely connected. In addition to its role in iron and manganese metabolism, DMT1 participates in the absorption of other metals, including the transfer of Cu2+ and Cd2+ (114,115). Further studies showed that overexpression of DMT1 can lead to iron overload in osteoblasts, thus suppressing their osteogenic function. Liu et al. discovered that human hFOB1.19 osteoblasts treated with ferric ammonium citrate (FAC) expressed more DMT1 than untreated cells (58). Zhang et al.
found that FAC-induced autophagosome accumulation was reduced in DMT1-shRNA hFOB1.19 cells, suggesting that DMT1 controls Fe2+ levels in osteoblasts and thereby influences the cellular accumulation of autophagosomes (59). In summary, DMT1 expression can be enhanced in bone tissue under type 2 diabetic conditions; DMT1 then induces iron overload in osteoblasts and ultimately impairs their osteogenic function.
HO-1
Heme oxygenase-1 (HO-1) is an inducible cellular regulator of oxidative stress that oxidizes heme to produce biliverdin, carbon monoxide, and free ferrous iron (116). The role of HO-1 in ferroptosis is still disputed. Numerous studies have shown that elevated HO-1 expression protects cells from oxidative stress and prevents ferroptosis (37,117,118). For instance, Adedoyin et al. found that HO-1−/− renal proximal tubule cells showed greater erastin-induced cell death than HO-1+/+ cells (119). Other researchers, however, found that excessive HO-1 caused organ failure and exacerbated ferroptosis (89,120). According to Fang et al., inhibiting HO-1 expression reduced ferroptosis in cardiomyopathy models in vivo and in vitro (89). Tang et al. noted that blocking HO-1 activity should be a reliable way to prevent ferroptosis in the retinal pigment epithelium (120). HO-1 therefore appears to be a double-edged sword that functions differently in distinct tissues and disease models. HO-1 also plays important roles in bone metabolism. Using a DOP mouse model, Yang's team confirmed that the DOP group had far more lipid peroxidation in vivo, indicating that the high-glucose microenvironment can induce osteocyte ferroptosis. They went on to demonstrate the concrete mechanism by which the high-glucose microenvironment induces intracellular iron overload: in the diabetic microenvironment, HO-1 transcription is activated upstream by the NRF2/c-JUN heterodimer, and HO-1-catalyzed heme oxidation produces a significant amount of free labile iron (10). Ma's findings also support the theory that HO-1 might mediate HGHF-induced osteocyte ferroptosis (9). HO-1 activation and ferroptosis are mutually causal and can form a self-reinforcing loop (10, 121).
GSH
Ferroptosis can also be induced by the depletion of glutathione (GSH) and the reduction of GPX4 activity (121). GSH is a protective substance in cells and the main substrate of GPX4; it combines with lipid peroxides to reduce ROS and thus plays an important antioxidant role. GPX4 is the principal regulator of the body's lipid antioxidant system. To protect biomembrane systems against ferroptotic damage, GSH is employed as a cofactor to convert peroxides (R-OOH) into alcohols (R-OH) and decrease the toxicity of lipid peroxides. However, decreased GSH levels impair GPX4 activity, a condition required for ferroptosis to occur. GSH is supplied by several synthesis routes involving, for example, glutathione synthetase (GSS) and nicotinamide adenine dinucleotide phosphate (121). The cystine-glutamate antiporter known as the XC-system is formed by a disulfide bond connecting the heavy chain SLC3A2 and the light chain SLC7A11. It mediates the 1:1 exchange of glutamate and cystine across the cell membrane.
The extracellular glutamate concentration influences the transport rate of the XC-system: an elevated glutamate concentration inhibits cystine uptake and GSH production, altering GPX4 activity and promoting ferroptosis (11,122). This XC-system/GSH/GPX4 axis is one of the main pathways through which HG induces ferroptosis. According to Zhao et al., ATF3-mediated suppression of system XC activity induces osteoblast ferroptosis under high glucose conditions, and these events contribute to the pathogenesis of T2DOP (19). They found that ATF3 was upregulated by HG in vivo and in vitro, which reduced the expression of SLC7A11 and the amounts of intracellular GSH and extracellular glutamate (19). ATF3 inhibition, in turn, boosted GPX4 levels and decreased the buildup of ROS and lipid peroxides, reducing osteoblast ferroptosis and enhancing osteogenic activity. According to Ma et al., osteoblasts from osteoporotic individuals with T2DM accumulate large amounts of ferroptosis-associated lipid peroxides, which correlate with downregulated expression of GPX4 and SLC7A11 in osteoblast mitochondria and the XC-system (9). In summary, the high glucose condition unbalances iron metabolism (ferroptosis and iron overload) via numerous pathways such as Nrf2/HO-1, METTL3, and the XC-system/GSH/GPX4 axis. Some proteins, such as METTL3 and DMT1, also contribute substantially to the regulation of iron metabolism. These findings underscore the need to explore the association between iron and bone metabolism and its underlying pathways in greater depth. Figure 2 shows the association between iron overload and osteoporosis in osteoblasts and osteoclasts.
Iron-related signaling pathways and bone formation
Activating the NRF2/HO-1 pathway considerably lowers ferritin levels while reducing oxidative stress, thereby preventing ferroptosis and promoting bone formation (124). The nuclear factor erythroid 2-related factor 2 (Nrf2) signaling pathway is directly downstream of ROS and controls the transcription of antioxidant response element-dependent genes to sustain cellular redox homeostasis and regulate oxidative mediators (125). Recent studies demonstrated that melatonin activated the Nrf2/HO-1 pathway and increased levels of the antioxidant enzymes HO-1 and NAD(P)H dehydrogenase [quinone] 1 to prevent kidney damage caused by diabetes and to exert neuroprotective effects (126,127). Additionally, Nrf2 has been noted to protect cancer cells from ferroptosis induced by erastin or RSL3 (128).
NRF2/HO-1/GPX4
Researchers have found that the NRF2/HO-1/GPX4 pathway affects osteoblasts. Ma et al. reported that activation of the NRF2/HO-1 pathway considerably lowers ferritin levels while reducing oxidative stress. NRF2 initiates the cellular anti-peroxidation defense by activating the downstream enzymes glutathione peroxidase and superoxide dismutase (SOD); it also eliminates hazardous species such as ROS, further reducing their toxic effects on osteoblasts (9,129). Furthermore, during ferroptosis the Nrf2/HO-1 antioxidant system can be suppressed: in the absence of Nrf2, GPX4 activity and expression are reduced and ferroptosis is aggravated, indicating that both the Nrf2/HO-1 antioxidant system and ferroptosis may be regulated under inflammatory conditions (130). Additionally, the Nrf2/GPX4 pathway has been found to play an important role in age-related osteoporosis. Using 18 female wild-type and 16 Nrf2-knockout (KO) mice as experimental subjects, Kubo et al.
found that old Nrf2-KO mice showed reduced bone mass, implying that chronic Nrf2 deficiency contributes substantially to the progression of osteoporosis in aging females (131). Yang et al. determined that 1,25(OH)2D3 can delay age-related osteoporosis by activating Nrf2 antioxidant signaling and inhibiting oxidative stress, further supporting the impact of the Nrf2 pathway on age-related osteoporosis (132,133). Moreover, by evaluating the effect of 1,25(OH)2D3 on the Nrf2/GPX4 signaling pathway in MC3T3-E1 cells, other researchers concluded that VDR activation inhibits osteoblast ferroptosis through this pathway, indicating a broad and profound link between ferroptosis and osteoblasts (134).
NF-κB signaling pathways
Nuclear factor κB (NF-κB) limits osteogenic development by inducing inflammatory molecules, suppressing Wnt signaling, and stimulating Smad and MAPK signaling in osteoblasts; these NF-κB-driven changes ultimately activate ferroptosis (135,136). Through its control over the production of a network of inducers and effectors that characterize responses to pathogens, NF-κB plays a crucial part in the cellular stress response as well as in inflammation (137). Inflammatory cytokines released by host defense mechanisms in reaction to inflammation activate the NF-κB pathway (138). Postnatal bone development requires BMPs, which also promote the expression of the matrix proteins osteocalcin and bone sialoprotein; osteopenia, bone fragility, and spontaneous fracture result from a decrease in BMP activity (139,140). The Wnt signaling system also promotes bone growth: when Wnt signaling is activated, nuclear β-catenin expression rises, which in turn increases the expression of osteocalcin and bone sialoprotein. Inflammation inhibits Wnt signaling by increasing the expression of Wnt antagonists such as Dkk1 or sclerostin (82). NF-κB positively regulates transcription under practically all conditions. However, recent research has shown that NF-κB can also interfere with gene transcription: chemokine expression was suppressed when noncanonical NF-κB subunits bound κB sites (141,142), and activation of noncanonical NF-κB directly suppresses interferon-β expression at the promoter level (143). Thus, RelB-p52 heterodimers formed upon activation of the noncanonical pathway can give NF-κB a repressive effect. In Tarapore's study, NF-κB was found to be crucial for the inflammation-induced decrease in matrix protein production, which ultimately impairs bone formation. Activation of NF-κB inhibits both Wnt- and BMP-stimulated production of matrix proteins. This suppression involved inhibition of β-catenin and Runx2, with NF-κB binding to neighboring consensus sites and interacting directly with response elements in the promoter regions of bone matrix protein genes (144). Furthermore, other studies also found significant impacts of NF-κB on bone formation, confirming that it stimulates inflammatory factors and activates Smad and MAPK signaling in osteoblasts to prevent osteogenic differentiation (107,144,145).
PI3K/AKT/FOXO3a/DUSP14
Iron overload significantly suppresses osteoblast proliferation and induces apoptosis through the PI3K/AKT/FOXO3a/DUSP14 axis, thus inhibiting bone formation under HG. The PI3K/AKT signaling pathway contributes to signal transmission connected to cell proliferation, differentiation, invasion, and apoptosis (146). Specifically, researchers have reported that the proliferation and development of rat osteoblasts require activation of the PI3K/AKT signaling pathway (147). The FOXO3a gene belongs to the FOXO subfamily; its transcriptional activity is suppressed by pAKT, which regulates FOXO3a phosphorylation. Members of the DUSP family are also intimately connected to cellular proliferation; according to a prior study, DUSP4 promotes the growth and invasion of colorectal cancer cells. Xia et al. discovered that iron overload greatly reduced osteoblast growth and promoted apoptosis through the PI3K/AKT/FOXO3a/DUSP14 axis (148). Noting that the impact of iron overload on osteoblasts was greatly reduced by overexpressing DUSP14, the team demonstrated that iron overload may endanger osteoblast proliferation by inhibiting DUSP14 expression. Additionally, iron overload enhanced p-AKT and p-FOXO3a levels in osteoblasts. FOXO3a can bind directly to the DUSP14 promoter, and DUSP14 may therefore represent a distinct element of the PI3K/AKT/FOXO3a pathway (149). In summary, the PI3K/AKT/FOXO3a/DUSP14 signaling pathway potentially mediates the cellular response to iron overload stress.
RIPK1/RIPK3/MLKL
In iron overload-induced osteoblast death, ROS can promote the phosphorylation of RIPK1 and RIPK3 and create a vicious positive-feedback circle involving RIPK1/RIPK3/MLKL. Ample evidence suggests that oxidative stress induced by iron overload is a primary factor in the pathophysiology of osteoporosis (150)(151)(152). Iron toxicity also appears to be intimately linked to cell death in iron overload disorders (153). Apoptosis and necrosis have historically been considered the two primary fundamental processes of cell death (154).
Figure 2. Iron overload and osteoporosis in osteoblast and osteoclast.
As established above, ROS are crucial for the apoptosis induced by iron overload in osteoblasts. Nevertheless, Tian's research revealed that necrosis may also be strongly involved in iron overload-induced osteoblast death (155). Similar observations were made in earlier research, which indicated that necrosis may be the principal mechanism of cell death for osteoblastic cells in iron overload-associated bone disorders (156). The precise mechanisms through which iron overload induces osteoblastic cell necrosis remain incompletely understood. Necroptosis is a form of programmed necrosis that shows the morphological features of necrosis and depends heavily on the regulation of RIPK1, RIPK3, and MLKL. Phosphorylated MLKL eventually oligomerizes, translocates to and permeabilizes the plasma membrane, and triggers necroptotic cell death (157,158). Tian's team first demonstrated the crucial role of ROS in iron overload-induced necroptosis and found that ROS generated by iron overload encourage necroptosis by creating a positive feedback loop involving RIPK1/RIPK3.
The results of Tian's study showed a dose-dependent rise in RIPK1 and RIPK3 phosphorylation, as well as in total protein expression, in osteoblastic cells following exposure to FAC. Nonetheless, following FAC treatment, the osteoblasts' protein expression of MLKL showed no appreciable change. The addition of Nec-1, GSK872, or NSA inhibited iron overload-induced necrotic cell death in osteoblasts. Their findings illustrate that iron overload induces necroptosis in osteoblastic cells, at least partially through the RIPK1/RIPK3/MLKL pathway, and ultimately inhibits bone formation (155).
Iron-related signaling pathways and bone resorption
Osteoclasts are large multinucleated cells that differentiate from bone marrow monocytes of the hematopoietic lineage (64). Two essential cytokines, macrophage colony-stimulating factor (M-CSF) and receptor activator of nuclear factor-κB ligand (RANKL), drive the development of monocytes into osteoclasts. M-CSF regulates the proliferation of osteoclast precursors and their differentiation into preosteoclasts, whereas RANKL controls the differentiation of preosteoclasts into mature osteoclasts and their activity (159). Furthermore, cytokines such as tumor necrosis factor (TNF) and interleukins (IL) (160) can regulate osteoclast formation (161). RANKL has also been linked to the recruitment of non-receptor tyrosine kinases and the tumor necrosis factor-associated receptor (TNFR) (162). c-Src activates signaling pathways involved in osteoclast differentiation and maturation, such as the NF-κB (163) and MAPK pathways, while TNFR activates the Akt pathway, which in turn induces the expression of nuclear factor of activated T cells (NFATc). NFATc is the core transcription factor of osteoclasts, ultimately mediating osteoclast differentiation, fusion and the degradation of inorganic and organic bone matrix (164). The common signaling pathways in osteoclasts include OPG/RANKL/RANK, NF-κB, c-Src-PI3K-AKT, MAPK, and CN-NFAT, all of which are crucial for controlling osteoclast development (165). However, the latest research indicates that the NF-κB and MAPK signaling pathways are mainly responsible for T2DOP in the setting of HG-induced ferroptosis (20).
NF-κB signaling pathway
The innate immune system's NOD-, LRR- and pyrin domain-containing protein 3 (NLRP3) inflammasome recognizes pathogens such as viruses and bacteria and activates inflammatory factors to mediate inflammation. In osteoclasts, however, NLRP3 has been found to play a critical role in promoting osteoclast maturation and increasing bone resorption (166). A recent study showed that mice with osteoclast-specific NLRP3 expression did not develop systemic inflammation; the number of osteoclasts stayed the same, but bone mass decreased by around 50% (165). The NLRP3 inflammasome performs a variety of tasks in both young and old individuals. Bone loss in old mice lacking NLRP3 is increased through bone resorption rather than bone formation. Similarly, MCC950 inhibited osteoclast development by reducing caspase-1 activation, although this was not observed in young mice. Moreover, the transcription factor NF-κB can promote the expression of molecules, including NLRP3 itself, that control inflammasome assembly (163).
It has been demonstrated that ROS generated in the high glucose state lead to the phosphorylation of MAPK-related proteins, which in turn activates the MAPK pathway and subsequently the NF-κB pathway. This increases NLRP3 expression, which in turn promotes osteoclast maturation and increases osteoclastic bone resorption (163).
ERK/JNK/p38 pathway
The MAPK signaling system comprises a three-tiered kinase cascade: MAPK, MAPK kinase (MEK or MKK), and MAPK kinase kinase (MEKK or MKKK). Together, these kinases, activated in sequence, regulate a range of important physiological and pathological responses, including cellular development, differentiation, stress, and inflammatory responses (167). ERK, JNK, p38/MAPK, and ERK5 are the four primary branches of the MAPK pathway. JNK and p38 have comparable roles in inflammation, apoptosis, and cell growth, while the ERK pathway primarily controls cell growth and differentiation, with Ras/Raf proteins serving as its upstream signal. The kinases in each branch are distinct and can be used as biomarkers of pathway activity. As downstream branches of the MAPK pathway, the ERK/JNK/p38 pathways also might contribute to osteoporosis. Related studies have shown that the ERK/JNK/p38 pathway plays an important role in promoting the differentiation of preosteoclasts, promoting the survival of osteoclasts and inhibiting osteoclast apoptosis (164). In contrast, iron deficiency under hyperglycemia increases ROS, which in turn increases RANKL expression, thus promoting the ERK/JNK/p38 pathway and greater differentiation of pre-osteoclasts (164). This increases the bone resorption effect of osteoclasts, disrupting the homeostasis between bone resorption and bone formation, which in turn leads to osteoporosis.
7. Therapeutic targets and drugs targeting iron metabolism for DOP
7.1. Preclinical monitoring: evaluating diabetes-specific risk factors for osteoporosis
In addition to the age-related risk factors and other established fracture causes, a comprehensive investigation of risk variables is required for the clinical examination of bone fragility in diabetes patients. Bone fragility is a distinct risk factor for fractures in both T1DM (168) and T2DM (169) and is substantially linked to disease duration. Individuals with T1DM fracture more frequently and experience bone loss even at a young age (131,170). Because osteoporosis is a frequent complication of T1DM, physicians generally recommend DXA testing and laboratory checks to identify additional risk factors such as hypogonadism. Hofbauer et al. advised testing blood 25-hydroxyvitamin D (25[OH]D) in diabetics who are institutionalized (i.e., living in a care facility such as a nursing home) or at risk of falls and fractures, in order to identify a readily treatable cause of falls and fractures. An initial bone assessment to determine fracture risk should also be strongly considered. Poor glycaemic control is strongly associated with increased bone fragility, with an HbA1c threshold of more than 9% (75 mmol/mol) in individuals with T2DM and more than 7.9% (63 mmol/mol) in individuals with T1DM (171).
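Because the thresholds above are quoted in both NGSP (%) and IFCC (mmol/mol) units, the short sketch below applies the standard master equation for converting between the two; it is included only to make the dual-unit values easier to compare.

```python
# NGSP (%) <-> IFCC (mmol/mol) HbA1c conversion via the standard master equation.
def percent_to_mmol_per_mol(ngsp_percent: float) -> float:
    return (ngsp_percent - 2.15) * 10.929

def mmol_per_mol_to_percent(ifcc_mmol_mol: float) -> float:
    return ifcc_mmol_mol / 10.929 + 2.15

print(round(percent_to_mmol_per_mol(9.0)))    # ~75 mmol/mol, the T2DM threshold above
print(round(mmol_per_mol_to_percent(63), 1))  # ~7.9 %, corresponding to 63 mmol/mol
```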
Moreover, routine assessments should be made of hypoglycemic episodes, which can result in cardiovascular events, falls, and fractures in both type 1 (172) and type 2 (173) diabetes. Consequently, stringent glycemic control is advised in individuals who are younger and earlier in the course of the disease. In patients with long-standing disease, diabetic comorbidities, and a history of falls, the skeletal benefits of strict glycaemic control must be weighed against the elevated risk of falls and cardiovascular events brought on by hypoglycemia. Currently, sulfonylureas and thiazolidinediones are used cautiously in patients at risk of fractures, whereas metformin, glucagon-like peptide-1 (GLP-1) receptor agonists, SGLT2 inhibitors, and DPP-4 inhibitors exhibit a safe bone profile in type 2 diabetes (174,175). Moreover, metformin was found to lower the incidence of fractures in T2DM patients and to increase bone mass and bone quality in ovariectomized (OVX) rats; the underlying mechanism involved decreased RANKL expression and osteoclast inhibition (176,177). Another study demonstrated that metformin limits the ability of bone marrow stromal stem cells to produce succinate and lessens the stimulatory effect of succinate on osteoclast development and bone resorption (178). A recent study reported that metformin use did not increase BMD (179), although a similar osteoprotective effect was also seen in non-diabetic OVX animals (180). Further studies are needed to resolve these inconsistent clinical findings.
General interventions and classic anti-osteoporosis drugs
There is a consensus that all diabetic patients should take vitamin D supplements unless their serum 25(OH)D concentration is at least 20 ng/ml. In obese patients, calorie restriction to lower body weight is frequently used to halt the onset of diabetes mellitus; nevertheless, weight loss is considered to be linked to decreased bone mass. Thus, it is strongly advised that people with T2DM and obesity control their weight through carefully supervised exercise (181), which can strengthen bone and help patients with diabetes mellitus prevent bone loss. According to a meta-analysis, people with T2DM who follow a Mediterranean diet rich in fresh fruits, vegetables, and fish have a lower incidence of fractures and microvascular sequelae (182). Unhealthy eating habits high in sugar and fat should be abandoned, and harmful lifestyle choices such as excessive alcohol consumption and smoking should be avoided. Post-hoc analyses of clinical trials indicate that alendronate (183) and teriparatide (184) retain therapeutic effects in diabetes mellitus. According to Langdahl's study, teriparatide lowered fracture risk in diabetic patients to a similar extent as in the general patient population (185). Alendronate has been shown to reduce fasting glucose and insulin resistance in postmenopausal osteoporosis patients with preclinical diabetes mellitus (186). Dagdelen et al. found that alendronate increased forearm BMD less in postmenopausal osteoporosis patients with diabetes mellitus than in those without, but its effect on BMD in the hip and vertebrae did not differ appreciably between the two groups (187). Some studies indicate that the mechanism of postmenopausal osteoporosis is also related to iron metabolism; for example, dietary iron may protect the postmenopausal spine against bone loss (188). Ni et al.
proposed, and sought to verify, that an alternative method of treating postmenopausal osteoporosis might be to induce ferroptosis in osteoclasts by inhibiting hypoxia-inducible factor 1 (HIF-1) and ferritin (189). Romosozumab, a recently developed anti-osteoporosis medication and the first sclerostin inhibitor licensed by the U.S. FDA, has demonstrated remarkable effectiveness in treating postmenopausal osteoporosis (190). Given that Picicca et al. demonstrated that diabetes causes osteocytes to change over time and to upregulate the sclerostin gene, we reason that romosozumab may be an effective drug for treating DOP, which is also a potential research direction (85).
Pharmacological regulation of iron metabolism and anti-ferroptosis therapies for DOP
To the best of our knowledge, there remain no randomized controlled trials examining the effectiveness and safety of anti-ferroptosis medications in individuals with diabetic osteoporosis. There are currently no specific medications for DOP; many studies have explored potential treatment effects only in animal experiments, and many focus simply on prospective therapeutic targets. The distance from animal studies to clinical trials is long; this remains an important research direction, and a potent drug for DOP is eagerly awaited. Figure 3 shows the drugs and potential therapeutic targets related to iron metabolism in the high glucose condition. As indicated previously, iron affects several phosphate and bone disorders (191). Iron homeostasis must always be maintained for healthy cellular activity. Many studies have revealed that a major feature of ferroptosis is aberrant iron metabolism induced by iron excess: increased iron uptake, decreased iron storage, and reduced iron efflux together induce ferroptosis. When Tfr1 on the cell membrane binds circulating iron, six-transmembrane epithelial antigen of prostate 3 (STEAP3) reduces ferric iron to ferrous iron. DMT1 then releases divalent iron into the cytoplasm's labile iron pool (LIP). Notably, because of their significant LIP storage, lysosomes are considered the major organelles responsible for cellular ferroptosis, making them potentially attractive disease targets (192). Moreover, iron overload-induced liver ferroptosis in transferrin-knockout mice is greatly reduced both by treatment with Fer-1 and by hepatocyte-specific Slc39a14 deletion (193). Deferoxamine, an iron chelator that inhibits ferroptosis, has demonstrated clinical potential. Bordbar et al. found that, compared with other regimens, combination therapy with deferasirox and deferoxamine had the greatest effect on lowering blood ferritin (although the difference was modest) and on decreasing bone loss in the lumbar spine and femoral neck (194). Accordingly, Fer-1 was found to be an effective ferroptosis inhibitor because of its ability to scavenge lipid peroxides (195). Emerging studies indicate that ferroptosis is involved in metabolic disease, cardiomyopathy, neurodegeneration, ischemia-reperfusion injury, and cancer (89,196). Targeting ferroptosis may therefore be an effective strategy for treating DOP. Yang et al. applied a mouse model of DOP and established the critical involvement of ferroptosis in DOP-induced osteocyte death both in vivo and in vitro (10).
The increased expression of HO-1 caused heme breakdown and intracellular iron overload, which subsequently triggered lipid oxidation. This mechanism required nuclear factor erythroid 2-like 2 (NRF2) and the direct binding of c-JUN. Furthermore, inhibiting ferroptosis greatly reversed trabecular degeneration and osteocyte death. Iron overload and HO-1 activation are causally connected and may result in a self-feeding vicious cycle. These findings offered prospective therapeutic targets for upcoming DOP therapy plans: ZnPP (an HO-1 inhibitor) and Fer-1. Intriguingly, treatment with Fer-1 consistently produced a better therapeutic outcome than treatment with ZnPP, indicating that using Fer-1 to scavenge intracellular lipid peroxides may be the more effective treatment plan for DOP. Furthermore, their study showed that ZnPP and Fer-1 therapy in diabetic mice also prevented lacunar emptying and osteocyte death, in addition to restoring trabecular balance. In conclusion, blocking the ferroptosis pathway could prevent DOP and osteocyte death in diabetic mice. In addition, system Xc-, an amino acid antiporter made up of the xCT light chain (catalytic subunit, encoded by the SLC7A11 gene) and the heavy chain (chaperone subunit, encoded by SLC3A2) (197, 198), mediates the exchange of extracellular cystine and intracellular glutamate across the cell membrane. The expression level of SLC7A11 is typically positively correlated with the activity of the antiporter, playing a critical role in preventing lipid peroxidation-driven ferroptosis: the light chain encoded by SLC7A11 is responsible for the primary transport activity, while the heavy chain subunit SLC3A2 primarily serves as a chaperone protein (19). Several studies have demonstrated the therapeutic benefits of melatonin, a strong endogenous antioxidant. Given that melatonin can neutralize ROS, could this be a mechanism by which melatonin treats DOP? Ma's study might provide the answer (9). It demonstrated that melatonin can enhance bone microstructure both in vivo and in vitro by inhibiting ferroptosis in osteoblasts: melatonin lowered ROS levels, elevated SLC7A11 levels, and boosted GPX4 activity by activating the NRF2/HO-1 antioxidant pathway. It also reduced the toxicity of lipid peroxides, shielding the membrane system from ferroptosis and enhancing the osteogenic capacity of osteoblasts and the bone microstructure (9). Another study discovered that melatonin can inhibit the ERK signaling pathway and lower osteoblast autophagy levels, delaying the pathological development of DOP (199). For osteoporosis related to other diseases, such as postmenopausal osteoporosis, some specific medicines or therapeutic schedules exist. For example, some researchers have suggested that in certain special types of osteoporosis, anti-resorptive medication should be used after anabolic therapy, similar to the therapeutic sequence used to treat common osteoporosis (200), and in clinical studies the best BMD improvements were seen in postmenopausal women with osteoporosis who received these medications sequentially in this order (201). However, this approach has not yet been established for those with diabetes, and no clinical trial has tested it (190); nevertheless, it could be a potentially effective treatment option. According to Zhang et al.
(93), hepcidin prevents postmenopausal osteoporosis by inducing reductions in iron concentration and PGC-1 expression, which adversely affect osteoclast differentiation; hepcidin may therefore also come to play a role in treating DOP in the future.
Conclusion and outlook
Long-term, poorly controlled diabetes commonly culminates in diabetic bone disease with fragility fractures, which has a considerable impact on socioeconomic and public health burdens. Advances in knowledge of the biological mechanisms and implicated pathways, coupled with improved multiscale imaging of bone, have made it feasible to gain new insights into the increased bone fragility in diabetes at many levels. In this review, we systematically summarize the diverse mechanisms and pathways of ferroptosis in osteoblasts, osteoclasts, and other key cells, and attempt to delineate the regulatory targets of interventions and treatments in clinical practice, applying the identified biomarkers as guides, aiming to highlight the near-term opportunities for elaborating the execution mechanisms and targeted therapeutics of iron metabolism and ferroptosis in T2DOP. For further research, it is necessary to clarify the diagnostic criteria for DOP in patients of varying ages and disease trajectories, and to reach a consensus. Although some studies have explored the mechanisms of iron metabolism and ferroptosis in diabetic bone loss, the mutual effects among these key proteins and pathways remain unclear, and the relative importance of each mechanism in the development of diabetic osteoporosis has not been established; resolving these questions will be important for identifying key therapeutic targets. Finally, there is no specific medicine to treat diabetic patients with osteoporosis; therefore, developing new treatment strategies for patients with DOP is promising and significant, and advances in iron metabolism and ferroptosis are particularly noteworthy.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
One year after the COVID-19 outbreak in Germany: long-term changes in depression, anxiety, loneliness, distress and life satisfaction
Several studies have linked the COVID-19 pandemic to unfavorable mental health outcomes. However, we know little about long-term changes in mental health due to the pandemic so far. Here, we used longitudinal data from a general population sample of 1388 adults from Germany, who were initially assessed between April and May 2020 (i.e., at the beginning of the COVID-19 pandemic in Germany) and prospectively followed up after 6 (n = 1082) and 12 months (n = 945). Depressive and anxiety symptoms as well as loneliness did not change from baseline to 6-month follow-up. While anxiety symptoms did not change in the long run, depressive symptoms and loneliness increased and life satisfaction decreased from baseline to 12-month follow-up. Moreover, vulnerable groups such as younger individuals or those with a history of mental disorders exhibited an overall higher level of psychopathological symptoms across all assessment waves. Our findings suggest a deterioration in mental health during the course of the COVID-19 pandemic, which emphasizes the importance of implementing targeted health promotion to prevent further symptom escalation, especially in vulnerable groups.
Introduction
The COVID-19 pandemic and related lockdown measures have disrupted people's everyday life. Specifically, social distancing measures have reduced social and physical activities and, thus, increased the risk of social isolation [1][2][3]. Moreover, financial and job insecurities as well as worries about people's own health and the health of loved ones might have led to increased distress [4,5]. Thus, the present pandemic situation is assumed to threaten mental health [6], especially in vulnerable populations [7,8]. In fact, several studies reported a worldwide increase of depressive and anxiety symptoms, loneliness, and distress from the time before the pandemic to the first wave of the pandemic (see [9] for a review) [10][11][12]. Moreover, these longitudinal and additional cross-sectional studies have identified sociodemographic correlates and risk factors (e.g., younger age, living without a partner, a previous mental illness, lower educational level, being unemployed) of elevated distress and psychopathological symptoms during the pandemic [13][14][15][16][17][18][19]. These studies have helped to identify individuals at risk for short-term mental health problems at the beginning of the pandemic. However, for an adequate implementation of further lockdown measures and targeted mental health interventions, it is crucial to (a) examine long-term mental health trajectories beyond the first months of the pandemic and to (b) identify vulnerable groups with particularly unfavorable trajectories. Recent prospective longitudinal studies investigating the course of mental health during the first months of the pandemic have demonstrated that mental health problems (i.e., general mental health and distress, as well as depressive and anxiety symptoms) decreased while lockdown measures were eased after the first COVID-19 outbreak [10,13,[20][21][22][23]. Interestingly, the recovery of mental health problems was observed to be stronger in vulnerable populations such as women (vs. men), younger (vs. older) individuals, individuals with a lower (vs. a higher) educational level, and those with (vs.
without) children [13,22]; although the level of mental health problems remained elevated in these specific populations even after easing of the first lockdown in the UK [22]. However, after the easing of the COVID-19 situation in summer 2020, lockdown measures in several countries, including Germany, were repeatedly tightened and extended in response to recurrent increases in COVID-19 cases. According to vulnerability-stress models [24], one would assume that repeated distress and social isolation resulting from the repeated implementation of lockdown restrictions might be associated with a worsening of mental health in the long run, especially in vulnerable groups. However, we still know little about long-term changes in mental health up to 1 year after the COVID-19 outbreak. Thus, it remains unresolved whether repeated implementations of lockdown restrictions confer an increased risk for an escalation of mental health impairments in the general population and in particularly vulnerable subgroups. Improved knowledge on this question would help to inform policymakers and the health care system in implementing targeted strategies to prevent adverse long-term effects on mental health. In the current study, we analyzed data from a general population sample of 1388 adults, who were initially assessed from April to May 2020 (i.e., during the first COVID-19 wave in Germany) and prospectively followed up after 6 (i.e., from November to December 2020, during the second COVID-19 wave in Germany) and 12 months (i.e., from May to June 2021, during the third COVID-19 wave in Germany). The aim was (a) to model long-term changes in mental health up to 1 year after the initial COVID-19 outbreak in Germany and (b) to assess whether these changes were more unfavorable in particularly vulnerable groups (e.g., women, younger individuals, individuals with a previous mental illness).
Participants
We used data from a non-probability sample of the general population in Germany assessed at the beginning of the pandemic (see [14]) and then prospectively followed up after 6 and 12 months, considering only data from those individuals who participated in at least two assessment time points. In this longitudinal study, a total of 1388 participants repeatedly completed an online survey (soscisurvey.de) over 1 year (see panel A of Fig. 1 for an overview of the study design and study sample). The first assessment (baseline; n = 1388) started during the first peak of the COVID-19 pandemic in Germany, between April 17th and May 15th 2020, that is, four weeks after all German federal states had implemented public health measures (see Fig. 1 for further information on the containment measures imposed at the time of the assessment). The second assessment (6-month follow-up; n = 1082) was conducted between November 19th and December 8th 2020. At this time, COVID-19 cases were rapidly increasing and lockdown measures were extended and tightened (see Fig. 1). The third assessment (1-year follow-up; n = 945) was conducted between May 12th and June 14th 2021, that is, at the end of the third COVID-19 wave in Germany (see Fig. 1). As can be seen in Fig. 1B and C, the severity of lockdown-related restrictions during the 1-year follow-up was comparable to the severity of lockdown-related restrictions during baseline.
However, at the 6-month follow-up, the overall stringency index and the level of strictness of certain containment measures were slightly lower relative to the time of the baseline and 12-month follow-up assessments (see Fig. 1B and C). A total of 639 individuals participated at all three assessment time points. Participants were recruited via convenience sampling methods (e.g., via social media, personal contacts, or email). All participants provided informed consent. The study was approved by the local Ethics Committee of the University of Marburg (2020-33k).
Fig. 1 (legend, panels B and C). B: the overall government response stringency index [25] during the COVID-19 pandemic in Germany (March 2020 to July 2021); the gray bars represent the time points and durations of the three assessment waves (T0: baseline, T1: 6-month follow-up, T2: 12-month follow-up). C: strictness of the containment and closure policies during the assessment time points (a higher score represents a higher level of strictness). The levels of strictness of the listed containment and closure policies are used to calculate the stringency index (i.e., the overall level of the government's response, see panel B). Values in parentheses represent the range of the restriction due to the containment and closure policies. Please see [25] for further information on the coding of the different levels of strictness. Data were obtained from the Oxford COVID-19 Government Response Tracker [25].
Measures
At baseline, several sociodemographic factors were assessed (see Table 1). We also asked participants to indicate whether they do or do not belong to an officially designated risk group for a severe COVID-19 disease progression (COVID-19 risk group). Moreover, the following psychological outcomes were measured: depressive symptoms were assessed with the Patient Health Questionnaire-9 (PHQ-9; [26]). Generalized anxiety was assessed with the 7-item Generalized Anxiety Disorder scale (GAD-7; [27,28]). Loneliness was assessed with the 3-item version of the UCLA Loneliness Scale [29]. Psychosocial distress (e.g., due to financial problems or worries, distress at work, distress resulting from childcare) was assessed with the Stress module of the Patient Health Questionnaire. Finally, and as in previous research (see [30]), general life satisfaction was assessed with a single item ("All things considered, how satisfied are you with your life these days?") and an 11-point Likert scale ranging from 0 (completely dissatisfied) to 10 (completely satisfied).
Data analysis
Statistical analyses were conducted with SPSS 26 (SPSS for Windows, IBM). All analyses were conducted using mixed regression models with repeated measurement occasions (i.e., assessment time points, Level 1) nested or clustered within persons (Level 2). In all analyses, fixed-effect regression models with an underlying compound symmetry covariance matrix and a restricted maximum likelihood estimation were used. To examine the change in psychological outcomes from the baseline to the 6-month and 12-month follow-up assessments, the assessment time point was dummy-coded (T0 vs. T1 and T0 vs. T2) and both dummy-coded variables were entered as continuous predictors into the regression models. First, psychological outcome measures were regressed on sociodemographic/risk factors and the dummy-coded assessment time point (T0 vs. T1 and T0 vs. T2) as multiple predictors.
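For readers who want to reproduce this two-step modeling approach outside of SPSS (the second, interaction step is described in the next paragraph), a minimal sketch in Python with pandas and statsmodels is given below. The file name and column names (pid, wave, phq9, age, female, history_mental_disorder) are illustrative assumptions, not part of the original study; a person-level random intercept is used because it induces the compound-symmetry covariance structure named above.

import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per person x assessment wave (illustrative file).
df = pd.read_csv("long_format_data.csv")

# Dummy-code the assessment time point, with baseline (T0) as the reference.
df["t1"] = (df["wave"] == "T1").astype(int)  # T0 vs. 6-month follow-up
df["t2"] = (df["wave"] == "T2").astype(int)  # T0 vs. 12-month follow-up

# Step 1: time dummies and sociodemographic/risk factors as fixed effects,
# with a random intercept per person; REML estimation as in the paper.
m1 = smf.mixedlm("phq9 ~ t1 + t2 + age + female + history_mental_disorder",
                 data=df, groups=df["pid"]).fit(reml=True)
print(m1.summary())

# Step 2: add time-by-factor interactions to test whether the change from
# baseline differs between subgroups (here, moderation by age).
m2 = smf.mixedlm("phq9 ~ (t1 + t2) * age + female + history_mental_disorder",
                 data=df, groups=df["pid"]).fit(reml=True)
print(m2.summary())

In this sketch, the coefficients on t1 and t2 estimate the within-person change from baseline to each follow-up, and the interaction terms in the second model test whether that change differs across levels of the moderator.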
Second, interaction terms between the dummy-coded assessment time point and each sociodemographic/risk factor were computed and added to the analyses to explore whether symptom changes differed between individuals with and without specific risk factors. The alpha level was set at 0.05. Our main analyses on the effects of sociodemographic factors and the assessment time point on mental health refer to nine different sociodemographic factors (gender, age, educational level, employment, relationship, living alone, living with underage children, current or previous psychiatric/psychotherapeutic treatment, COVID-19 risk group) and two dummy-coded timing variables (T0 vs. T1, T0 vs. T2) × five outcomes (depressive symptoms, anxiety symptoms, loneliness, distress, and life satisfaction). Our main analyses on the interaction effects of sociodemographic factors on the change of mental health from baseline to the 6-month and 12-month follow-up refer to two time-dependent effects (6-month follow-up, 12-month follow-up) × five outcomes (depressive symptoms, anxiety symptoms, loneliness, distress, and life satisfaction) × nine different sociodemographic factors (gender, age, educational level, employment, relationship, living alone, living with underage children, current or previous psychiatric/psychotherapeutic treatment, COVID-19 risk group). We did not adjust for multiple testing because each effect refers to another research question based on clearly distinguishable constructs [31]. However, researchers who believe that adjustment for multiple testing is necessary may refer to this number of effects.
Effect of sociodemographic/risk factors on psychological outcomes
Associations between sociodemographic/risk factors and psychological outcomes are presented in Table 1. Younger age, a lower educational level, a history of mental disorders and belonging to a COVID-19 risk group were associated with increased anxiety and depressive symptoms. Younger age, living alone and a history of mental disorders were associated with higher loneliness. Female sex, younger age, lower educational level, cohabiting with children, a history of mental disorders and belonging to a COVID-19 risk group were associated with elevated psychosocial distress. Female sex, older age, higher educational level, cohabiting with a partner and no history of mental health disorders were associated with higher life satisfaction.
Effects of sociodemographic and COVID-19-related factors on the change in depressive and anxiety symptoms, loneliness, distress and life satisfaction
The course of the psychological outcome measures across the assessment waves is shown in Table 2, and the interaction effects with sociodemographic factors in Table 3. There was a stronger decrease in psychological distress from baseline to the 6-month follow-up in individuals with (vs. without) underage children (b = −0.56, SE = 0.20, p = 0.006), as well as a stronger reduction in distress from baseline to 12-month follow-up in unemployed (vs. employed) individuals (b = −0.62, SE = 0.27, p = 0.021). Employed (vs. unemployed) individuals and those without (vs. with) a history of mental disorders showed a stronger decrease in life satisfaction from baseline to the 6-month and 12-month follow-up (ps < 0.034, see Table 3). Life satisfaction decreased from baseline to 12-month follow-up in those with a high (vs. low) educational level (b = −1.03, SE = 0.49, p = 0.036). Unemployed/non-working individuals showed a stronger reduction in loneliness from baseline to the 6-month follow-up assessment than employed individuals (b = −0.35, SE = 0.13, p = 0.008).
Other variables did not modulate the change in depressive and anxiety symptoms, loneliness, distress or life satisfaction (see Table 3).
Discussion
Studies investigating the long-term consequences of the ongoing COVID-19 pandemic on mental health are still rare. However, the study of potential long-term consequences of the COVID-19 pandemic is important to inform the health care system and to implement preventive strategies to reduce potential negative mental health consequences. Therefore, we investigated how depression, anxiety, distress, loneliness and life satisfaction changed longitudinally over the course of 1 year, from the first to the second and third wave of the pandemic. Moreover, we investigated whether longitudinal changes differed between individuals with vs. without specific sociodemographic characteristics and risk factors (e.g., women vs. men and younger vs. older individuals). The present study documents a long-term deterioration of mental health during the COVID-19 pandemic in Germany. Specifically, we observed an increase of depressive symptoms and loneliness as well as a decrease in life satisfaction from the beginning of the COVID-19 pandemic to the 1-year follow-up. Anxiety symptoms persisted at a high level over the 1-year follow-up period. In contrast to these long-term effects, we found no change in loneliness, anxiety, and depressive symptoms in the short run (i.e., from baseline to the 6-month follow-up assessment), corroborating previous longitudinal data using a 6-month follow-up period [32,33]. However, life satisfaction and psychosocial distress decreased during the same period. Moreover, we identified vulnerable groups (e.g., younger individuals) who were at increased risk not only for an overall higher level of psychopathological symptoms across all assessment time points but also for a short-term deterioration of mental health problems. In the present study, depressive symptoms did not change in the short run (i.e., from the first to the second COVID-19 wave in Germany). However, after 1 year, we observed a worsening of depressive symptoms relative to the beginning of the COVID-19 pandemic in Germany, which is in line with evidence from a longitudinal study among US adults [34]. Moreover, our data are in line with findings from a longitudinal population-based survey (COVID-19 Snapshot Monitoring) in Germany demonstrating that, at the time of our 12-month follow-up assessment, individuals felt more burdened than during the baseline and 6-month follow-up assessments [35,36]. Importantly, previous longitudinal studies conducted before the COVID-19 pandemic (i.e., under non-pandemic conditions) did not observe such significant changes in mental health problems over time [13], suggesting that the increase in depressive symptoms during the COVID-19 pandemic is not the result of annual or seasonal variations. It is of note that, between the 6-month and 12-month follow-up assessments, two long-lasting and highly restrictive lockdowns were imposed in response to increases in COVID-19 cases in Germany. However, the degree of lockdown-related restrictions during the 1-year follow-up assessment was comparable to the level of restrictions present during the baseline assessment (see Fig. 1). One might suggest that repeated and long-lasting restrictions and isolation led to an increase in depressive symptoms, while anxiety symptoms persisted at a high level over the 1-year follow-up period.
Most interestingly, the increase in depression was accompanied by an increase in loneliness and a reduction in life satisfaction. In contrast, during the same period, general psychosocial distress continuously decreased. Thus, the present data might suggest that the deterioration of depressive symptoms during the pandemic is linked to increased loneliness and lower life satisfaction in response to the reduction of social contacts and to social isolation, rather than to an overall higher level of psychosocial distress related to the pandemic situation. This finding corresponds to previous studies demonstrating that loneliness and social isolation are important risk factors for the onset or increase of depressive symptoms [37][38][39][40]. Moreover, the worsening of depressive symptoms and loneliness in the long run was preceded by a decline in life satisfaction, indicating that life satisfaction may serve as a sensitive marker or early indicator for a subsequent deterioration of psychopathological symptoms [41]. In line with evidence from several cross-sectional and longitudinal studies worldwide [13-16, 18-20, 22], we demonstrated that, across all assessment waves, younger age, a lower educational level, a history of mental disorders and belonging to a COVID-19 risk group are risk factors for high levels of depression, anxiety and distress and for decreased life satisfaction during the pandemic. Moreover, younger age, living alone and a history of mental disorders were associated with increased loneliness. Corroborating previous position papers that predicted an increase of mental health problems in specific populations [7,8], we identified vulnerable groups with particularly unfavorable trajectories in the short term. For example, we found that younger individuals showed an increase in depressive and anxiety symptoms, while depression and anxiety symptoms decreased in older individuals. Moreover, females reported a slight increase in anxiety, while males exhibited a decrease in anxiety symptoms. These findings suggest that especially vulnerable groups failed to cope with the renewed tightening of lockdown restrictions and did not adapt as well to the ongoing pandemic situation as older individuals or men. Thus, these vulnerable groups might need tailored support to prevent a further escalation of symptoms. Surprisingly, the observed long-term increase in depression was much more pronounced in individuals without a history of mental disorders, while the level of depressive symptoms persisted at a high level over the 1-year follow-up period in those with a history of mental disorders. The present study should be considered in the light of the following limitations. First, participants were recruited via convenience sampling methods, which may lead to biases in the recruited sample (over- or under-representation of population groups) and, thus, may limit the generalizability of the present findings to the general population of Germany. In fact, as a result of this recruitment method, older respondents, men, and individuals with a lower educational level were relatively underrepresented in the present sample, which might limit the generalizability of the findings, especially to these population subgroups. Thus, the present findings should be validated using representative probability samples.
A relatively high number of participants were lost to follow-up (45% of respondents participated in at least two assessment waves), which, however, is within the expected attrition rate of 30 to 70% reported for longitudinal studies [11,13,32,42]. Notwithstanding this, the relatively high attrition rate in the present study should be considered when interpreting the present results with regard to the generalizability of the findings to the general population. Our study relied exclusively on self-report data, which might have been subject to memory and recall biases. Please also note that we mainly focused on internalizing symptoms (i.e., depression and anxiety); thus, additional studies are needed to investigate whether long-term changes during the COVID-19 pandemic were similar for externalizing symptoms (e.g., anger, aggression, alcohol abuse) [6]. Moreover, the stringency of lockdown measures was relatively high and comparable across all assessment waves. However, there is evidence that general mental health problems as well as depressive and anxiety symptoms significantly decreased during summer 2020 and 2021, i.e., during the easing of lockdown restrictions in Germany and other European countries [10,13,22,23,35,36]. For example, data from a longitudinal study in Germany revealed a decrease in depressive and anxiety symptoms from April to June 2020, i.e., during the easing of the first lockdown [23]. Thus, it might be that, after an initial reduction in psychopathological symptoms during the easing of the lockdown in Germany, symptoms subsequently increased due to the tightening of lockdown restrictions. However, due to the relatively low temporal resolution of the assessment waves, we were not able to reveal such potential changes in the present study.
Conclusion
In the present longitudinal observational study, we found no symptom change in the short run but a worsening of depressive symptoms, loneliness and life satisfaction in the long run. Younger individuals were identified as a risk group for overall higher levels of mental health problems and unfavorable trajectories of mental health outcomes. In line with vulnerability-stress models [24], the observed worsening of depressive symptoms may increase the risk for the onset or further deterioration of psychological disorders, which may lead to a greater need for psychiatric or psychological treatment. This risk for developing psychopathological symptoms might be further increased in vulnerable groups (e.g., younger individuals) due to the overall higher psychopathological symptom levels already present during the initial phase of the COVID-19 pandemic. Therefore, to prevent or mitigate these adverse long-term mental health consequences, interventions or prevention strategies should be implemented, especially in vulnerable populations. Specifically, according to the results of the present study, these interventions should target feelings of social isolation, loneliness and life satisfaction to counteract the deterioration or persistence of anxiety and depressive symptoms. For example, based on evidence indicating that higher levels of social support and more frequent social contacts were associated with lower depressive symptoms [14,16,43], interventions should employ strategies to boost social support and increase the number of social contacts.
However, given that we found increases in mental health problems in individuals not identified as at-risk persons in previous studies (e.g., individuals with no history of mental disorders), special attention should also be paid to the long-term trajectories of people who are not supposed to be at higher risk for adverse mental health consequences.
Funding
Open Access funding enabled and organized by Projekt DEAL. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. All authors report no financial relationships with commercial interests.
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical standards
The authors assert that all procedures contributing to this work have been approved by the local Ethics Committee of the University of Marburg (2020-33k) and have therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Consent to participate
Informed consent was obtained from all participants included in the study.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Developing capacity for implementation and evaluation of vaccine trials in Uganda: Perspective of the Makerere University Walter Reed Project
Introduction: Infectious diseases and neglected tropical diseases continue to be a major challenge in resource limited settings, causing significant morbidity and mortality. Although vaccines are a key biomedical prevention tool, resource limited settings often lack the infrastructure, regulatory frameworks, and skilled human resources to conduct vaccine clinical trials. To address this gap, the Makerere University Walter Reed Project (MUWRP) was established and has contributed to vaccine research in Uganda and globally. Methods: This was achieved through training a strong vaccine clinical trial workforce; development of requisite clinical trial infrastructure for research activities and management of investigational products; conducting phase I-III vaccine trials; and contribution to national ethical and regulatory frameworks that protect participants. Results: As of 2022, MUWRP had successfully conducted and completed five phase I/II HIV vaccine clinical trials and five for Ebola and Marburg, while one phase I/II schistosomiasis and one phase III COVID-19 vaccine clinical trial are ongoing. Discussion: The completed vaccine trials provided critical scientific knowledge on the safety and immunogenicity of investigational products, which informed the design of better vaccines for diseases of global health importance. Conclusion: Academia, through the establishment of appropriate partnerships, can contribute to the identification of solutions to complex public health challenges.
Introduction
The development of safe and effective vaccines has been one of the great achievements of public health, and there are now vaccines to 25 infectious diseases 1. However, there is an urgent need to develop vaccines for prevalent, endemic and emerging infections, and the COVID-19 pandemic further underscored the relevance of vaccine development for the prevention of infectious diseases of public health concern 2. That said, there is a paucity of information on developing capability for vaccine research in sub-Saharan Africa 3, although numerous publications exist 4-6.
Methods
About MUWRP: MUWRP is a biomedical research organization whose mission is to mitigate disease threats through quality research, health care and disease surveillance. The project's scope includes clinical research; the Presidential Emergency Plan for AIDS Relief (PEPFAR) program; surveillance, one health and global health security; and the vaccine trial capability program, whose primary objective is to establish a strong clinical trial workforce; develop critical infrastructure for research activities, data management, information technology, laboratory testing and storage; create research awareness, buy-in and political will; and institute ethical and regulatory frameworks that protect participants. Hereunder we describe how these capabilities were achieved.
Human Resource Development
Vaccine clinical trials must be conducted to international standards in order to generate credible data that informs prioritization of biological products 11. To achieve this, MUWRP developed a diverse workforce including physicians, nurses, laboratory scientists, clinical research coordinators, pharmacists, quality and compliance specialists, data management staff, and grants managers for clinical trial implementation. Training included general clinical trial training, Good Clinical Practice (GCP), Good Clinical Laboratory Practice (GCLP), Human Subjects' Protection (HSP), and the standards of the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) 12.
Regular competence assessments to ensure maintenance of these skills are conducted throughout the period of employment. Collectively, the staff have received extensive training in these areas.
Establishing VCT Infrastructure
MUWRP's infrastructure development was guided by ICH and international partners. Initial space allocated at MakCHS was developed to accommodate the research clinic, laboratory and administrative offices. The research clinic has 12 well-equipped clinic rooms for private physical examination and counselling, a well-equipped phlebotomy room, a waiting area, plus secure records storage 13. A research pharmacy equipped with a biosafety level II (BSL-2) cabinet, freezers (-20°C and -40°C) and 2 refrigerators monitored by digital thermometers with temperature and humidity data loggers was established. It manages investigational products according to national and international requirements, and access to the pharmacy is restricted by key and biometrics.
Data Management and IT Infrastructure
Integrity, safety, and validity of trial data are critical to the evaluation of vaccines. MUWRP instituted a data management unit to capture and process vaccine trial data according to Health Insurance Portability and Accountability Act (HIPAA) privacy standards, supported by virtualized enterprise-grade core servers and clusters. A secure platform that hosts relational databases and synchronously replicates data backups across redundant secondary sites was also developed. Investments were made in a defense-in-depth approach which secures all mobile devices, workstations and servers through endpoint signature-based security and web proxies. Data quality is ensured by training data entry and management specialists and by acquiring analytical infrastructure and software (e.g., the Clinic Appointment Scheduling & Tracking (CAST) system, which tracks attendance, scheduling and follow-up for vaccine trial participants).
Laboratory Capacity
Laboratory capacity for moderate and highly complex testing is a challenge in resource limited settings 14-16. MUWRP established its clinical and research BSL-2 laboratory at MakCHS and immediately instituted processes leading to College of American Pathologists' accreditation in 2005. This accreditation has been maintained through a rigorous continuous quality assurance and quality improvement program, based on GCLP 17. The laboratory has capacity for safety tests, light microscopy, diagnostic immunology, cellular immunology, molecular diagnostics, and biological specimen processing and cryopreservation. A robust laboratory information management system was instituted, and ultra-low-temperature and liquid nitrogen freezers supported by 2 liquid nitrogen plants were established. There is in-house capacity for shipping specimens for complex testing following International Air Transport Association standards. All laboratory work is handled by scientists and technologists.
Financial Management Capability
The financial capability of institutions to manage grants and procure supplies is essential for clinical trial conduct. MUWRP invested in a robust accounting system that supports budgeting, expenditure tracking, forecasting and reporting per funder requirements. MUWRP further adheres to Generally Accepted Accounting Principles and cost principles of (U.S.) federal contracting that meet the standards of the Comptroller General of the U.S. Government, and is regularly audited by internal and external auditors. Relatedly, MUWRP developed a supply chain management system that adheres to internationally acceptable procurement principles and practices in accordance with the Federal Acquisition Regulations of the U.S.
Community engagement
Successful vaccine trials require the participation of communities most affected by the disease in question.
Community engagement entails research education, efforts to allay fears and correct myths and misconceptions, and winning community trust to create partnerships in vaccine development [20][21][22]. MUWRP created platforms where members of the community, policy makers and researchers work together to address health-related issues. MUWRP further formed a community advisory board with representation from former trial participants, religious leaders, the media, politicians, civil society, and key and vulnerable populations, who meet regularly to review research activities so that studies are relevant to the population and community voices are heard. Policy makers and politicians are engaged on the importance of vaccine research in the country (Picture 1). When a clinical trial requires participation of vulnerable populations, peer leaders support recruitment and retention. Table 1 highlights some of the community engagement challenges and their solutions.
Table 1. Community engagement challenges and solutions.
Challenge: Access to key populations such as commercial sex workers. Solution: Use of sub-advisory boards that are population-focused; an improvement on the snowballing mechanisms of identifying potential participants.
Challenge: Participant expectations for healthcare beyond the provisions and lifetime of the trial. Solution: Continuous participant education on the trial requirements, provisions and limitations.
Picture 1: Dr. Kibuuka (Executive Director) addressing parliamentarians about HIV vaccine research.
Instituting an Ethical and Regulatory Framework for VCTs
Early vaccine trials in Uganda and Africa raised many questions concerning the availability of an ethical and regulatory framework for equity in biomedical research in Uganda 4, 10, 23, 24. MUWRP set up a regulatory, quality and compliance department to ensure trial compliance with local (Research Ethics Committees, Uganda National Council for Science and Technology, etc.) guidelines for research involving human subjects. This department scrutinizes protocols prior to submission for review to national regulatory authorities, tracks and follows up all regulatory correspondence, internally monitors trials, reports adverse events and deviations, organizes continual staff trainings, submits annual progress reports, obtains subsequent trial approvals in a timely manner, and maintains all pertinent documentation. All these processes are guided by a quality management plan.
Training on laboratory research and quality systems
MUWRP offers internships to students and graduates from Makerere and other universities on laboratory processes, GCLP, biosafety, among others. Students are actively supported to utilize the laboratory infrastructure for postgraduate research, including basic sciences projects. In addition, the laboratory conducts didactic and hands-on training on GCLP for other clinical research sites (CRS) and private health facilities in Uganda.
Results
In light of the above processes, MUWRP has grown into a mature clinical research organization: it has supported the training of master of science students and 3 doctor of philosophy students, and has supported the improvement of quality management systems in two regional referral hospitals (Fort Portal and Kayunga), leading to laboratory accreditation 25. In addition, MUWRP has successfully conducted 12 phase I-III vaccine clinical trials; the trials conducted and their key outcomes are summarized below. RV 156 (2004) evaluated the safety and immunogenicity of a multiclade HIV-1 DNA vaccine; the vaccine was well-tolerated, and this trial laid the foundation for the design of RV 172 (2006), which evaluated a multiclade DNA prime and recombinant adenovirus-vectored boost in HIV-uninfected adults in East Africa. The vaccine was safe and well tolerated, with immune responses detected in 63% of vaccinees, and titers of preexisting adenovirus serotype 5 (Ad5) neutralizing antibody did not affect the frequency and magnitude of T cell responses in prime-boost recipients 26.
The vaccines were further tested in unique subpopulations in a phase 2b trial in the US 27. RV 247 (2009) was the first Ebola or Marburg vaccine clinical trial in Africa, and the results showed that, given separately or together, both vaccines were well tolerated and elicited antigen-specific humoral and cellular immune responses, thus contributing to expedited development of more potent Ebola virus vaccines that use the same wild-type glycoprotein antigens. RV 262 (2012) evaluated an HIV-1 DNA vaccine (including GAG) administered by intramuscular Biojector® 2000 or by the Cellectra® intramuscular electroporation device, followed by a boost. Results showed that cellular responses were observed, including high rates of binding antibody responses to CRF01_AE antigens. Electroporation did not confer an advantage for the prime in this regimen.
Discussion
Vaccines are central to the control of infectious diseases. The current efforts to reach herd immunity against COVID-19 underscore the role of vaccine research in addressing national and global public health threats. There is need for stronger vaccine trial capability in resource limited settings for effective response to emerging and re-emerging infectious disease threats. This article has highlighted MUWRP's progress towards setting up such capability and discusses below the impact of these initiatives on national and global efforts in the prevention of infectious diseases.
Impact in Uganda and globally
Although the majority of vaccine clinical trials presented are early phase trials, the knowledge attained has contributed to the development of vectored vaccines that were tested during the 2014-2016 Ebola virus disease outbreak in West Africa 32. The Ebola vaccines tested at two CRS in Uganda (including MUWRP) 31 were approved by the European Union for prophylactic use during Ebola outbreaks. In addition, safety and immunogenicity data have contributed to the body of knowledge guiding the design of investigational products for the prevention 26 and treatment 33 of diseases with a critical need for new preventive interventions.
Contribution to Building Capacity of other Vaccine Research Institutions in Uganda
MUWRP has contributed to the critical mass of clinical, laboratory and sociological researchers in the country. Many staff who trained and worked at MUWRP have moved on to successfully set up or support other CRSs, establishing a foundation for locally designed research protocols. Staff actively contributed to the development of the National Guidelines for Research involving Humans as Research Participants, the Uganda National Health Laboratory Services policy and the national biosafety and biosecurity guidelines and regulations. MUWRP shares knowledge generated from vaccine clinical trials through presentations at national and international scientific meetings. In addition, staff are members of key Ministry of Health technical working groups. Engagements within and outside Makerere University have resulted in collaborations with both local and international colleges, universities, pharmaceutical and research bodies that have created new training and research opportunities for scientists in Uganda. MUWRP's state-of-the-art infrastructure continues to build national capacity and capability for the conduct of complex vaccine trials through training, mentorship and internships.
Conclusion
Clinical trials are essential for the evaluation of vaccine products for the prevention of diseases, but the lack of adequate infrastructure, skilled personnel and effective ethical and regulatory frameworks may limit their conduct in low resource settings.
This article summarizes the efforts and results of building local capacity and capabilities for the successful conduct of impactful vaccine clinical trials, and demonstrates how partnerships between academia (MakCHS) and non-academic organizations can develop skills, infrastructure and frameworks for the evaluation of appropriate solutions to infectious disease threats.
BEX2 promotes tumor proliferation in colorectal cancer
BEX2 has been suggested to promote tumor growth in breast cancer and glioblastoma, while inhibiting the proliferation of glioma cells; thus, the role of BEX2 in tumors remains debated. Additionally, the biological functions of BEX2 in colorectal cancer (CRC) have not yet been clarified. Here, we report that BEX2 was overexpressed in advanced CRC in both the GSE14333 database and fresh CRC tissue specimens, and positively correlated with clinical staging. Knockdown of BEX2 significantly decreased the in vitro proliferation of SW620 colorectal cancer cells, suppressed subcutaneous xenograft growth and enhanced the survival of mice with cecal tumors. These effects were mainly mediated by the JNK/c-Jun pathway. Knockdown of BEX2 inhibited JNK/c-Jun phosphorylation, while BEX2 overexpression activated JNK/c-Jun phosphorylation. Moreover, the administration of the JNK-specific inhibitor SP600125 to SW620 cells overexpressing BEX2 abolished the effect of BEX2 on SW620 cell proliferation. This study reveals that BEX2 promotes colorectal cancer cell proliferation via the JNK/c-Jun pathway, suggesting BEX2 as a potential candidate target for the treatment of CRC.
Introduction
Colorectal cancer (CRC) is a devastating malignant disease, ranking as the second most common cause of cancer death in Western countries [1] and the fifth most common cause of cancer death in China [2]. Despite the great achievements made in conventional cytotoxic chemotherapy for patients with CRC over the last several decades, the overall patient survival rate has not significantly improved. Although the introduction of the anti-EGFR antibodies cetuximab [3] and panitumumab [4] has improved the survival of CRC patients with the wild-type K-Ras gene, there is still an urgent need for new effective therapeutics. It is anticipated that only new drugs with novel targets will improve the state of CRC care [5]. However, our understanding of the mechanisms of colorectal cancer development is still limited. Thus, it is crucial to further explore the molecular events of CRC biology, which will facilitate the discovery of novel therapeutic targets. Brain-expressed X-linked (BEX) genes are a family of genes that reside on the mammalian X chromosome, and BEX proteins are involved in the cell cycle, cancer and tumor growth [6,7]. BEX2 is highly expressed in the embryonic brain and can interact with the transcription factor LMO2 to regulate transcriptional activity [8]. In addition, BEX2 has been reported to be involved in tumor development in several types of cancer, such as glioblastoma, glioma and breast cancer [9][10][11]. Naderi et al. found that BEX2 was up-regulated in a subset of primary breast cancers and that down-regulation of BEX2 induced G1-phase arrest in breast cancer cell lines [10]. BEX2 is also highly expressed in glioblastomas and promotes cell proliferation and survival by mediating nuclear factor-kappa B activity [11]. However, other studies have obtained opposite findings, showing that BEX2 inhibits cell proliferation. For example, Foltz et al. showed that BEX2 was epigenetically silenced in primary glioma cells and exhibited extensive promoter hypermethylation, and BEX2 re-expression resulted in significant suppression of tumor growth [12].
Furthermore, overexpression of BEX2 in mouse pro-B and myeloid cells resulted in decreased FLT3 (FMS-like tyrosine kinase-3)-ITD (internal tandem duplication)-dependent cell proliferation [13]. Additionally, decreased BEX2 expression in Hs683 oligodendroglioma cells did not lead to changes in cell proliferation [14]. Thus, BEX2 appears to exhibit different expression patterns and functions in different types of tumors. However, the expression and function of BEX2 in colorectal cancer remain unknown. Here, to better characterize the role of BEX2 in CRC, we sought to explore the expression of BEX2 in CRC specimens at different tumor stages. BEX2 was shown to be associated with more aggressive characteristics in colorectal cancer. Subsequently, knockdown of BEX2 was shown to inhibit cell growth in a colorectal cancer cell line both in vitro and in vivo. Furthermore, the effects of BEX2 on cell proliferation appeared to be mediated by the JNK/c-Jun pathway. This study reveals that BEX2 plays an important role in cell proliferation in colorectal cancer and suggests that BEX2 might be a novel candidate target for the treatment of CRC.
Validation of the relationship between the expression of BEX2 and clinical manifestations of illness in an independent dataset
The relationship between the expression of BEX2 and clinical manifestations of illness was validated using a publicly available independent microarray dataset (GSE14333) by Jorissen et al. [15]. The GSE14333 dataset consisted of 290 patients (Supplementary Table 1) with CRC and was downloaded from the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo).
Patients and specimens
Thirty-four patients (Supplementary Table 2) who were diagnosed with CRC and underwent colectomy between 2013 and 2014 were enrolled in this study. None of the enrolled patients had received any prior chemotherapy or radiotherapy. The disease stage was determined according to the pathological tumor-node-metastasis (pTNM) classification system [16]. Surgically resected colorectal cancer specimens and paired normal mucosal tissues banked at -80°C were obtained. Tumors were de-identified in accordance with the protocols approved by the Institutional Review Board (IRB) of the Second Affiliated Hospital of Zhejiang University School of Medicine.
Cell culture and SP600125 treatment
All of the cells were maintained in RPMI-1640 (Jinuo Biotech, HangZhou, China) supplemented with 10% fetal bovine serum (FBS) (Life Technologies, Carlsbad, CA, USA), penicillin and streptomycin at 37°C with 5% CO2. SW620 cells were obtained from the American Type Culture Collection (ATCC) (Rockville, MD, USA). The SW620/shBEX2 cell line and the SW620/Ctrl cell line were established by stably expressing shRNA targeting BEX2 or a scramble control shRNA in SW620 cells, as described below. SW620/BEX2 cells and SW620/vector cells were established by transfecting the mammalian expression vectors pCMV-Myc-BEX2 and pCMV-Myc, as described below. SP600125, a selective inhibitor of JNK [17], was purchased from Selleck Chemicals Company (Houston, TX, USA). SW620/Ctrl cells were treated with 10 µM SP600125 for 72 hours before protein extraction or cell proliferation analysis.
Establishment of the SW620/shBEX2 and SW620/Ctrl cell lines
Lentiviral particles containing a validated short hairpin RNA directed against BEX2 (sc-60271-V) and the corresponding scramble control (sc-108080) were purchased from Santa Cruz Biotechnology (Santa Cruz, CA).
Lentiviral infection was performed according to the manufacturer's instructions. Briefly, SW620 cells were plated at 50% confluence. On the day of infection, the culture medium was replaced with complete medium containing lentiviral particles (MOI=20) and polybrene (5 μg/ml). Following 24 hours of infection at 37°C, the viral supernatant was replaced with fresh medium. After an additional 48 hours, the infected cells were treated with 2.0 µg/ml puromycin dihydrochloride (Santa Cruz Biotechnology) for 2 weeks for stable clone selection. The knockdown efficiency was determined through quantitative real-time PCR and Western blot analyses of BEX2 using a rabbit anti-BEX2 polyclonal antibody (1:1000, Proteintech, Chicago, IL).
BEX2 transfection in SW620 cells
Sense cDNA for BEX2 was introduced into the multi-cloning site of the mammalian expression vector pCMV-Myc to construct sense plasmids. The plasmids were then transfected into SW620 cells using the GeneJet™ Plus reagent (SignaGen Laboratory, Rockville, MD, USA) following the manufacturer's protocol. Cells were incubated with the transfection medium for 12 hours, after which the transfection complex-containing medium was gently removed and fresh culture medium was added. The cells were grown for two days, and the transfection efficiency was analyzed before further use.
Cell proliferation assay
Cell proliferation was analyzed using a Cell Counting Kit-8 (CCK-8) (KeyGEN BioTech, Nanjing, China). All cells were seeded into 96-well plates at a density of 5000 cells/well in a 200 µl volume and incubated at 37°C under 5% CO2 for 24, 48, 72, or 96 hours, followed by the addition of 10 μl of CCK-8 solution. The absorbance in each well was measured at 0 and 2 hours using a microculture plate reader at a test wavelength of 450 nm. Four replicate wells were set up in each group, and three independent experiments were performed.
Clone formation assay
Cells in the exponential phase of growth were digested and resuspended in complete medium. The cell suspension was then serially diluted and inoculated into 6-well plates containing 5 ml of medium at a density of 500 cells/well, quantified using a hemocytometer. The culture medium was changed every 3 days. After 2 weeks, clone spheres had formed in the dishes. The cells were then rinsed with 0.01% PBS and fixed with 4% paraformaldehyde for 15 minutes. Next, crystal violet solution was added for 15 minutes, and the samples were rinsed with water and air-dried. The number of clones was counted using ImageJ software [19].
Mice
Balb/c athymic nude mice (SLAC Laboratory Animal Co. Ltd., Shanghai, China) were maintained and subjected to experiments in accordance with the protocols approved by the Animal Care and Use Committee of the Second Affiliated Hospital of Zhejiang University School of Medicine. All of the animal experiments were performed on six-to-eight-week-old female Balb/c athymic nude mice.
Subcutaneous xenograft model
Tumor cells (1×10⁶) were subcutaneously injected into mice. Tumor growth was monitored daily until the tumor was palpable. Then, the tumor diameter was measured with calipers twice a week. Tumor-free survival was measured from the day of tumor inoculation until the long axis of the tumor exceeded 2 mm, and was analyzed using the Kaplan-Meier curve. Mice were euthanized at week 5 following tumor inoculation. The long (L) and short (S) axes of each harvested tumor were measured with calipers. Tumor volume (V) was calculated as V = (L×S²)/2.
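As a quick worked illustration of the volume formula just quoted, the short Python helper below computes V from the two caliper measurements; the function name and the example values are hypothetical, not taken from the study.

def tumor_volume(long_axis_mm: float, short_axis_mm: float) -> float:
    """Tumor volume in mm^3 from caliper measurements: V = (L x S^2) / 2."""
    return long_axis_mm * short_axis_mm ** 2 / 2

# Example: a xenograft measuring 10 mm (long axis) by 6 mm (short axis)
print(tumor_volume(10.0, 6.0))  # 10 * 36 / 2 = 180.0 mm^3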
Colorectal cancer orthotopic model The orthotopic mouse model of colorectal cancer was established using a previously described cecal wall injection technique [20,21]. In brief, nude mice were anesthetized, and the cecum was exteriorized via laparotomy. A total volume of 50 μl of a cell suspension containing 1×10 6 tumor cells was injected into the cecal wall using a 27G needle. After injection, the injection point was slightly pressed with a cotton stick and inspected to ensure no leakage. The cecum was subsequently returned to the abdominal cavity and closed with running sutures. All of the mice were maintained until death caused by the neoplastic process or until the end of the experiment (140 days). All mice were monitored twice daily. Overall mouse survival was analyzed using the Kaplan-Meier curve. Cell cycle analysis Cells were washed twice in ice-cold 10 mM phosphate-buffered saline (pH 7.4) and fixed in 75% ethanol at 4°C for 24 hours. After additional washing, 1×10 6 cells were digested with RNase (50 mg/ml) and stained with 4 µg/ml Hoechst 33342 (Sigma, Aldrich, Hamburg, Germany) for 30 minutes at 37°C. DNA content was determined using a FACSCalibur flow cytometer (Becton Dickinson, San Jose, CA) and Modfit software. All experiments were performed in triplicate and were independently repeated three times. Cell apoptosis analysis A total of 1×10 6 cells were harvested and double-stained with FITC-conjugated annexin V and propidium iodide (PI) using an Annexin V-FITC Apoptosis Detection Kit (Bio-Rad Lab., Hercules, CA, USA), according to the manufacturer's protocol. The cells were analyzed with a BD FACSCalibur Flow Cytometer (Becton Dickinson) within 1 hour of staining. Apoptotic cells were defined as annexinV-FITC-positive and PI-negative cells. All experiments were performed in triplicate and were independently repeated three times. Statistical analysis All of the graphing and statistical analyses, except for the trend of BEX2 expression, were performed using GraphPad Prism version 5.0 software (GraphPad Software, La Jolla, CA). The data are presented as the mean ± standard error of the mean (SEM). The qPCR results from paired clinical samples were analyzed using the two-tailed paired Student's t-test. The trend of BEX2 expression across tumors with different pTNM stages was analyzed via the Cuzick nonparametric test, performed using Stata 13 (StataCorp LP, College Station, TX). Comparison of tumor-free survival between groups was performed using the log-rank (Mantel-Cox) test. The other results were analyzed with the two-tailed unpaired Student's t-test. P values <0.05 were considered statistically significant. BEX2 is highly expressed in more advanced colorectal cancers. To understand the function of BEX2 in colorectal cancer, we first sought to assess BEX2 expression in colorectal cancer tissues. We analyzed the expression of BEX2 in 290 CRC patients in the GSE14333 sample cohort and observed that the expression of BEX2 was significantly correlated with the aggressive characteristics of CRC (Cuzick nonparametric test for trend, p=0.0147, Figure 1A). The clinical characteristics of this cohort are listed in Supplementary Table 1. To confirm our findings, we verified the expression of BEX2 in 34 colorectal cancer specimens via quantitative real-time PCR. 
Higher levels of BEX2 were found to be expressed in more advanced tumors (Cuzick nonparametric test for trends, p<0.05; Figure 1B) when the patients were stratified based on the diagnosis of pTNM staging, where stage I was the least advanced, and TNM stage IV was the most advanced. Down-regulation of BEX2 inhibits colorectal cancer cell growth in vitro. Because BEX2 overexpression was found to be associated with more advanced stages of colorectal cancer, we next examined whether knockdown of BEX2 would inhibit the progression of colorectal cancer in vitro. We first generated the SW620/shBEX2 cell line utilizing a lentiviral shRNA system to inhibit BEX2 expression in the SW620 cell line. BEX2 expression was reduced by 70% in SW620/shBEX2 cells compared with control SW620/Ctrl cells, as assessed via qPCR (Figure 2A) and Western blot (Figure 2B). We next examined the effect of the down-regulation of BEX2 on cell growth using the CCK-8 assay. We observed significant growth inhibition in the SW620/shBEX2 cell line (Student's t-test, p<0.01) (Figure 2C). Similarly, the number of cell colonies in monolayer cultures of SW620/shBEX2 cells was smaller than in the SW620/Ctrl group (Student's t-test, p<0.01) (Figure 2D, 2E). These results suggest that BEX2 knockdown may prevent colorectal cancer progression by suppressing cell growth.
Figure 1. A. The Cuzick nonparametric test for trends was employed to evaluate trends. B. Quantitative real-time RT-PCR was used to measure BEX2 expression levels in 34 colorectal cancer specimens. BEX2 is expressed at higher levels in more advanced tumors (p=0.002). Colorectal cancer tumor tissues were ordered according to pTNM staging: stage I (n=0), stage II (n=7), stage III (n=7) and stage IV (n=10).
Figure 2. Down-regulation of BEX2 inhibits SW620 colorectal cancer cell proliferation in vitro. A. BEX2 mRNA expression was quantified via qPCR. BEX2 expression levels in SW620 cells transduced with BEX2 shRNA (SW620/shBEX2) or with control shRNA (SW620/Ctrl) are shown. GAPDH expression was used for normalization. B. BEX2 expression was examined through Western blot analysis and quantified using ImageJ software. The relative ratio (RR) of BEX2 protein expression in SW620/shBEX2 cells compared with SW620/Ctrl cells is shown. C. The growth curves of SW620/shBEX2 cells and SW620/Ctrl cells were measured using CCK-8 cell proliferation assays. Compared with the control cells, SW620/shBEX2 cells showed significant growth inhibition from the third day onward. D. Quantitative analyses of colony formation were conducted using ImageJ software. Knockdown of BEX2 in SW620 cells resulted in significant inhibition of colony formation. Three independent experiments were performed in triplicate. E. Representative images of colonies of SW620/shBEX2 cells and SW620/Ctrl cells (**p<0.01).
Down-regulation of BEX2 suppresses colorectal cancer growth in vivo. To further assess whether the down-regulation of BEX2 inhibits colorectal cancer growth in vivo, we employed a subcutaneous xenograft model. SW620/shBEX2 cells or SW620/Ctrl cells were subcutaneously inoculated into Balb/c athymic nude mice. The xenografts grew significantly more slowly in the BEX2-knockdown group (p=0.0043) (Figure 3A). Five weeks after inoculation, the xenografts formed by SW620/shBEX2 cells were significantly smaller than those formed by SW620/Ctrl cells (p=0.0047) (Figure 3B, 3C). Within 4 weeks, all of the xenografts in the SW620/Ctrl group had increased in size.
However, half of the mice (n=5) in the SW620/shBEX2 group showed no palpable tumors, even after 5 weeks. The tumor-free survival of Balb/c athymic nude mice inoculated with SW620/shBEX2 cells was significantly longer than that of the SW620/Ctrl group (log-rank test, p=0.0021) (Supplementary Figure 1). We further explored the role of BEX2 in colorectal cancer using a cecum orthotopic model of colorectal cancer. Similar to our previous results, mice injected with SW620/shBEX2 cells survived significantly longer than mice injected with SW620/Ctrl cells (log-rank test, p=0.0169) (Figure 3D). A post-mortem examination was performed for the majority of the animals, and tumor growth was identified as the cause of death. In addition, we validated the down-regulation of BEX2 in tumors from the SW620/shBEX2 group subjected to the subcutaneous xenograft model via both qPCR and Western blotting (Figure 3E, 3F). Taken together, these results indicated that BEX2 knockdown suppressed colorectal cancer proliferation in vivo. BEX2 knockdown inhibits colorectal cancer cell proliferation by inactivating the JNK/c-Jun signaling pathway. Next, we focused on identifying the molecular mechanisms through which BEX2 suppresses colorectal cancer proliferation. It is known that the cell cycle duration, growth fraction and cell apoptosis can affect tumor growth. BEX2 was previously shown to interrupt cell apoptosis and reduce the growth fraction in breast cancer cells [8], leukemic cells [21] and glioma cells [22]. However, flow cytometry assays demonstrated that there was no difference in the percentage of cells in each phase of the cell cycle between SW620/shBEX2 cells and SW620/Ctrl cells (Figure 4A) and that the apoptotic ratio was not increased in SW620 cells after BEX2 knockdown (Figure 4B). Therefore, down-regulation of BEX2 may inhibit colorectal cancer cell growth by extending the cell cycle, rather than by inducing cell cycle arrest or activation of cell apoptosis.
Figure 4. A. Cell cycle analysis showed no significant differences between SW620/shBEX2 cells and SW620/Ctrl cells. B. Knockdown of BEX2 in SW620 cells resulted in no obvious changes in cell apoptosis. C. Analysis of the NF-κB, Akt and MAPK signaling pathways in BEX2-knockdown SW620 cells. NF-κB (p65) and total and phosphorylated Akt and MAPK kinases, including extracellular signal-regulated kinase 1/2 (Erk1/2), p38 and JNK, were detected through Western blot analysis. The results revealed that down-regulation of BEX2 inactivated JNK, which in turn inhibited the activation of the transcription factor c-Jun. D. Analysis of the MAPK/JNK signaling pathway in BEX2-overexpressing SW620 cells. The results showed that SP600125 treatment suppressed BEX2 overexpression-activated JNK/c-Jun phosphorylation. E. Administration of the JNK-specific inhibitor SP600125 to SW620/BEX2 cells suppressed the relatively high phosphorylation of JNK/c-Jun activated by BEX2, thus eliminating the BEX2-induced proliferation advantage. Data are expressed as the mean ± SD (**p<0.01).
Because the NF-κB, Akt and MAPK signaling pathways are the most relevant pathways in the regulation of cell growth, we evaluated the activity of NF-κB, Akt and MAPK in colorectal cancer cells before and after down-regulation of BEX2. Western blot analysis demonstrated that JNK, a mediator of the MAPK pathway, and its downstream transcription factor c-Jun were inactivated, whereas p65, Akt, Erk1/2 and p38 showed no significant changes in SW620 cells in which BEX2 had been stably knocked down (Figure 4C).
To determine whether JNK/c-Jun phosphorylation is required for BEX2-induced proliferation, we overexpressed BEX2 in SW620 cells (SW620/BEX2). BEX2 overexpression was confirmed via qPCR (Supplementary Figure 2A) and Western blotting (Supplementary Figure 2B), which demonstrated a 20-fold increase compared with the control cells (SW620/Vector). BEX2 overexpression promoted cell growth compared with the SW620/Vector (Student's t-test, p<0.01) ( Figure 4D). We subsequently treated SW620/BEX2 cells with the JNK-specific inhibitor SP600125 (10 µM). SP600125 treatment suppressed BEX2 overexpression-activated JNK/c-Jun phosphorylation ( Figure 4E) and eliminated the BEX2-induced proliferative advantage ( Figure 4D). In summary, down-regulation of BEX2 inhibited colorectal cancer cell growth by extending the cell cycle, which was mediated by JNK/c-Jun pathway inactivation. Discussion In the present study, we showed that BEX2 expression was associated with the most advanced stages of colorectal cancer, indicating that BEX2 expression is a causal factor in the progression of colorectal cancer. Subsequently, BEX2 knockdown in a colorectal cancer cell line was demonstrated to suppress cell growth and tumor proliferation both in vitro and in vivo. Furthermore, the effects of BEX2 on colorectal cancer cell proliferation appeared to regulate the cell cycle duration through the JNK/c-Jun pathway. To the best of our knowledge, this is the first study to elucidate the role of BEX2 in colorectal cancer, suggesting that BEX2 might be a novel candidate target for the comprehensive treatment of colorectal cancer. BEX2 is down-regulated in malignant glioma [12] and acute myeloid leukemia [13], and BEX2 re-expression results in significant suppression of tumor growth, supporting the role of BEX2 as a tumor suppressor. However, BEX2 is highly expressed in a subset of estrogen receptor-positive breast cancers [10] and glioblastoma [11] and plays a key role in promoting cell survival and growth in breast cancer cells. The BEX protein family has been reported to contain long regions of intrinsic disorder that may form signaling hubs, and the hubs formed by intrinsically disordered proteins play important roles in cellular differentiation and cancer [6]. Thus, BEX2 appears to play different roles in different tumor types. It is possible that BEX2 expression levels are tightly regulated, and overexpression or down-regulation of BEX2 could lead to unstable cell growth. The present study showed that BEX2 was expressed at higher levels in more advanced tumors ( Figure 1A and 1B), indicating that BEX2 is a key modulator in the proliferation of colorectal cancer. Previous studies have shown that BEX2 plays a critical role in regulating the cell cycle and apoptosis [10,18,22,23]. In breast cancer cells, down-regulation of BEX2 induces mitochondrial apoptosis and results in G1-cell cycle arrest. In glioblastoma cells, BEX2 knockdown also induces apoptosis by activating caspase 9. However, decreasing BEX2 expression was not found to influence apoptosis and the cell cycle in glioma cells [14]. Therefore, BEX2 regulates either the cell cycle or apoptosis in different tumor types. In the present study, neither the cell cycle nor cell apoptosis was affected after BEX2 knockdown in colorectal cancer cells ( Figure 4A, 4B), indicating that BEX2 down-regulation can cause growth suppression due to extension of the cell cycle. 
Our findings suggested that phospho-JNK and phospho-c-Jun were significantly decreased after BEX2 down-regulation in colorectal cancer cells. This result was consistent with a report by Naderi et al. [24] showing that BEX2 exhibits functional interplay with JNK/c-Jun in breast cancer cells. In particular, BEX2 has been identified as a target gene of c-Jun and is necessary for the phosphorylation of c-Jun and JNK kinase activity. In summary, our study demonstrates that BEX2, a novel causal factor in the progression of colorectal cancer, promotes colorectal cancer cell proliferation via the JNK/c-Jun signaling pathway. Future studies will be necessary to investigate whether BEX2 expression is associated with the prognosis of colorectal cancer and to assess the potential of BEX2 as an effective therapeutic target for colorectal cancer. Such studies will provide additional insight into colorectal cancer and provide a rationale for the utilization of innovative therapy targeting BEX2 to improve colorectal cancer treatment.
Recent advances in ultrasonic-assisted machining for the fabrication of micro/nano-textured surfaces In this paper, the state of art of ultrasonic-assisted machining technologies used for fabrication of micro/nano-textured surfaces is reviewed. Diamond machining is the most widely used method in industry for manufacturing precision parts. For fabrication of fine structures on surfaces, conventional diamond machining methods are competitive by considering the precision of structures, but have limitations at machinable structures and machining efficiency, which have been proved to be partly solved by the integration of ultrasonic vibration motion. In this paper, existing ultrasonic-assisted machining methods for fabricating fine surface structures are reviewed and classified, and a rotary ultrasonic texturing (RUT) technology is mainly introduced by presenting the construction of vibration spindles, the texturing principles, and the applications of textured surfaces. Some new ideas and experimental results are presented. Finally, the challenges in using the RUT method to fabricate micro/ nano-textured surfaces are discussed with respect to texturing strategies, machinable structures, and tool wear. Introduction The scientific importance and industrial value of functional micro/nano-textured surfaces are getting increasing attention. Typical examples include optical retroreflective and antireflective structures [1], medical biocompatible [2] and antimicrobial structures [3], and tribological friction tunable structures [4], which have been widely studied and industrially applied. To efficiently fabricate tailored structures on surfaces to get these functional performances is of great industrial value. Recently, the rapid development of biomimetics has also given much inspiration to researchers when designing functional textured surfaces [5]. However, the reproduction or imitation of structures on surfaces of organisms in nature has been found to be a great challenge for engineers, because many useful nature surfaces possess complex hybrid, multi-layer or directional structures with a scale as small as nanometers. Thus, utilizing the existing methods for such purpose would be unfeasible or time-consuming. To satisfy the requirements, researchers are always developing new methods or improving the capability of existing methods. Many technologies have been established for the fabrication of functional textured surfaces [6]. Diamond machining as the most widely used method for manufacturing precision parts is also capable of fabricating various surface structures, which have been illustrated in a published review paper [7]. It can be found that the machinable structures are limited and the machining efficiency is dependent on machined structures. Our group has also been focusing on researching and exploring new diamond machining methods for the efficient fabrication of micro/nano-textured surfaces with high precision. Ultrasonic-assisted machining as a traditional method to machine hard-to-machine materials has gotten our attention. The high-frequency tool-work interaction induced by ultrasonic vibration has been proved to be useful in the fabrication of micro/nano-textured surfaces [8][9][10][11][12]. It can be noticed that some other researchers worldwide have also followed a similar principle in fabricating fine surface structures [13][14][15][16]. 
This paper briefly reviews the methods with the assist of ultrasonic vibration for fabrication of micro/nano-textured surfaces in Section 2, and then mainly addresses a new rotary ultrasonic texturing (RUT) method developed by our group in Sections 3 and 4. The texturing principles, machinable structures, and the merits and demerits of these ultrasonic-assisted machining methods are discussed as well. The entire development processes of the RUT method, including the development of applicable ultrasonic spindles, the calculation of surface generation processes, the designing of diamond tools, and the texturing strategies are focused. This new method provides designers with additional freedom to efficiently fabricate various fine structures on surfaces of different materials. Classification of ultrasonic-assisted texturing methods Ultrasonic-assisted machining refers to mechanical processing methods that apply high-frequency vibration (generally greater than 20 kHz) to a tool or a workpiece with its vibration amplitude ranging from several micrometers to several tens of micrometers [17]. There has been a large amount of research in terms of integrating ultrasonic vibration into conventional mechanical machining processes to achieve machining performances. Only those works aimed at fabricating surface structures will be discussed in this paper. We called ultrasonic-assisted machining for the fabrication of micro/nano-textured surface as ultrasonic-assisted texturing methods, which are classified into two categories according to the role of ultrasonic vibration motion in the present paper. Ultrasonic-assisted machining for the fabrication of structures In the first category, the surface fabrication mechanism is the same as that of conventional diamond machining methods, i.e., removing materials with cutting edges at a high speed to generate structures. The details of the mechanical fabrication processes for surface structures can be found in Ref. [7]. Figure 1 shows the various machinable structures. By integrating ultrasonic vibration motion to the conventional diamond machining methods, material removal efficiency can be improved and tool wear can be minimized, which is the role of ultrasonic vibration motion. Four typical ultrasonic-assisted diamond machining methods integrated with grinding, milling, cutting, and turning, are schematically shown in Fig. 2. There are usually two types of ultrasonic vibration modes, 1D (reciprocating) vibration mode and 2D (elliptical or circular) vibration mode. The integration of ultrasonic vibration to these conventional diamond machining processes can promote the successful fabrication of very fine structures as small as several micrometers or even nanometers dimensions, which is an impossible task without the assistance of ultrasonic vibration. For example, the most widely used elliptical vibration-assisted cutting technology has been used to successfully fabricate different micro/nanostructures on hardened steel [18] and various 3D microstructures on hard plated copper [19]. Figure 3 shows two typical surface structures. Another example, micro-grooves in a hard-to-machine material were successfully fabricated by ultrasonic-assisted grinding using a diamond grinding pin with a diameter of several tens of micrometers [20]. Newly proposed ultrasonic-assisted texturing methods The second category consists of texturing methods that actively modulate the ultrasonic vibration motion. 
The surface structures are generated by controlling every step of the cutting motion at each vibration cycle. These methods can be subdivided into several categories according to the type of integrated conventional diamond machining method. Turning [13][14][15], grooving [21], and rotary machining [10][11][12] have been used to fabricate micro/nano-textured surfaces by integrating ultrasonic vibration motion. The cutting motion is achieved with a rotating cylindrical workpiece, a linearly fed cutting tool, and a rotating cutting tool, respectively. The machinable structures mainly depend on tool geometry, vibration mode, feed path, and their combinations.
Fig. 1 Classification of machinable structures of diamond machining methods by shape and extension. Reprinted from Ref. [7] with permission from Elsevier.
Figure 4 illustrates the principle of ultrasonic-assisted turning processes for micro-texturing [13,14]. 1D and 2D vibration modes have been verified. When the 1D vibration mode is applied, the continuous cutting process is transformed into an intermittent cutting process, allowing for the fabrication of micro-dimples. As for the 2D vibration mode, the authors developed a new elliptical ultrasonic vibration spindle that can make the tool vibrate in the cutting and depth-of-cut directions. By modulating the cutting depth and the vibration amplitude, a high-frequency intermittent contact between the cutting edge and the workpiece was obtained and intentionally controlled for the fabrication of surface meso/micro-textures. Figure 5 shows two typical textured surfaces fabricated with this elliptical vibration-assisted texturing method.
In our group, a novel RUT method was developed by integrating ultrasonic vibration into rotary machining processes. The combination of ultrasonic vibration, rotation, and feed motion can lead to a high-frequency periodic change in the cutting motion, which has been proposed to be deliberately controlled to fabricate micro/nano-textured surfaces. A rotary ultrasonic spindle is theoretically applicable for fabricating structures on surfaces of any shape, making it more feasible than other ultrasonic spindles. However, the conventional rotary ultrasonic spindle can only generate 1D reciprocating vibration along the axis of the spindle, thereby limiting the applicable shapes of the cutting loci in the RUT process. If the diamond tool can freely vibrate in 3D space, a cutting locus with considerably greater flexibility can be obtained, and more textural patterns can be fabricated. Therefore, the structure of the ultrasonic spindle plays a key role in the feasibility of the RUT method. A new 3D rotary ultrasonic spindle was proposed and designed by our group. Figure 6 shows the schematic of the proposed 3D RUT processes. A 3D rotary ultrasonic spindle can generate ultrasonic vibration along all three axes, namely, the longitudinal vibration (LV) along the axis of the spindle, the circular vibration (CV) in the plane perpendicular to the axis of the spindle, and the hybrid vibration (HV) in 3D space. The cutting locus of the cutting edge is deliberately modulated by controlling the resultant motion of the tool rotation, feed motion and vibration for fabricating surface textures. If the texturing parameters are appropriately controlled, micro/nanostructures are expected to be fabricated as schematically shown in Fig. 7.
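To make the resultant motion concrete, the following minimal sketch (our own simplified superposition model, not code from the original work; all parameter names and values are illustrative) computes a cutting-edge locus by adding tool rotation, linear feed, and an ultrasonic vibration term, with the LV and CV components selectable through the amplitude parameters.

```python
import numpy as np

# Simplified superposition model of the cutting-edge locus in rotary ultrasonic
# texturing (RUT): tool rotation + linear feed + ultrasonic vibration.
# All parameter names and numerical values are illustrative assumptions.
def rut_cutting_locus(t, r_tool=2e-3, spindle_rpm=3000.0, feed_rate=1e-3,
                      f_vib=20e3, a_lv=2e-6, a_cv=0.0):
    """Return (x, y, z) coordinates in metres of the cutting edge at times t (s).

    a_lv - amplitude of the longitudinal vibration (LV) along the spindle axis (Z)
    a_cv - amplitude of the circular vibration (CV) in the transverse (XY) plane
    Setting both amplitudes nonzero approximates the hybrid vibration (HV) mode.
    """
    omega_s = 2.0 * np.pi * spindle_rpm / 60.0   # spindle angular speed (rad/s)
    omega_v = 2.0 * np.pi * f_vib                # vibration angular frequency (rad/s)
    x = r_tool * np.cos(omega_s * t) + a_cv * np.cos(omega_v * t) + feed_rate * t
    y = r_tool * np.sin(omega_s * t) + a_cv * np.sin(omega_v * t)
    z = a_lv * np.sin(omega_v * t)               # LV term; zero when only CV is used
    return x, y, z

t = np.linspace(0.0, 2e-3, 20000)                      # 2 ms of motion
x, y, z = rut_cutting_locus(t, a_lv=2e-6, a_cv=1e-6)   # hybrid-vibration example
```

Plotting z along the unrolled cutting path, or y against x, qualitatively reproduces the sinusoidal and looping loci discussed further below.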
By controlling the feed motion, hybrid textured surfaces with primary structures along the feed direction and micro/nanostructures as secondary structures can be fabricated. The 3D RUT technique can potentially be used to fabricate different types of precisely controlled textural patterns at a high speed because of the high frequency of the vibration motion and the mechanical material removal mechanism. Construction of rotary ultrasonic spindles The vibration mode depends on the construction of the ultrasonic vibrator. In our study, a resonant piezoelectric vibrator was selected for manufacturing the rotary ultrasonic spindle. The vibrator was resonated by exciting several combined piezoelectric plates, sandwiched between metal cylindrical horns, with high-frequency electrical signals. The high-frequency electrical energy was converted into mechanical vibration via the resonant piezoelectric transducer (PZT). The horn/tool assembly was used to amplify the vibration amplitude of the tool, because the oscillation amplitude at the face of the PZT was insufficient to achieve a reasonable cutting rate. Figure 8 shows two types of PZT systems for generating two basic ultrasonic vibration modes, namely the LV mode and the bending vibration (BV) mode. The LV mode indicates that the tool vibrates along the axial (Z) direction, and the BV mode indicates that the tool vibrates in the transverse (XY) plane perpendicular to the axis. To generate the LV mode, the PZT utilizes only one set of round piezoelectric plates, as shown in Fig. 8(a). When a sinusoidal voltage is applied to the transducer, the piezoelectric plate expands and contracts, so that the vibrator is resonated and the tool tip attached to the end of the horn vibrates in the LV mode along the Z axis. The vibration amplitude depends on the applied voltage, the material properties of the PZT and the spindle structure. The vibration amplitude is magnified by the horns and maximized at the tool tip. The 1D ultrasonic vibration spindle has been widely used in rotary ultrasonic machining processes. In Fig. 8(b), if two half-round piezoelectric plates are placed on the PZT and two sinusoidal voltages with a 180° phase difference are applied to the piezoelectric plates, the two piezoelectric plates will expand and contract alternately, ultimately causing the tool attached to the end of the horn to vibrate in the bending mode in the XY plane. Different types of ultrasonic vibrators can be developed. Figure 9 shows an ultrasonic vibrator that can generate 2D vibration in the transverse (XY) plane. Four piezoelectric plates with the same resonant frequency are placed on the ultrasonic actuator. The specific shape of the vibration locus depends on the vibration amplitudes and on the phase difference of the applied sinusoidal voltages. If sinusoidal voltages with a 180° phase difference are applied to every two opposite piezoelectric plates, two BV modes can be generated simultaneously. For a specific phase difference of the two BVs, an elliptical or circular vibration mode can be generated in the transverse (XY) plane. For example, the CV mode can be generated by applying 0°, 90°, 180°, and 270° phase-shifted sinusoidal signals having the same amplitude to the four piezoelectric plates in the clockwise direction. In this paper, the PZT systems shown in Figs. 8(a) and 9 were further combined into one ultrasonic vibrator, shown in Fig. 10, and a new 3D hybrid ultrasonic vibrator was designed and manufactured.
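As a quick numerical illustration of the drive scheme just described (a sketch under assumed amplitudes and an assumed linear voltage-to-displacement response, not a model of the actual PZT hardware), the tip displacement from two orthogonal bending responses can be combined for different phase differences:

```python
import numpy as np

# Minimal sketch of how phase-shifted sinusoidal drive signals combine into tool-tip
# motion in the transverse (XY) plane. Amplitudes, the frequency, and the linear
# mapping from drive voltage to displacement are simplifying assumptions.
f = 20e3                              # assumed resonant frequency (Hz)
t = np.linspace(0.0, 2.0 / f, 1000)   # two vibration cycles
ax, ay = 2e-6, 2e-6                   # assumed bending amplitudes along X and Y (m)

def tip_locus(phase_deg):
    """XY tip displacement for two bending modes driven with the given phase difference."""
    phase = np.deg2rad(phase_deg)
    x = ax * np.sin(2 * np.pi * f * t)
    y = ay * np.sin(2 * np.pi * f * t + phase)
    return x, y

x_cv, y_cv = tip_locus(90.0)     # 90 deg phase difference -> circular vibration (CV)
x_line, y_line = tip_locus(0.0)  # in-phase drive -> 1D (linear) bending vibration
```

With a 0° phase difference the two bending responses collapse onto a line, a 90° difference traces the circular locus used for the CV mode, and intermediate phases give ellipses.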
A circular vibration was tuned by modulating the abovementioned parameters for this vibrator. As such, the 3D ultrasonic vibrator can generate the LV mode along the axis (Z direction) of the spindle, the CV mode in the transverse (XY) plane, and the 3D HV in 3D space by simultaneously implementing the LV and CV modes. Figure 11 schematically shows the construction of the ultrasonic vibration spindle. The main spindle is rotationally driven by a motor. The resonant ultrasonic vibrator controlled by an ultrasonic oscillator is attached coaxially to the spindle. Step horns are connected integrally to the ultrasonic vibrator. A tool, such as a grinding wheel or a cutting tool, is then mounted at the tip end of the horn. We used this spindle in conducting RUT experiments to fabricate micro/nano-textured surfaces. Texturing with diamond grinding wheels The RUT method was inspired by an ultrasonic-assisted grinding method [22]. Periodic micro/nano-structures were observed on machined surfaces after ultrasonic-assisted slant-feed grinding (UASG) was performed. The details of its texturing mechanisms can be found in our previous work [8]. Diamond abrasives of irregular shapes on grinding wheels are generally randomly distributed. Therefore, surface structures with various patterns were observed on the machined surface. Figure 12 shows typical textured surfaces after UASG under three types of vibration modes. As shown, sinusoidal structures or structures along the sinusoidal locus were fabricated under the LV mode; periodic micro-dimples or other micro-concave structures were fabricated under the CV mode; and a random-like rough surface was successfully fabricated under the HV mode. Different types of textural patterns were obtained because of the irregular geometrical shapes of the diamond abrasives on the grinding wheels. The calculated cutting loci under the three types of vibration modes shown in Fig. 13 can be used to gain insight into the fabrication of periodic textural patterns or random-like rough surfaces. However, no clear conclusion can be made as to which diamond abrasive can fabricate a specific textural pattern because of the completely random distribution of the diamond abrasives on the grinding wheels. To make the texturing process more controllable, the diamond cutting edges should be regularly distributed at fixed locations on the tool. Material removal mechanisms should be investigated with regard to designing texturing procedures, as well as the tools that can implement the RUT processes under different vibration modes. Therefore, diamond tools with only one cutting tip, referred to as single-point diamond tools, were designed and manufactured. First, electroplated single-point diamond tools were manufactured to examine material removal mechanisms. Specific RUT procedures under the LV and CV modes were developed. The cutting loci were mathematically calculated and drawn to predict the textural features, which are the same as those illustrated in Fig. 13. Electroless nickel-phosphorus (Ni-P) plating, an important molding material for manufacturing plastic and glass optical components, was selected as the workpiece material. The material removal mechanisms under the LV and CV modes were studied by analyzing the relationship between the textural features and the cutting tip geometries. Figure 14 shows typical textured surfaces under the LV and CV modes, respectively.
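A rough back-of-envelope relation, under the simplifying assumption (ours, not a result reported above) that roughly one surface feature is generated per vibration cycle, links the pitch of such periodic patterns to the cutting speed and the vibration frequency:

```python
import math

# Back-of-envelope estimate (an assumption, not a result from the paper): if roughly
# one dimple or undulation is produced per vibration cycle, the feature pitch along
# the cutting direction is approximately the cutting speed divided by the vibration
# frequency. All numerical values below are illustrative.
def feature_pitch(tool_radius_m, spindle_rpm, vibration_hz):
    cutting_speed = 2 * math.pi * tool_radius_m * spindle_rpm / 60.0  # m/s
    return cutting_speed / vibration_hz                               # m

# Example: a 4 mm diameter tool at 3000 rpm with 20 kHz vibration
pitch = feature_pitch(2e-3, 3000, 20e3)
print(f"approximate pitch: {pitch * 1e6:.1f} micrometers")  # ~31.4 um
```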
Although different textured surfaces have been successfully fabricated by using these electroplated one-point diamond tools, the geometries of the diamond cutting edges are not pre-designed. Thus, the structures cannot be tailored to meet the requirements of practical applications. Many studies have investigated the effect of diamond cutting edge's geometry on the machining performance [23]. However, for conventional cutting processes, the study on the different geometries of diamond cutting edges has mainly focused on the micro-shape of the cutting corner, to reduce tool wear, workpiece surface roughness, and subsurface damage, without considering surface texturing. The tool kinematics of the RUT process is fundamentally different from that of the conventional cutting process. The material removal mechanism is completely different because of the high-frequency periodic change of cutting locus. Therefore, new geometrically defined diamond tools were designed and manufactured to fabricate geometrically defined textures on the basis of the obtained knowledge on material removal mechanisms of RUT. Figure 15 shows two typical geometrically defined diamond tools for RUT processes under the LV and CV modes, respectively. To analyze the surface generation mechanisms and predict the 3D textural features of textured surfaces, a surface generation model was established for RUT processes under the LV and CV modes. Obtaining a simulated textured surface is an effective way of predicting the topography of machined surfaces; however, this strategy does not interpret the tool-workpiece interaction. A better understanding of the tool-workpiece interaction during the material removal process is greatly helpful in designing appropriate tools and obtaining good surface quality. Therefore, we utilized a 3D-CAD software to simulate the surface fabrication process by visualizing every step of tool-work interaction; one example is shown in Fig. 16. Discrete structure fabrication can help designers understand the material removal mechanisms. Figure 17 shows two typical textured surfaces fabricated using the above two geometrically defined tools. The features of the textured surfaces can be predicted by the proposed simulation method, and hierarchical textured surfaces can be generated using the RUT technology. Applications of textured surfaces Micro/nano-textured surfaces have been proven useful in many fields because of their functional performance with highly increased surface areas and their novel physical or chemical properties [6]. Nonetheless, the practical application of a specific texturing method is determined by the machinable materials and structures. The RUT method using diamond tools with few limitations at machinable materials can potentially provide solutions in many fields. When using diamond grinding wheels, the RUT method can be used to fabricate structures on the surfaces of hardto-machine materials, such as zirconia ceramics, which is a promising excellent dental restoration material [8]. The textured surface can improve the osseointegration of zirconia dental implants, wherein the regularity of textural patterns is not strict. When using designed single-point diamond tools, the RUT method can be used to generate tailored structures. Thus, the textural patterns can be modulated to obtain specific functions, such as directional textured patterns for inducing directional wetting. 
Hierarchical micro/nano-structures can highly increase surface areas and consequently induce better hydrophilicity of wettable surfaces. Figure 18 shows a directional wettable textured surface fabricated by using the RUT method under LV mode [12]. The theoretical study of the applications of these textured surfaces should be given more attention, particularly the principles for guiding the design of geometrical features to achieve functional performances.
Fig. 16 (a) Discrete steps of material removal processes in RUT and (b) simulated textured surfaces. Reprinted from Ref. [12] with permission from Springer.
Fig. 17 Typical textured surfaces fabricated by using the RUT method with geometrically defined diamond tools under (a) LV and (b) CV vibration modes. Reprinted from Refs. [11,12] with permission from Springer.
Fig. 18 A textured surface and its representative water contact angles, possessing directional wetting properties. Reprinted from Ref. [12] with permission from Springer.
Problems and challenges Although various structures at the micrometer or even nanometer scale have been successfully fabricated using the RUT method, it should be noted that its freedom to fabricate structures required in industries is still limited. Integrating a rotary ultrasonic spindle into a machine tool with more degrees of freedom is a possible way of addressing this problem. Many studies have also reported on fabricating more complex structures with the use of a slow/fast tool servo system by adding more degrees of freedom to a machine tool, and these works have successfully fabricated several hybrid or hierarchical structures [24]. Another strategy is to design diamond tools with more appropriate geometries by considering the material removal mechanisms to achieve more machinable structures. Another feasible approach is to further improve the rotary ultrasonic spindle by making it capable of modulating the vibration frequency and the amplitude with more freedom. Besides, no detailed study is available on the tool wear of ultrasonic-assisted texturing methods. The high-frequency tool-work contact should result in notable tool wear, which must be overcome prior to its industrial application. We proposed to control the depth-of-cut at a constant small value with a designed advanced cutting edge to minimize tool wear [10]. However, this restricts the freedom of tool geometries, thereby limiting the machinable structures. Another way to decrease tool wear is to use tools with better wear resistance. We have attempted to utilize a nano-polycrystalline diamond tool [25] in the RUT processes, and the preliminary results showed that tool wear was much slower than that observed on conventional single-crystal diamond tools. Another problem that should be addressed is that the practical applications of these structures have not been extensively discussed. The principles for modulating the geometrical features of structures to obtain the required functions should be studied with deeper knowledge, which are also scientifically important. Summary and outlook To sum up, the proposed ultrasonic-assisted texturing methods can provide designers with more freedom to create various micro/nano-structures for functional performances. These methods are classified into two categories according to the role of the ultrasonic vibration.
In addition, this paper introduces a rotary ultrasonic texturing method that modulates the cutting locus at each vibration cycle for the fabrication of structures. The RUT method using a rotary ultrasonic spindle has greater feasibility than other methods, because it can fabricate structures not only on flat surfaces but also on curved surfaces. The geometries of diamond tools can be designed to generate tailored structures, particularly hybrid textured surfaces, to meet requirements of functional performance. The geometrical features of textural patterns can be calculated and simulated. The RUT method also has a number of limitations. Integrating the rotary ultrasonic spindle into a machine tool with more degrees of freedom can enable the RUT method to produce more machinable structures. Using nano-polycrystalline diamond tools can address the tool wear problem. In future works, a more robust and flexible ultrasonic-assisted texturing method should be further developed based on the rotary ultrasonic spindle. Moreover, the scientific principle for designing textural patterns for functional performance should be given more attention.
APPLICABLE LAW IN MATRIMONIAL PROPERTY REGIME DISPUTES The paper presents the regulation of the applicable law as determined in Council Regulation (EU) 2016/1103 of 24 June 2016 implementing enhanced cooperation in the area of jurisdiction, applicable law and the recognition and enforcement of decisions in matters of matrimonial property regimes. It concludes that the new EU arrangement has made it easier for spouses to determine the applicable law and evaluates the suitability of the connecting factors provided by Regulation 2016/1103. The paper also challenges the examination of these connecting factors as of the time of the conclusion of the marriage and assumes that their exclusion under exceptional circumstances is difficult to achieve. It compares the connecting factors with those provided by Slovenian and Croatian private international law in theory, and provides practical examples of the differences resulting from the new European arrangement. The paper further examines the hypothesis that the possibility of agreement on the choice of law will cause many problems in practice, and provides possible solutions. Throughout the paper, the system established by Regulation 2016/1103 is compared with other European regulations and the relevant case law of the CJEU, but the author primarily focuses on the changes in Slovenian and Croatian case law caused by the application of Regulation 2016/1103. INTRODUCTION Family relations and the regulation thereof is a sensitive matter. While they are primarily important to individuals, the state's relationship to family relations is delicate as well. The progressiveness of such regulation depends in particular on how developed a society is and on the climate in each respective state. European Union (EU) Member States 1 range across a broad spectrum as regards both progressiveness and social climate, which became evident (yet again) in the procedure for formulating and adopting a single European legal framework for the property consequences of marriage and registered partnerships. Due to uncertainty about matrimonial property regimes, which had posed problems for cross-border spouses exercising their rights, the need to adopt European legislation 2 in this field was made a priority for the first time in the 1998 Vienna Action Plan. 3 Various activities 4 followed, which in March 2011 ultimately resulted in the adoption of the Proposal for a Council Regulation on jurisdiction, applicable law and the recognition and enforcement of decisions in matters of matrimonial property regimes (COM(2011) 126 final) and the Proposal for a Council Regulation on jurisdiction, applicable law and the recognition and enforcement of decisions regarding the property consequences of registered partnerships (COM(2011) 127 final). Since they fall within the domain of family law, the proposals would have had to be confirmed by the Council by a unanimous vote after consultation with the Parliament (Article 81/III of the Treaty on the Functioning of the European Union 5 ), but after almost two years of debate and fierce opposition from some Member States (Poland and Hungary in particular), the Council decided in December 2015 that consensus was not achievable. Immediately thereafter, several Member States expressed willingness to establish enhanced cooperation on this matter. 
In June 2016 the Council approved 6 that and then immediately adopted Council Regulation 1 For an overview of the legal arrangements in some non-EU countries, see Scherpe both Regulations are binding, directly applicable, and completely replace the relevant national rules on private international law. The remaining EU Member States continue to apply national private international law to matrimonial property regimes. They may join the enhanced cooperation at any time in the future, but in order to do so they must accept both regulations. 9 Only Estonia has so far announced a willingness to join, whereas cooperation from any of the other countries is not expected -at least for now. 10 The most frequently cited concern of the non-participating states is that under Regulations 2016/1104 and 2016/1103 a state that does not recognise same-sex marriage and/or registered partnerships between same-sex (or heterosexual) couples would have to recognise such unions concluded in other Member States. 11 There are safeguards against that in Recital 64 of Regulation 2016/1103 (and Recital 64 of Regulation 2016/1104; see also Recital 21 of both Regulations), which stipulates that the recognition and enforcement of a decision on the matrimonial property regime thereunder should not in any way imply recognition of the marriage underlying the matrimonial property regime that gave rise to the decision. Consequently, the participating countries are not required by Regulation 2016/1103 to transpose into national law forms of matrimony that they do not recognise in their national law, nor to recognise a personal status thereunder. Alas, encountering marital forms unrecognised in domestic legislation is inevitable. One source of confusion 12 The problem is not just recognition of unions between same-sex partners, which is the biggest (and very explicit) concern of states that have not joined the enhanced cooperation. It also (for example in Slovenia) concerns recognising a partnership registered by a heterosexual couple abroad. Slovenian law does not provide for such a partnership, which raises the question of whether such should be treated as a marriage or registered partnership, or whether a court should resort to the exception -alternative jurisdiction (Article 9 of Regulations 2016/1104 and 2016/1103) -and decline jurisdiction if it decides that such a partnership cannot be recognised in a specific case for the purposes of matrimonial property regime proceedings. 14 This issue will have to be addressed by case law; absent a response from the Court of Justice of the European Union (CJEU), the case law of the Member States will probably produce conflicting solutions. As of 29 January 2019, Regulation 2016/1103 is thus applicable to matrimonial property regimes -including with regard to the applicable law -in the Member States participating in the enhanced cooperation. The Regulation, for example, gives (future) spouses the option to choose the applicable law, and it provides rules to determine which law shall apply in the event the spouses do not agree on the choice of law. This paper explores whether the new EU arrangement has made it easier for spouses to determine the applicable law and evaluates the suitability of the connecting factors provided by Regulation 2016/1103. 
It also challenges the examination of these connecting factors as of the time of the conclusion of the marriage; they may be excluded under exceptional circumstances, but the paper assumes that the conditions for exclusion are difficult to achieve. It compares the connecting factors with those provided by Slovenian and Croatian private international law in theory, and provides practical examples of differences resulting from the new European arrangement. The paper further examines the hypothesis that the possibility of agreement on the choice of law will cause many problems in practice, and provides possible solutions. Throughout the paper, the system established by Regulation 2016/1103 is compared with other European regulations and the relevant case law of the CJEU. Several authors (e.g. Oprea, Party autonomy and the law applicable to the matrimonial property regimes in Europe -see the list of literature for more; Damascelli, Applicable law, jurisdiction, and recognition of decisions in matters relating to property regimes of spouses and partners in European and Italian private international law; Dolžan, Uredbi (EU) glede premoženjskopravnih razmerij za mednarodne pare -kolizijska pravila; Kunda NATIONAL PRIVATE INTERNATIONAL LAW Regulation 2016/1103 does not determine in which cross-border disputes it is applicable. 15 In countries participating in the enhanced cooperation it replaces the relevant provisions of private international law regardless of the country of origin of the international element that defines this as a cross-border dispute. The regulation is thus not applicable only in disputes with an element from another country participating in the enhanced cooperation; it is also applicable in disputes with an element from other EU Member States and in disputes with an element from third countries. In Slovenia, for example, it completely replaces 16 the relevant provisions of the Private International Law and Procedure Act 17 (Zakon o mednarodnem zasebnem pravu in postopku -ZMZPP). In Croatia, the Act Concerning the Resolution of Conflicts of Laws with the Provisions of Other Countries in Certain Matters 18 (Zakon o rješavanju sukoba zakona s propisima drugih zemalja u ođređenim odnosima -ZRSZPDZ) was in force until 29 January 2019, when it was replaced by the Private International Law Act 19 (Zakon o međunarodnom privatnom pravu -ZMPP). Whereas the former was almost identical to the Slovenian ZMZPP, in the ZMPP the Croatian legislature took a different route regarding the regulation dealt with in this article. Since Regulation 2016/1103 applies to matrimonial property regimes in all cross-border disputes, there is no need for a national regulation on this matter. The Croatian ZMPP thus refers to Regulation 2016/1103 concerning the applicable law and jurisdiction in such disputes (just as it defers to Regulation 2016/1104 for disputes concerning the property consequences of registered partnerships) and does not have its "own" provisions concerning these matters. THE REGULATORY FRAMEWORK OF CONFLICT-OF-LAW RULES When a judge determines that he or she has (international) jurisdiction in a dispute with an international element, he or she then determines which country's substantive law to apply in the ruling. With regard to the law governing (personal and) matrimonial property, the ZMZPP (Article 38) determined nationality as the primary connecting factor. 
The primary applicable law is therefore the law of the state of which (both) spouses are nationals (lex patriae, lex nationalis). This is a changeable yet fairly stable connecting factor that is far more difficult to change than (temporary or permanent) residence. If the spouses are nationals of different states, the law of the state in which they have permanent residence applies (lex domicilii). If the spouses have neither the same nationality nor permanent residence in the same state, the law of the state in which they both had their last permanent residence applies; absent that, the law that is most closely connected to the relationship applies (closest connection). For the latter, it is necessary to consider all of the circumstances of the case in question, e.g. the nationality and residence of the parties, their language, etc. The competent court decides which law is most closely connected to the relationship. The legal framework established by the Croatian ZRSZPDZ was the same, only that instead of the last connecting factor -the closest connection -it referred to the application of Croatian law (Article 36). The almost identical provisions of both laws are an indication of their common roots. In both countries the Yugoslav Act Concerning the Resolution of Conflicts of Laws with the Provisions of Other Countries in Certain Matters (Zakon o ureditvi kolizije zakonov s predpisi drugih držav v določenih razmerjih -ZUKZ) 20 had previously been in force, and both countries subsequently modelled their laws on the resolution of conflicts of laws on (inter alia) the matrimonial property regimes on the Yugoslav precursor. Article 38 of the ZMZPP does not determine at which point in time the existence of connecting factors is examined for the purposes of determining conflict-of-law rules, but these are considered 21 changeable connecting factors that refer to the moment of examination, i.e. the initiation of court proceedings. If during the course of the marriage a circumstance changes (e.g. the spouses acquire or lose nationality or move), this results in a change in the law that would apply if a dispute concerning matrimonial property were to be examined by a court (a changeable factor). 22 Regulation 2016 considered in the choice of conflict-of-law rules that are significantly different than those provided by the ZMZPP. The Council has moved away from nationality as the typical connecting factor in continental law 24 and determined common habitual residence after the conclusion of the marriage as the first relevant factor. This is a fixed, unchangeable connecting factor that subsequent changes (e.g. the relocation of the common habitual residence of the spouses) will not affect, nor do such changes therefore affect the choice of applicable law. The country of common residence should be identified, which makes it possible to use this connecting factor to determine the applicable law even if the spouses have different habitual residences within a single state. 25 This raises the question of how long after the conclusion of the marriage a common habitual residence must be established in order for it to constitute the first connecting factor. Is this a connecting factor if the married spouses settled in the same Member State a month or a year after marriage? Regulation 2016/1103 does not address this, leaving it up to case law; in order to aid case law, Recital 49 only determines that the first common habitual residence shortly after marriage should constitute the first criterion. 
In theory there have been proposals that there should be a period of several months after the conclusion of the marriage during which this condition must be fulfilled, 26 but some are also of the opinion that the period that should count as the first residence after the conclusion of the marriage should not be restricted. 27 In my opinion, it is impossible to specify a time period after the conclusion of the marriage during which the first common habitual residence may be established. In each specific case the decision hinges on the circumstances and it is in the hands of the court, but the author disagrees with the notion that such a condition may be fulfilled at any time after the conclusion of the marriage (e.g. that spouses who got married in their youth do not create the first common habitual residence until after retirement, thereby achieving the first connecting factor for the purposes of the choice of law). Since it is assumed that in most cases spouses will start living together after marriage (or at least in the same country), other connecting factors will be used only rarely in determining the applicable law. If, after marrying, the spouses do not have a common habitual residence in the same country or do not live long or intensely enough in any other country to establish habitual residence there, their matrimonial property regime is subject to the law of the state whose common nationality the spouses had at the time of the conclusion of the marriage. This is a more stable connecting factor than the first one, and it is easier to identify. National law and international conventions are used to identify a person's nationality (Recital 50). If the spouses do not have a common nationality at the time of the conclusion of the marriage, this connecting factor cannot be considered. The result is the same if at the time of the conclusion of the marriage the spouses have multiple common nationalities (subject mixtae), which is in line with the CJEU's position on the equality of nationalities. 28 This is a different approach than that provided by the ZMZPP. For a Slovenian national with multiple nationalities, the ZMZPP, for example, stipulates that for the purposes of the application of the ZMZPP they are considered as having only Slovenian nationality, which is underpinned by the notion that domestic law provides the best legal certainty for nationals of that state. 29 If a person who is not a Slovenian national has multiple nationalities, for the purposes of the ZMZPP he or she is regarded as being a national of the state he or she is a national of and where he or she has permanent residence; 30 if such a person does not have permanent residence in any state whose nationality he or she has, he or she is regarded as being a national of the state whose nationality he or she has and with which he or she has the closest links (Article 10). The same arrangement, only for the benefit of Croatian nationality, is provided by the Croatian ZMPP (Article 3) and its predecessor, the ZRSZPDZ (Article 11). If spouses do not have a common nationality at the time of the conclusion of the marriage or have more than one common nationality, their matrimonial property regime is subject to the law of the state with which, all circumstances considered, both spouses have the closest connection as of the time of the conclusion of the marriage. 31 Regulation 2016/1103 does not provide guidance on when closest links are deemed to have been established. 
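The cascade of objective connecting factors described above can be reduced to a short decision sketch (purely illustrative Python pseudocode; it deliberately ignores choice-of-law agreements, the escape clause discussed below, and the overall judicial assessment that real cases require):

```python
# Purely illustrative sketch of the cascade of objective connecting factors under
# Article 26(1) of Regulation 2016/1103, as described above. It ignores the spouses'
# choice of law and the escape clause, and real cases turn on judicial assessment
# rather than mechanical rules. All function and parameter names are ours.
def applicable_law(first_common_habitual_residence, common_nationalities,
                   state_of_closest_connection):
    """Each argument refers to circumstances at, or shortly after, the conclusion of the marriage."""
    if first_common_habitual_residence:                 # Art. 26(1)(a)
        return first_common_habitual_residence
    if len(common_nationalities) == 1:                  # Art. 26(1)(b); fails if none or several
        return common_nationalities[0]
    return state_of_closest_connection                  # Art. 26(1)(c)

# Example A below: German spouses whose first common habitual residence is Slovenia.
print(applicable_law("Slovenia", ["Germany"], None))    # -> "Slovenia"
```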
32 In each specific case all of the actual and legal circumstances of the spouses as of the conclusion of the marriage are considered: nationality, religion, language, location of assets, etc., which are determined by the competent court. The moment of the conclusion of the marriage is considered the relevant point for the examination of the closest connection. This is an unchangeable factor that determines which circumstances at a specific moment in the past must be considered. 33 Subsequent changes (nationality, residence, etc.) do not affect this connecting factor, and the applicable law may change only by way of the spouses concluding an agreement on the choice of law. 28 Once determined in such manner, the law applies to the spouses' entire property, regardless of whether it is located in multiple countries, whether or not these countries participate in the enhanced cooperation and whether or not they are EU Member States. It also applies notwithstanding the type of property, which provides legal certainty for the parties and prevents the fragmentation of the matrimonial property regime (Recital 43). However, reference (regarding all connecting factors) to the moment of the conclusion of the marriage (the second and third connecting factors) or the time after the conclusion of the marriage (the first connecting factor) has an important shortcoming. Even in the event of subsequent significant life changes, during the course of court proceedings the spouses may no longer have a connection to the state whose law will apply. For example, after getting married a German man and Slovenian woman move to Rome for a year due to the wife's training. They then move to Austria, where they live for the next 20 years until they divorce. In a matrimonial property regime procedure, the court of jurisdiction will be in Austria (Article 6/a of Regulation 2016/1103), which will have to apply Italian law (Article 26/I(a) of Regulation 2016/1103), even though the spouses do not (any longer) have a connection to Italy. To address such situations, Regulation 2016/1103 provides an escape clause. The court can avoid applying the law of the state of first habitual common residence after the conclusion of the marriage under certain conditions (Article 26/III). 34 Such a solution is possible only when proposed to the court by one of the spouses. He or she must demonstrate that the last habitual residence lasted longer than the first, whereby the court will examine whether it was indeed significantly longer. He or she must also demonstrate the existence of past spousal conduct such that it provides evidence of his or her reference to the law of that state, which is difficult if the other spouse opposes the application of this law. If the other spouse agrees with the application of this law, he or she may actively demonstrate his or her past conduct in this direction, but his or her (explicit) consent to the use of this exception is not required. This exception from the principle of permanence can thus be applied to avoid the impractical application of a law not connected to the dispute; however, it is possible only in specific court procedures when permitted by the court. In the above-mentioned case, Austrian law will therefore be applied provided one of the spouses proposes its application and the conditions are satisfied (and it is possible to determine with certainty the existence of a significantly longer residence in Austria). 35 Examples A) German nationals move to Slovenia in 2015 and get married. 
In October 2019 they divorce. The (ex) husband then moves to Germany and the (ex) wife 34 Poretti sees the application of this provision as creating the possibility of using the law of the same state for both succession and matrimonial property regime disputes between (former) spouses. For details, see Poretti, P., op. cit., p. 464. 35 This creates problems with ex tunc application of the thusly chosen law and the impact on thirdparty rights, but that is beyond the scope of this paper. stays in Slovenia. The husband wishes to initiate proceedings for the division of the matrimonial property. Which law will be applied? This paper does not deal with issues concerning international jurisdiction. But since it is impossible to determine the applicable law without determining international jurisdiction, the subsequent examples also provide solutions for that. In this specific case, the court of (international) jurisdiction is in Slovenia under Regulation 2016/1103 (because the court procedure was initiated (on or) after 29 January 2019 -see the next section) (Article 6/b). If the ZMZPP is applied, the applicable law is determined pursuant to Article 38/I: the law of the state of which the spouses are nationals is applied, which means that German law 36 will be applied. If Regulation 2016/1103 is applied, under Article 26/I(a) the applicable law is the law of the state in which the spouses had their first common habitual residence, in this case Slovenian law. B) A Slovenian national and a German national settle in Slovenia, where they marry. They divorce in October 2019. The German (ex) wife then moves to Germany and the Slovenian (ex) husband remains in Slovenia. The husband wants to initiate proceedings for the division of the matrimonial property. Which law will be applied? Pursuant to Regulation 2016/1103 (because court proceedings were initiated (on or) after 29 January 2019 -see the next section), the court in Slovenia has (international) jurisdiction (Article 6/b). If the ZMZPP is applied, the applicable law is determined pursuant to Article 38/III because the (former) spouses are nationals of different states and do not have residence in the same state. The dispute must be resolved according to the law of the state where they had their last common residence -i.e. Slovenian law. If Regulation 2016/1103 is applied, the law applicable to the matrimonial property will be the law of the state in which the spouses had their first common habitual residence after the conclusion of the marriage -i.e. Slovenian law. C) Two Slovenian nationals move to Austria, where they marry. In October 2019 they divorce, whereupon the (ex) husband moves back to Slovenia and the (ex) wife to Germany. The wife wants to initiate proceedings for the division of the matrimonial property. Which law will apply? Pursuant to Regulation 2016/1103 (because court proceedings were initiated (on or) after 29 January 2019 -see the next section), the court in Slovenia has (international) jurisdiction (Article 6/c). If the ZMZPP is applied, since they have a common nationality, the dispute is resolved using the law of the state of which the spouses are nationals (Article 38/I)i.e. Slovenian law. If Regulation 2016/1103 is applied, the law applicable to the matrimonial property will be the law of the state in which the spouses had their first common habitual residence after the conclusion of the marriage -i.e. Slovenian law. D) Two nationals of the United States move to Slovenia, where they marry. 
In October 2019 they divorce but both stay in Slovenia. The wife wants to initiate proceedings for the division of the matrimonial property. Which law will apply? Pursuant to Regulation 2016/1103 (because court proceedings were initiated (on or) after 29 January 2019 -see the next section), the court in Slovenia has (international) jurisdiction (Article 6/a). If the ZMZPP is applied, since the spouses have common nationality, the dispute is resolved using the law of the state of which the spouses are nationals (Article 38/I) -i.e. United States law. If Regulation 2016/1103 is applied, the applicable law will be the law of the state in which the spouses had their first common habitual residence after the conclusion of the marriage -i.e. Slovenian law. E) A Slovenian national and a German national settle in Slovenia, where they marry. After a year they move to Austria, where they live another year and a half. In October 2019 they divorce, whereupon the (ex) wife moves to Germany and the (ex) husband to Slovenia. The wife wants to initiate proceedings for the division of the matrimonial property. Which law will apply? Pursuant to Regulation 2016/1103 (because court proceedings were initiated (on or) after 29 January 2019 -see the next section), the court in Slovenia has (international) jurisdiction (Article 6/c). If the ZMZPP is applied, due to the non-existence of connecting factors under Article 38/I, II, the law of the state of the last common residence applies (Article 38/ III) -i.e. Austrian law. If Regulation 2016/1103 is applied, the applicable law will be the law of the state in which the spouses had their first common habitual residence after the conclusion of the marriage -i.e. Slovenian law. If one of the spouses proposes and both spouses invoke Austrian law, the court may apply Austrian law if it decided that residence in Austria lasting a year and a half is significantly longer than a one-year residence in Slovenia. Comment: In all of the above cases, the essential element in choosing the right answer as to which law applies is that Slovenian courts have international jurisdiction. This occurs when the (former) spouses have habitual residence in Slovenia at the time the court is seised, when they last had common residence in Slovenia and one of them still resides there, when the defendant has habitual residence in Slovenia at the time the court is seised, or when the spouses are Slovenian nationals at the time the court is seised. Connecting factors are used in cascading order (Article 6 of Regulation 2016/1103), but notwithstanding how the international jurisdiction of Slovenian courts is determined, the answer with regard to which law applies is the same. Regulation 2016/1103 applies (provided marriage was concluded on or after 29 January 2019 -see the next section) regardless of which state the "foreign" element in the dispute comes from. In disputes that continue to apply national private international law, bilateral or multilateral conventions between countries participating in the enhanced cooperation and others (Article 62 of Regulation 2016/1103) must be considered in the choice of applicable law. In Slovenia, international conventions will thus be used instead of the ZMZPP in disputes with elements from Hungary, Mongolia, Romania, Poland, or Slovakia. 
37 The ratione temporis of Regulation 2016/1103 In the examples listed above, solutions regarding the application of law depend on whether the ZMZPP (or the previously valid Croatian ZRSZPDZ) or Regulation 2016/1103 is applied. When to apply one or the other depends on the ratione temporis of Regulation 2016/1103, which is determined in the transitional provisions (Article 69). The Regulation applies only to court proceedings 38 initiated on or after 29 January 2019. If court proceedings were initiated before that date, jurisdiction is subject to national private international law. However, decisions in such procedures (initiated before 29 January 2019) adopted after this date are recognised and enforced in accordance with Regulation 2016/1103 as long as the rules of jurisdiction that have been applied comply with those set out in Regulation 2016/1103. In the examples described above, Regulation 2016/1103 applies for jurisdiction because the proceedings were initiated in (or after) October 2019 (which means on or after 29 January 2019). The Regulation 2016/1103 rules on the applicable law are applied if the marriage was concluded on or after 29 January 2019. Even when the marriage was concluded before then, Regulation 2016/1103 applies if the spouses agreed on a choice of law applicable to their matrimonial property regime after this date. This is not the case if the spouses (merely) agreed on the international jurisdiction or the applicable matrimonial property regime. This provision was different when Regulation 2016/1103 was adopted in that the conflict-of-law chapter applied to marriages or choice-of-law agreements concluded after 29 January 2019; however, this created a discrepancy with the provision on the application of the remaining chapters of the Regulation. Less than a month after Regulation 2016/1103 was adopted, the Corrigendum to Council Regulation (EU) 2016/1103 of 24 June 2016 implementing enhanced cooperation in the area of jurisdiction, applicable law and the recognition and enforcement of decisions in matters of matrimonial property regimes 39 was therefore adopted to eliminate the discrepancy. In the above-mentioned cases, the applicable law is thus determined in accordance with Regulation 2016/1103 if the marriage was concluded on or after 29 January 2019. If it was concluded before then, the ZMZPP applies. Considering the substance of the transitional provisions, it is clear that the national rules on private international law of the Member States participating in the enhanced cooperation will continue to apply in court disputes for a considerable amount of time (as long as there are marriages concluded before 29 January 2019). 37 This is because Slovenia has concluded international agreements in this field with these countries, none of which participate in the enhanced cooperation. See Rudolf, C., op. cit., p. 956. 38 The same applies to authentic instruments and court settlements formally drawn up or registered, approved, or concluded on or after 29 January 2019. 39 OJ EU L 183 of 8 July 2016.

Example: An Austrian national and a Slovenian national married in 2010 and lived in Austria for three years after that. In 2013 they moved to Slovenia, where they continued living until they divorced in 2019. The ex-wife moved back to Austria and the ex-husband remained in Slovenia. In 2020 she initiated court proceedings for the division of the matrimonial property. Which legal source is applied to determine the international jurisdiction and the applicable law?
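Because the outcomes in the examples turn on these transitional rules, the following sketch expresses the ratione temporis described above as two small date checks, one for jurisdiction and one for the applicable law. It is an illustration of the cutoff dates only; the function names are hypothetical and the usual caveats about legal advice apply.

```python
# Illustrative sketch: the transitional rules of Article 69 as date checks.
from datetime import date
from typing import Optional

CUTOFF = date(2019, 1, 29)

def source_for_jurisdiction(proceedings_instituted: date) -> str:
    """The Regulation governs jurisdiction in proceedings instituted on or after the cutoff."""
    return "Regulation 2016/1103" if proceedings_instituted >= CUTOFF else "national PIL (e.g. ZMZPP)"

def source_for_applicable_law(marriage_concluded: date,
                              choice_of_law_agreed: Optional[date] = None) -> str:
    """The conflict-of-law chapter applies if the marriage was concluded, or a
    choice-of-law agreement was made, on or after the cutoff."""
    if marriage_concluded >= CUTOFF:
        return "Regulation 2016/1103"
    if choice_of_law_agreed and choice_of_law_agreed >= CUTOFF:
        return "Regulation 2016/1103"
    return "national PIL (e.g. ZMZPP)"

# The closing example: marriage concluded in 2010, proceedings initiated in 2020.
print(source_for_jurisdiction(date(2020, 3, 1)))     # Regulation 2016/1103
print(source_for_applicable_law(date(2010, 6, 1)))   # national PIL (e.g. ZMZPP)
```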
Regulation 2016/1103 is applied to determine the jurisdiction in court proceedings initiated on or after 29 January 2019. Pursuant to Article 6/b, jurisdiction will lie with the courts in Slovenia since that is where their last habitual residence was and where the husband continues to reside. Regulation 2016/1103 is applied for the applicable law if the marriage was concluded on or after 29 January 2019. Given that this condition is not satisfied in this specific case, the applicable law will be determined under the Slovenian ZMZPP using the connecting factor under Article 38/III, which stipulates that the law of the state in which they both had their last permanent residence shall apply -i.e. Slovenian law. The solution would be different if the applicable law was determined under Regulation 2016/1103, where the connecting factors depend on the time of the conclusion of the marriage. In this case, Article 26/I(a) of Regulation 2016/1103 and Austrian law would apply since the spouses had their first common habitual residence in Austria. This case illustrates how in practice there will be situations where the jurisdiction is determined under Regulation 2016/1103 and the applicable law under the national rules on private international law. Such a discrepancy will create confusion and will not take advantage of Regulation 2016/1103, which strives to connect both elements (see, e.g., Article 7/I). MATRIMONIAL PROPERTY RELATIONSHIPS The applicable law considerations described above do not apply if the spouses chose the law applicable to their matrimonial property regime before the marriage, at the time of the conclusion of the marriage, or during the course of the marriage (Recital 45). The law they chose applies to all legal issues concerning their matrimonial property (for details, see Article 27 of Regulation 2016/1103). When choosing the law, the spouses may also agree on a specific property regime in the chosen national law; if they do not, the default matrimonial regime provided by the national law applies. 40 Greater autonomy of the parties gives the spouses more flexibility and improves legal certainty; however, the free will of spouses as regards the choice of law is fairly limited compared to the autonomy of parties in international contract law. Connecting a factor to the time of the conclusion of the agreement makes the connecting factor stable notwithstanding potential future change. If the spouses wish to avoid the chosen law in the future, they may change their agreement, which most commonly happens when they have chosen the law of the state of their habitual residence but later moved. It is significantly faster and easier to change habitual residence than it is to change nationality, which represents a more permanent connection and hence longer satisfaction with the chosen law. However, as regards the connecting factor of nationality, there is the question of whether they may choose the law of any state of which they are nationals if one or both of them are nationals of multiple states. Recital 50 of Regulation 2016/1103 explicitly determines that consideration of a person having multiple nationalities falls outside the scope of the regulation and should be left to national law or international conventions, in full observance of the principles of the EU. 42 For persons with multiple nationalities, the majority of national law arrangements prefer one nationality -typically the nationality of the given state (see above for the regulation thereof in the ZMZPP). 
The conclusion, then, would be that if one or both spouses have multiple nationalities, they may only choose the law of the state of one of their nationalities as determined by the national law where the agreement is made; thus, nationality and hence the law of the state in which the agreement was concluded have indirect precedence. However, such reasoning contravenes the case law of the CJEU, which emphasises the equality of nationalities. 43 Such a rule would also require the spouses to ascertain -at the time of concluding the agreement on the choice of law -which law they may choose based on the national provisions of the state in which the agreement is made. This makes it more difficult to conclude an agreement on the choice of law and, due to the lack of clarity, affects legal certainty. On the other hand, inquiries about a preference for a certain nationality over another would allow the spouses to conclude the agreement in a state whose national law would prefer the law of that specific nationality that the spouses may want to choose. By selecting the law of the state under which to conclude the agreement, the spouses would therefore indirectly select the applicable law. There are views in theory 44 that in applying Recital 50 it is necessary to invoke the provision thereof which refers to "observance of the general principles of the Union" and consider CJEU case law to therefore allow spouses with multiple nationalities to choose the law of any of the states they are nationals of. Another argument speaking in favour of this is the grammatical explanation of Recital 50: it invokes the application of national law when 42 The solution is different in Regulation 650/2012, which explicitly determines the opposite. As the law to govern his or her succession as a whole, a person with multiple nationalities may choose the law of a State whose nationality he or she possesses at the time of making the choice or at the time of death (Article 22). 43 In cases C-369/90, Micheletti, C-148/02, Garcia Avello, and C-168/08, Laszlo Hadadi v. Mesko, the CJEU more or less directly emphasised the equality of all the nationalities a person has. 44 Similar in Oprea, E. A., op. cit., p. 585. Regulation 2016/1103 determines nationality as a connecting factor, which is, strictly speaking, not the case when it comes to the choice of law. That is because nationality is not an objective connecting factor that Regulation 2016/1103 invokes per se; it is rather a connecting factor that may be chosen by the spouses. 45 The conclusion, then, is that Recital 50 does not provide for the situations under Article 22/I(b) of Regulation 2016/1103. Furthermore, another argument in favour of such reasoning, in author's opinion, is the final part of Recital 50, which states that this consideration should have no effect on the validity of a choice-of-law agreement made in accordance with this Regulation. This leads to the conclusion that the reference to national rules from the first part of Recital 50 does not refer to a spousal agreement on the choice of law. If a spouse has multiple nationalities, it is therefore possible to choose the law of any state whose nationality he or she has. 
46 (Future) spouses may select the law of a state in which one or both of them have habitual residence at the time of the conclusion of the agreement, or the law of a state of which one of the (future) spouses is a national at the time of the conclusion of the agreement, but they cannot agree to use the law of the state with which they have the closest connection as of the conclusion of the marriage (see Article 26/I(c)). This makes sense: at the time of the conclusion of the agreement, a vague connecting factor would create uncertainty for the spouses in determining which state they have the closest connection with. The ZMZPP also limits parties' free will in choosing the applicable law, but in a different way and more narrowly than Regulation 2016/1103. Freedom is allowed only to the extent provided by the law that would apply to their matrimonial property regime (Article 39). The ZMZPP determines that the law applicable to choice-of-law agreements is identified as of the time the connecting factors are established. Notwithstanding possible subsequent changes, the connecting factors are determined with regard to the circumstances at the conclusion of the agreement, 47 which constitutes a sensible connection with the choice-of-law agreement for which the applicable law is determined. Restriction of the parties' autonomy under the ZMZPP thus requires a substantive ruling on the law that would apply to the specific choice-of-law agreement, whereas Regulation 2016/1103 itself specifically determines which law the parties may choose. For spouses who would like to conclude such an agreement, the provisions of Regulation 2016/1103 are significantly simpler and more comprehensible. However, the parties still cannot avoid examining national law rules when concluding an agreement, since they must consider the Regulation's formal requirements with regard to such an agreement. The now invalid ZRSZPDZ (Article 37) required exactly the same. For the purposes of legal certainty, Regulation 2016/1103 primarily determines that the choice of law applies from the time of the adoption thereof (ex nunc), which may be problematic in practice in the event the agreement is concluded during the marriage. 48 That is because, for the same property, a matrimonial property regime provided by the substantive law of one state (determined in accordance with Article 26) is used until the agreement is concluded, whereupon the law of the state chosen by the parties applies (and consequently the matrimonial property regime determined therein). To avoid that, Regulation 2016/1103 makes it possible for the spouses to conclude an agreement with retroactive effect, but this may not adversely affect the rights of third parties. 49 Formal requirements Aside from the substantive restrictions as to which law the parties may choose, Regulation 2016/1103 also determines the formal requirements that an agreement must satisfy to be valid. This ensures that the spouses are aware of the seriousness of the agreement and of its content (Recital 47). The agreement must therefore be expressed in writing, dated, and signed by both parties; any communication by electronic means that provides a durable record of the agreement is deemed equivalent (Article 23 of Regulation 2016/1103). 50 An agreement concluded using customary electronic messages is not valid; if it is made in electronic form, it must be signed by both parties with secure electronic signatures.
These are the same requirements that also apply to choice-ofcourt jurisdiction, which is sensible since it is likely that the spouses will agree to the international jurisdiction as well as the conflict-of-law rules in a single document in the event of a matrimonial property dispute. Regulation 2016/1103 provides the same for a (potential) spousal agreement on the matrimonial property regime, whereby it additionally requires fulfilment of the formal requirements provided by the law applicable to the matrimonial property regime (Article 25). 51 In addition to the requirements under Regulation 2016/1103 with regard to the validity of the agreement on the choice of law, it is also necessary to consider the potentially stricter national rules of a Member State (participating in the enhanced cooperation) in which both spouses have habitual residence when the agreement is concluded. If their habitual residence is in different states both of which participate in the enhanced cooperation, and the national law of one of them has different formal requirements, the spousal agreement must satisfy the requirements of the law of one of these states. But if only one of the spouses has habitual residence in a Member State participating in the enhanced cooperation, the agreement must satisfy the formal requirements of that Member State. Importantly, the time of the conclusion of the 48 This is not an issue if the spouses select the applicable law before or when concluding the marriage, because it is not until that point that their matrimonial property relationship begins. 49 This issue is beyond the scope of this paper. agreement is essential in deciding on satisfaction of the requirements, which ensures predictability and legal certainty for the spouses. Regulation 2016/1103 does not lay down special rules if both parties are residents of countries that do not participate in the enhanced cooperation. Absent provisions to the contrary, in such cases only the formal requirements determined by the Regulation itself apply. 52 Accordingly, spouses with residence in a third country will have to satisfy only the formal requirements of Regulation 2016/1103, whereas if at least one of the spouses has habitual residence in a state participating in the enhanced cooperation, additional requirements may have to be satisfied (provided that conditions stricter than being in written form and signature by the parties are required for the validity of such an agreement). If the requirements are not satisfied, the agreement is invalid and conflict-of-law rules are determined in accordance with Article 26 of Regulation 2016/1103. The explicit nature of the formal requirements therein shows that a silent agreement on the choice of law is not possible. 53 Spouses who wish to conclude a choice-of-law agreement must first establish which state's formal requirements for the validity thereof they must satisfy. These requirements may well be governed by a different law than that applicable to their agreement (e.g. if they choose the law of a state of which one of them is a national). Despite the provision that the scope of the Regulation excludes the legal capacity of spouses (Article 1/II(a)), consent to and the material validity of the agreement are determined under the law chosen in the choice-of-law agreement. An exception is provided for a situation in which one of the spouses claims he or she did not consent to the choice of law. 
In such a case, his or her consent and the material validity are determined under the law of the state in which this spouse has habitual residence at the time the court is seised (Article 24/II). 54 But considering that being in written form and signature by both spouses is required for such an agreement to be valid, such cases will probably not be common. 55 The first solution (Article 24/I of Regulation 2016/1103) has been criticised in theory because the choice-of-law agreement cannot be examined under the chosen law if it is not proven that the choice was valid, 56 but this rule is also easily applied by the parties, 57 and it is provided in certain other EU regulations as well. 58 The second solution (Article 24/II of Regulation 2016/1103) has been the subject of some criticism in theory as well. A spouse who wishes to malevolently dispute his or her consent to the choice-of-law agreement may intentionally change his or her residence so that at the time the court is seised it will apply the national law whose provisions regarding the validity of consent suit this spouse. There have therefore been suggestions that it may be more appropriate to instead examine the validity of consent at the time it was provided. 59 However, the court nevertheless determines in each specific case which circumstances must exist to conclude that the effects of such spousal action should be examined according to the selected law. Change of a choice-of-law agreement Spouses may subsequently dissolve or change a choice-of-law agreement due to a change in life circumstances. Even though Regulation 2016/1103 does not explicitly provide that, dissolution triggers a new choice-of-law procedure under Article 26 thereof. The Regulation also does not determine when or for what reason the agreement may be dissolved. The conclusion, then, is that the spouses may decide to change the applicable law before or when concluding the marriage, or during the marriage. Notwithstanding their original choice of law and choice of connecting factor, what is important is that in making a new choice of law they use the connecting factors and observe the formal requirements as provided by Regulation 2016/1103; in the author's opinion, in doing so they are not restricted by any specific conditions or requirements that national law may provide with regard to changes in the choice-of-law agreement. The newly chosen law applies as of the time the choice-of-law agreement is concluded; during the validity of the original agreement the matrimonial property is subject thereto, and before that it may fall under the law determined by Article 26 of Regulation 2016/1103. In practice, such cases will cause problems when different laws apply to matrimonial property in different time periods. This can be avoided with an agreement as to the ex tunc validity of the new choice-of-law agreement (application by analogy of Article 22/II). 60 Such an agreement may, however, cause problems in that it might affect rights acquired under one matrimonial property regime that a spouse would no longer have under the matrimonial property regime in the newly agreed upon applicable law. It is also necessary to be mindful of third parties, which enjoy protection under the above-mentioned Articles 22/I and 28/I. EXCLUSION OF RENVOI In invoking foreign law, differences in conflict-of-law rules may create problems when conflict-of-law rules refer back or forward, whereupon one of the subsequent laws refers to a law that had already been considered.
The question, then, is whether in the event of reference to the application of a foreign law such foreign law is used in its entirety -its conflict-of-law rules included -or whether only substantive law applies. The Slovenian ZMZPP and the now invalid Croatian ZRSZPDZ both include renvoi provisions. 61 They both stipulate in their respective Article 6 that the conflict-of-law rules of the referenced national law must be applied. But if the referenced law refers back, the substantive provisions of the referenced law are directly applicable. 62 The Croatian ZMPP, on the other hand, excludes renvoi, 63 just like Regulation 2016/1103. When Regulation 2016/1103 refers to the application of a national law (even of a state not participating in the enhanced cooperation or not a member of the EU -Article 20), this entails that the substantive law of such state must be applied, excluding the rules of private international law of that state (Article 32). 64 The only exceptions are the incompatibility of the law of the state with the public policy of the country in which the procedure is ongoing (Article 31) and explicit permission to apply the overriding mandatory provisions 65 of the law of the forum (Article 30).

Under the Slovenian Family Code, (future) spouses may conclude an agreement on matrimonial property 69 (the simpler yet not entirely correct terms "nuptial agreement" 70 and "prenuptial agreement" are commonly used in practice). In doing so, they may choose the law (including foreign law) applicable to their matrimonial property and settle all matrimonial property issues for the duration of their union and in the event of divorce. 71 The spouses may thus partially or entirely circumvent the validity of the statutory matrimonial property regime and completely independently devise a matrimonial property regime applicable to them, whereby the Family Code does not describe or list possible matrimonial regimes. 72 Such an agreement must be entered into a register. The legal arrangement, 73 and in particular the public nature of the information contained in such agreements, has, however, become a subject of public criticism and there are already discussions underway indicating that it may be changed. 74
Evaluation of base course consisting of soil bags filled with fine-grained soil using the dynamic cone penetration test
In African countries, locally available material-based and labour-based approaches are regarded as among the most practical measures for improving the trafficability of earth and gravel roads. As one of these approaches, a base course reinforcement method using Do-nou, which is the Japanese term for soil bag, has been developed. In this study, the bearing capacity of a base course built with Do-nou has been examined through Dynamic Cone Penetration (DCP) tests. A series of full-size driving tests was conducted with varying base structures and compaction methods. The results of the DCP tests show that, only in the case of the Do-nou-reinforced base with manual compaction, the strength distribution balance of the base course and subgrade within 800 mm of the surface shifted from averagely balanced to well balanced after being subjected to traffic load. In the other two cases, the balance remained average. This indicates that, by reinforcing the soil material with Do-nou bags, a manually compacted base course keeps sufficient bearing capacity and a well-balanced strength profile in depth compared with those conventionally designed and constructed with equipment.
Introduction
In African countries, road networks connecting major cities and urban areas have been developed by the public sector for macroeconomic development. However, such socioeconomic benefits frequently do not reach all rural areas, as evidenced by the poor condition of rural roads, which are the lifeline of rural communities as they connect households to social services and markets. Considering the limitations of delivering public services in developing countries, Fukubayashi and Kimura [1] discussed an approach to improving rural access roads involving the self-reliance of communities along the roads. For this purpose, one of the main challenges for geotechnical engineers has been to build a road base course without equipment for compaction and using non-qualified base course materials on a soft subgrade. Thus, a spot improvement method with reinforcement of the base course material with do-nou, the Japanese term for soil bag, was developed [1]. Fig. 1 presents a standard cross-sectional view of a road constructed with do-nou. Bags used for storing fertilizers, crops, etc., locally available in developing countries, were utilized as do-nou bags. The effect of do-nou on soil reinforcement and the corresponding soil reinforcement mechanism was proposed and theoretically quantified by Matsuoka and Liu [2]. Spot improvement using the do-nou method has been applied to several roads (Fig. 2) and the practicability of this method has been confirmed [3]. In this paper, the performance of the base course built with Do-nou was evaluated with the Dynamic Cone Penetrometer (DCP) method, which was proposed for application to low-volume road design by the Research for Community Access Partnership programme funded by UK Aid. A series of full-size driving tests was conducted varying the base course material, structure, compaction method and moisture content. The results of the cases in which construction-generated soil was utilized as the base course material were presented by Fukubayashi et al. [4]; here, the results of the cases in which crushed stone was utilized are presented in terms of the DCP analysis for comparison.
Full-size model driving tests
Full-size model driving tests were conducted in the Kibana Agricultural Science Station of the Faculty of Agriculture at the University of Miyazaki in Japan [4]. The in-situ soil was classified as sandy elastic silt according to ASTM Standard D2487 through grain size distribution analysis. The physical and mechanical characteristics of the subgrade soil are presented in Table 1. According to a design manual for low-volume roads developed in African countries (in this paper the Ethiopian manual is referred to [5]), the subgrade, whose California bearing ratio (CBR) value was measured as 4.0, was classified as S2, the lowest class in the design manual prescribing the gravel base thickness. In all driving tests conducted in this study, a 2-ton truck with an empty load was used as the traffic load, and the number of continuous passes in one cycle was set to 300. The 300 passes were considered to represent the annual average daily traffic, i.e. 150 passes in terms of round trips. Referring to the procedure to determine the design traffic class outlined in the design manual [5], the design traffic class was set to the class ranging from 0.01 to 0.1 million cumulative equivalent standard axle loads.
Base course materials and do-nou bags
In rural Africa, mechanically stabilized aggregate meeting the requirements stipulated in the road design manual, such as crushed stone, is often unavailable. Therefore, in this test series, construction-generated soil (silty sand with gravel), one of the locally available materials, was utilized as a base course material even though it did not necessarily comply with the requirements in the manual. In order to provide a comparison for the base course built with this selected material, the driving test was also conducted on a base course built with crushed stone. The physical and mechanical characteristics of the two base course materials are presented in Table 1. Bags made from polyethene with the specifications listed in Table 2 were used as do-nou. These bags can be obtained even in rural areas because bags with this specification are utilized for wrapping 25 kg of fertilizer, crops, etc. After the contents, such as crops and fertilizers, have been used, the empty bags are collected and utilized as Do-nou bags for road maintenance.
Base course structure and compaction method
The thickness of the base course consisting of each material was decided with reference to the design catalogue in the design manual [5], based on the strength of the subgrade of the test field, the traffic class calculated from the number of passes of the 2-ton truck, and the modified CBR value of each base course material. The thicknesses of the base course with well-graded gravel and of that with silty sand with gravel were decided as 225 mm and 250 mm, respectively. For each material, the base course was constructed in three ways as shown in Fig. 3. In all the cases, the moisture content of the material was adjusted to the optimum moisture content and the material was spread in three layers whose thickness was less than 100 mm. The first method followed the construction management requirements of the manual [5]. Each layer was compacted with a hand roller of 600 kg weight, and the whole surface was subjected to 6 passes of the roller. The second method is the conventional road improvement practice carried out by communities themselves without any equipment. Each layer was compacted manually with just a wooden rammer of 10 kg weight.
20 blows with the rammer were applied per 0.4 m square area. The third method applies do-nou to reinforce the base course material while remaining suitable for labour-intensive works. A certain volume of the base course material, measured with a locally available bucket, was put in the do-nou bags and the open end was tied in a consistent manner. After being laid on the ground, each filled do-nou bag was compacted with an approximately 10 kg hand rammer, applying 20 blows manually. When the dimensions of the compacted do-nou are confirmed to be 40 cm in length and width and 9 cm in thickness, it is considered empirically that the bags are tense enough to reinforce the base course material inside them sufficiently [4]. The first layer of do-nou was laid and compacted, then the voids between the adjacent compacted do-nou were filled with base course material and again compacted manually. Then, the second layer of do-nou was laid and the same procedures were repeated. A covering layer of the base course material was applied on the surface of the second do-nou layer so that the total thickness of the base course reached the same thickness as those constructed with the other two methods. All the cases of the full-size model driving tests performed in this study are summarized in Table 3. The compaction degrees of the base course material and of the material for the covering layer shown in Table 3 were examined by measuring the moisture content and the density with the drive cylinder method (AASHTO T204-90).
Traffic loading and measuring items
The test field was 3 m in width and 5 m in length. In each case of the driving test, all passes were made continuously at a speed between 3 and 4 km/h, driving forward and in reverse along a channelized tire path. The in-situ strength of the base course and subgrade to a depth of 800 mm was examined before and after the cyclic traffic loading of each case using the DCP with the specifications of ASTM D6951-M18. After the driving test, to examine the compaction effect caused by the traffic loading, the DCP tests were conducted at the channelized tire path, which was intensively subjected to the traffic loading. Ampadu et al. [6] have introduced several applications of the DCP, such as a readily available tool for rapid verification of the levels of compaction on road projects. Pinard et al. [7] have proposed an alternative method of pavement design for low-volume roads in which the original DCP number (DN), the penetration rate in mm/blow, is developed into fully balanced layer strength diagrams for various traffic categories of unpaved road.
Dynamic cone penetrometer strength profile
The DCP strength profiles, i.e. the distributions of the DCP Number (DN) value (mm/blow) with depth before and after the driving tests, are shown in Fig. 4(a) for Cases 1-3 of the base course with well-graded gravel and in Fig. 4(b) for Cases 4-6 of the base course with silty sand with gravel. A typical result from the three penetration tests for each case is shown; considering the limitations of the DCP, the variation among these results was reasonably small. The criterion for roads where fewer than 2 heavy vehicles per day pass, which is the minimum category in the design manual for low-volume roads using the DCP [7], is also shown in Fig. 4. The integrated DN, defined as shown in Fig. 4(b), is tabulated in Tables 4 and 5 together with the rate of decrease due to tyre-pass compaction.
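As a rough illustration of how the quantities discussed here are obtained from raw DCP readings, the sketch below computes a DN profile (penetration rate in mm/blow) and an integrated DN over the upper 800 mm. The data layout and numbers are made up for the example, and taking the integrated DN as the area under the DN-depth curve is only one plausible reading of the definition given in Fig. 4(b); these are not the measurements reported in this study.

```python
# Illustrative sketch (hypothetical data layout): DN profile and integrated DN
# from cumulative blow counts and cumulative penetration readings of a DCP test.
import numpy as np

def dn_profile(cum_blows, cum_penetration_mm):
    """DN between successive readings: incremental penetration / incremental blows."""
    blows = np.diff(np.asarray(cum_blows, float))
    pen = np.diff(np.asarray(cum_penetration_mm, float))
    dn = pen / blows                                     # mm/blow per reading interval
    mid_depth = 0.5 * (np.asarray(cum_penetration_mm, float)[:-1] +
                       np.asarray(cum_penetration_mm, float)[1:])
    return mid_depth, dn

def integrated_dn(cum_blows, cum_penetration_mm, depth_limit_mm=800.0):
    """Area under the DN-depth curve down to depth_limit_mm (trapezoidal rule)."""
    depth, dn = dn_profile(cum_blows, cum_penetration_mm)
    mask = depth <= depth_limit_mm
    return np.trapz(dn[mask], depth[mask])

# Example with made-up readings (cumulative blows, cumulative penetration in mm)
blows = [0, 5, 10, 20, 35, 60, 100]
pen = [0, 60, 130, 260, 420, 620, 810]
print(dn_profile(blows, pen))
print(integrated_dn(blows, pen))
```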
With both base course materials, before the driving test, only the base course compacted with the equipment satisfied the criterion in the base course section. Through compaction by the vehicle passes, the upper 200 mm from the base course surface compacted manually, with and without do-nou, was strengthened and met the criterion. In Case-3, manual compaction with the reinforcement of do-nou bags, the integrated DN was improved by 75% after the vehicle-pass loading, compared with 42% in Case-2 and 19% in Case-1. For the cases of the base course built with silty sand with gravel, the decrease of the integrated DN was around 30% in Cases 4-6, even though the compaction and reinforcement methods differed. In terms of the DN value, when well-graded soil is used as the base course material, do-nou reinforcement shows a large improvement effect, while in the case of silty sand with gravel the effect is similar to that of the non-reinforced base.
Balance curve based on the results of the dynamic cone penetrometer (DCP) survey
Referring to the DCP-DN method [7], the measured profiles of DN values were converted to balance curves and are shown in Fig. 5 together with the standard pavement balance curves, whose parameters vary from 0 to 90. For drawing the balance curves, the number of DCP blows required to reach a certain depth, expressed as the DCP Structure Number (DSN), was taken as a percentage of the number of DCP blows needed to penetrate the pavement to a depth of 800 mm, which is defined as the DCP Structure Number at 800 mm depth (DSN800). Pinard et al. [7] determined the standard pavement balance curves from a formula in which DSN denotes the pavement structure number (% of DSN800) at the given depth, B is a parameter defining the standard pavement balance curve, and D is the pavement depth (%). According to the DCP-DN method [7], the pavement structure can be classified by the nearest balance curve parameter (B) and the deviation (A) between the standard pavement balance curve identified as the best fit and the measured balance curve. The best-fitting balance curve parameter (B) and the deviation before and after the traffic loading for Cases 1-6 were obtained with the AfCAP LVR DCP software v1.04 [8] from the DCP survey results and are summarized in Tables 6 and 7. For a gravel road base course, it is said empirically that a balance curve whose B value is around 35 represents the most reasonable balance [7]. Only when the base course was compacted with the pedestrian roller, for both well-graded gravel and silty sand with gravel, did the balance curve parameter B exceed 35 before traffic loading. When the base course was compacted manually, with and without do-nou reinforcement, the B values reached more than 35 after the vehicle-pass loading. In Cases 1-3 the B values turned out to be over 40, while in Cases 4-6 they were in the range between 35 and 37. Due to the traffic loading on the base course, the top part was well compacted and strengthened, with the result that the upper base course layers contribute to the overall strength in all the cases. The B values increased to around 35 for Cases 5 and 6, while the deviation A decreased, approaching a better-balanced base course structure. The base course built with equipment showed a B value of more than 35 before the traffic loading. With the reinforcement of do-nou, the base course in Case-6 after traffic loading reached the most balanced structure among the six cases.
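A measured balance curve of the kind described above can be prepared from the same readings by expressing the cumulative blow count as a percentage of DSN800 against depth as a percentage of 800 mm. The sketch below does only this preparatory step; fitting the standard curve parameter B and the deviation A is left to the AfCAP LVR DCP software cited in the text, and the input data are again hypothetical.

```python
# Illustrative sketch: converting a DCP profile into a measured balance curve,
# i.e. DSN (cumulative blows as % of DSN800) against depth as % of 800 mm.
import numpy as np

def balance_curve(cum_blows, cum_penetration_mm, full_depth_mm=800.0):
    blows = np.asarray(cum_blows, dtype=float)
    depth = np.asarray(cum_penetration_mm, dtype=float)
    dsn800 = np.interp(full_depth_mm, depth, blows)    # blows needed to reach 800 mm
    depth_pct = 100.0 * np.clip(depth, 0, full_depth_mm) / full_depth_mm
    dsn_pct = 100.0 * np.clip(blows / dsn800, 0, 1)
    return depth_pct, dsn_pct

blows = [0, 5, 10, 20, 35, 60, 100]
pen = [0, 60, 130, 260, 420, 620, 810]
d_pct, dsn_pct = balance_curve(blows, pen)
print(list(zip(d_pct.round(1), dsn_pct.round(1))))
```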
As shown in Tables 6 and 7, when the base course was reinforced with do-nou, the deviation from the balance curves was the smallest among the three cases for each base course material. It can be said that reinforcement with do-nou contributes to a well-balanced strength profile of the base course.
Conclusion
The full-size model driving tests were conducted to examine the performance of the base course reinforced with do-nou and compacted manually, compared with base courses conventionally designed and constructed according to the low-volume road design manual. The following findings were mainly obtained: (1) Just after construction of the base course, the DCP strength profiles of the base courses compacted manually, both with and without reinforcement of do-nou, did not satisfy the criteria in the design manual, while only that compacted with the equipment did. However, after traffic loading, the top 200 mm of the base course at the tire path was well compacted, and all the structures satisfied the criteria. (2) Based on the DCP survey results, the base course reinforced with do-nou showed the most balanced strength curve over the depth of 800 mm after 300 passes. (3) The DCP results before driving showed that the bases of all the cases were averagely balanced in depth, while after loading, only the results measured at the wheel track of the base reinforced with Do-nou showed a well-balanced profile in depth. It can be said that, by reinforcing soil material with Do-nou bags, a manually compacted base course keeps sufficient bearing capacity and a well-balanced strength profile in depth compared with those conventionally designed and constructed with equipment.
Cosmic crystallography: the hyperbolic isometries
All orientation preserving isometries of the hyperbolic three-space are studied, and the probability density of conjugate pair separations for each isometry is presented. The study is relevant for the cosmic crystallography, and is the theoretical counterpart of the mean histograms arising from computer simulations of the isometries.
Introduction
Cosmic crystallography is a method to help finding the geometry and the topology of the universe [1]. In a close analysis of the method, a description has been presented of how each isometry of the universe gives its individual contribution to a pair separation histogram (PSH) [2]. More recently, the isometries of the infinite 3D euclidean space were studied in some detail, and the expected (theoretical) individual contribution of each isometry to the PSH was described [3]. The present report investigates the orientation preserving isometries of H 3 , the 3D infinite hyperbolic space with positive definite metric and unitary radius of negative curvature. To generalize for arbitrary negative curvature one simply needs to divide every quantity with dimension length by the radius of curvature. All the results obtained clearly reproduce their euclidean counterparts when the radius of curvature tends to infinity.
The probability density of conjugate pair separations
In H 3 we assume a spherical solid ball B. Under an isometry g of H 3 the ball occupies a new position B g ; we only consider isometries such that the balls B and B g intersect. Assuming a point P ∈ B and denoting as P g ∈ B g its g-transported, we call the pair (P, P g ) a g-pair. We focus our attention on the g-pairs such that P g ∈ B ∩ B g , and assume an infinity of points P g uniformly distributed throughout the intersection B ∩ B g . For the given isometry g, we ask for the probability P B g (l)dl that a randomly selected g-pair has hyperbolic separation lying between the values l and l + dl; the probability density P B g (l) clearly satisfies the normalization condition ∫_0^{2a} P B g (l) dl = 1 , (1) where 2a is the diametre of the balls.
Some basic formulas
A few useful formulas of the hyperbolic trigonometry in 2D are worth having at hand. In a geodetic triangle with sides measuring a, b, c and corresponding opposite angles measuring α, β, γ, we have
• the law of sines sin α / sinh a = sin β / sinh b = sin γ / sinh c ; (2)
• the first law of cosines (cyclic) cosh a = cosh b cosh c − sinh b sinh c cos α ; (3)
• the second law of cosines (cyclic) cos α = − (cos β cos γ − sin β sin γ cosh a) . (4)
We also need a few 2D relations between lengths of arcs of geodesics, horocycles and equidistant curves to a geodesic; see figure 1 for visualization: e ′ = g cosh r, h ′ = h cosh r, h = 2 sinh (g/2), e^f = cosh(g/2), tanh p ′ = tanh p cosh q, (5) and as a consequence sinh (g ′ /2) = sinh (g/2) cosh r . (6)
Figure 1 Some geometric objects in the plane hyperbolic geometry. (a) g, g ′ , and r are geodetic arcs; h and h ′ are arcs of horocycles; e ′ is an arc equidistant to g; the arcs r are perpendicular to g and to e ′ ; the geodetic arc f is orthogonal to both g and h. (b) a geodetic quadrilateral with three right angles, assuming sinh p sinh q < 1.
We often use the line element of H 3 in the cylindrical coordinates, ds^2 = dρ^2 + sinh^2 ρ dφ^2 + cosh^2 ρ dζ^2 ; (7) we also use the representation of H 3 as the quadric W^2 − X^2 − Y^2 − Z^2 = 1 (8) embedded in flat Minkowski space, where W = cosh ρ cosh ζ, X = sinh ρ cos φ, Y = sinh ρ sin φ, Z = cosh ρ sinh ζ . (9)
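As a quick numerical illustration of the coordinates introduced above, the sketch below computes the separation of two points of H 3 through the Minkowski embedding and checks it against the cylindrical-coordinate specialization used in the next section (eq. (11)); it assumes the reconstructed forms of the line element and embedding quoted here.

```python
# Numerical sketch: hyperbolic separation of two points of H^3 given in the
# cylindrical coordinates (rho, phi, zeta), via the Minkowski embedding W, X, Y, Z.
import numpy as np

def embed(rho, phi, zeta):
    return np.array([np.cosh(rho) * np.cosh(zeta),
                     np.sinh(rho) * np.cos(phi),
                     np.sinh(rho) * np.sin(phi),
                     np.cosh(rho) * np.sinh(zeta)])

def separation(p1, p2):
    W1, X1, Y1, Z1 = embed(*p1)
    W2, X2, Y2, Z2 = embed(*p2)
    return np.arccosh(W1 * W2 - X1 * X2 - Y1 * Y2 - Z1 * Z2)

# Check against the special case (11): P1 = (rho1, 0, 0), P2 = (rho2, phi, g)
rho1, rho2, phi, g = 0.7, 1.1, 0.4, 0.9
l = separation((rho1, 0.0, 0.0), (rho2, phi, g))
rhs = (np.cosh(rho1) * np.cosh(rho2) * np.cosh(g)
       - np.sinh(rho1) * np.sinh(rho2) * np.cos(phi))
print(np.isclose(np.cosh(l), rhs))   # True
```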
In these coordinates the separation l between two points P 1 , P 2 is given by cosh l = W 1 W 2 − X 1 X 2 − Y 1 Y 2 − Z 1 Z 2 = cosh ρ 1 cosh ρ 2 cosh(ζ 1 − ζ 2 ) − sinh ρ 1 sinh ρ 2 cos(φ 1 − φ 2 ) . (10) For future reference, consider the special situation of the three points of figure 2; the separation l between the points P 1 and P 2 is then given by (10), namely cosh l = cosh ρ 1 cosh ρ 2 cosh g − sinh ρ 1 sinh ρ 2 cos φ . (11)
Figure 2 Relative position of points P 1 and P 2 with hyperbolic separation l.
The isometries of H 3
We are presently interested in the isometries of H 3 that preserve the orientation. These isometries can be classified as
• hyperbolic, with 5 parametres; these isometries bear some similarity with the 3-parametric euclidean translations;
• elliptic, also with 5 parametres; they are analogous to the also 5-parametric euclidean rotations in ordinary space;
• screw motions, with 6 parametres; they bear some similarity with the also 6-parametric euclidean screw motions;
• parabolic, with only 4 parametres; again they remind us of the euclidean translations.
In each of these isometries the intersection of the balls B and B g is a rotationally symmetric solid lens, whose thickness T , diametre D = 2R, and volume V B g we now seek. We denote as a the radius of the balls, and m = 2M the separation between their centres C and C g ; then we clearly have M < a and (see figure 3) T = 2(a − M) . (12) The separation m depends on the isometry g one is concerned with. Noting that a, M, and R make a right-angled triangle with hypotenuse a, we find cosh a = cosh M cosh R . (13) To have the volume V B g of the solid lens B ∩ B g we first consider a compact cylindrical surface C y embedded in the lens, and whose axis coincides with that of the lens (see figure 4); all points of C y are at a fixed distance y from the axis, so the geometry on C y is 2D euclidean. Denoting as e the length of the generatrices (arcs equidistant to the geodetic axis), the area of C y is S(y) = 2πe sinh y . (14)
Figure 4 Sketch of a compact cylinder C y inscribed in the solid lens B ∩ B g ; it has radius y and generatrices e.
We must now relate e with a, M, and the variable radius y. In figure 4 we note a right angled triangle with sides a (hypotenuse), y, and M + x, so we have cosh a = cosh y cosh(M + x) ; (15) since from eq. (5a) we have e/2 = x cosh y , then e = 2 cosh y [ cosh^{-1}( cosh a / cosh y ) − M ] , and S B g (y) = 4π sinh y cosh y [ cosh^{-1}( cosh a / cosh y ) − M ] . (16) The volume V B g of the lens is clearly V B g = ∫_0^R S B g (y) dy . (17) It can be checked that when M = 0 we get the hyperbolic volume π(sinh 2a − 2a) of a solid ball with radius a, as expected. Also note that for small values of a and M we recover the euclidean volume (2π/3)(a − M)^2 (2a + M) of the solid lens [3].
Special translations
Preceding the study of the general hyperbolic isometry of H 3 we first consider the very special situation in which the axis L of the isometry g crosses the centre C of the ball B. With a = the radius of B, and t = the value of the translation along the axis, we assume t < 2a to have nonvanishing intersection B ∩ B g . In this special isometry we clearly have the equality m = t. According to eq.(6) and figure 1, a point P at a distance r from the axis L is displaced under t to a distance l given by sinh (l/2) = sinh (t/2) cosh r ; (18) this is a relation involving the variable l (displacement of P ), the variable r (distance from P to the axis L), and the parametre t (the unique relevant one in this special isometry). We next introduce the probability Q B g (r)dr that a randomly chosen point P g which is in both B and B g be in a radial position between r and r + dr.
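Since eqs. (16)-(17) were reconstructed here from the surrounding text, a small numerical check is useful: the sketch below integrates the lens volume and verifies the quoted M = 0 limit π(sinh 2a − 2a). It is an illustrative verification under those reconstructions, not part of the original paper.

```python
# Numerical sketch: volume of the lens B ∩ B_g from the reconstructed eqs. (16)-(17),
# checked against the ball volume pi*(sinh 2a - 2a) when M -> 0.
import numpy as np
from scipy.integrate import quad

def lens_volume(a, M):
    R = np.arccosh(np.cosh(a) / np.cosh(M))                  # eq. (13)
    integrand = lambda y: 4 * np.pi * np.sinh(y) * np.cosh(y) * \
        (np.arccosh(max(np.cosh(a) / np.cosh(y), 1.0)) - M)  # eq. (16)
    V, _ = quad(integrand, 0.0, R)                           # eq. (17)
    return V

a = 1.5
print(lens_volume(a, 0.0))                   # lens degenerates into the full ball
print(np.pi * (np.sinh(2 * a) - 2 * a))      # closed form, should agree
print(lens_volume(a, 0.6))                   # a proper lens, 0 < M < a
```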
The probability density Q B g (r) clearly is proportional to the area S B g (r) of the cylinder C r inscribed in the solid lens B ∩ B g (see eq.(16)), the coefficient of proportionality being the inverse of the volume V B g of the lens: Q B g (r) = S B g (r) / V B g . (19) The equality of the probabilities P B g (l)dl and Q B g (r)dr then gives, using r(l) obtained from (18), P B g (l) = Q B g (r(l)) dr/dl = [ S B g (r(l)) / V B g ] dr/dl . (20) In the figure 5 we have four instances of P B g (l). Each plot starts abruptly at l = t and vanishes when sinh(l/2) = cosh a tanh(t/2). They greatly differ from that of a euclidean translation, where P B g (l) = δ(l − t), a Dirac δ.
Special screw motions
It is very simple to generalize the probability density (20) to further have a rotation ω of B around the axis L of the translation. See figure 6. The separation l between a point P and the corresponding P g is now given by (11), where we replace ρ 1 = ρ 2 → r, g → t, φ → ω: cosh l = cosh^2 r cosh t − sinh^2 r cos ω . (21) For fixed t and ω this gives r(l), from which we derive dr/dl. Since neither the volume V B g nor the areas S B g (r) depend on ω in this special screw motion of B, the density Q B g (r) is again given by (19). The density P B g (l) is then P B g (l) = [ S B g (r(l)) / V B g ] dr/dl , (22) with r(l) now obtained from (21) and with V B g as given in (17) with M = t/2 . A few sample plots of P B g (l) are given in figure 7.
Figure 7 Probability densities P B g (l) of pair separations for special screw motions g = (t, ω) of a ball B with radius a, when the axis of the isometry crosses the centre of the ball. In (a) we took t = 0 (no translation), a = 1.8, and ω = π; in (b), t = 0.3, a = 1.5, and ω = π/8; in (c), t = 1.2, a = 2.0, and ω = π/8; and in (d), t = 3.2, a = 2.0, and ω = π. Each plot starts at l = t, abruptly except when we have pure rotation (t = 0, case (a)). All plots end when cosh l = cosh^2 ρ cosh t − sinh^2 ρ cos ω, with cosh ρ = cosh a sech(t/2). All integrated areas are unitary.
We clearly recover the equation (20) when ω = 0. On the other hand, setting t = 0 in (22) gives the P B g (l) for a special elliptic isometry, namely a pure rotation ω of the ball B when the axis of the rotation contains the centre of the ball.
Parabolic motions
To describe a parabolic isometry g of H 3 we need first to announce its 2-parametric apex A, a point at infinity. Next we select an arbitrary point C of H 3 , and draw the unique horosphere C with centre A and containing C. Then, starting from C we mark an arc of horocycle with length µ, lying on C; the direction of the arc and the value of µ demand two new parametres and finally fix the isometry g. The horocyclic separation between C and its g-transported C g being µ, the corresponding geodetic separation m is given by eq.(5c): µ = 2 sinh (m/2) . (24) We now consider a solid ball B with radius a and centre C; clearly there is no loss of generality in this last choice. The two parametres (a, m) suffice to completely describe P B g (l). Denote as r the geodetic altitude of a point P of H 3 relative to the horosphere C; r is counted positive if P is outside C, and negative if inside. Also draw the horosphere C r with apex A and intersecting P . Under the isometry g all points of C r are equally displaced along horocyclic arcs lying on C r , and measuring λ = µ e^r ; (25) equivalently, since λ = 2 sinh (l/2) by eq.(5c), the geodetic separation l between P and P g is given by (see figure 8) sinh (l/2) = e^r sinh (m/2) . (26)
Figure 8 A parabolic isometry g of H 3 brings the points C and P to C g and P g , respectively; m and l are geodetic arcs, µ and λ are horocyclic arcs; r are parallel geodetic arcs orthogonal to both µ and λ; all arcs lie in a same H 2 .
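The densities of this section lend themselves to a simple numerical consistency check. The sketch below assembles P B g (l) for a special translation from eqs. (16)-(20) as reconstructed above and verifies that it integrates to unity between l = t and the quoted upper limit sinh(l/2) = cosh a tanh(t/2). Again, this is only an illustration under the stated reconstructions.

```python
# Numerical sketch: P_Bg(l) for a special translation, assembled from the
# reconstructed eqs. (16)-(20), and a check of its normalization.
import numpy as np
from scipy.integrate import quad

def lens_volume(a, M):
    R = np.arccosh(np.cosh(a) / np.cosh(M))
    f = lambda y: 4 * np.pi * np.sinh(y) * np.cosh(y) * \
        (np.arccosh(max(np.cosh(a) / np.cosh(y), 1.0)) - M)
    return quad(f, 0.0, R)[0]

def density_special_translation(l, a, t):
    r = np.arccosh(np.sinh(l / 2) / np.sinh(t / 2))                    # eq. (18)
    dr_dl = np.cosh(l / 2) / (2 * np.sinh(t / 2) * np.sinh(r))         # from (18)
    S = 4 * np.pi * np.sinh(r) * np.cosh(r) * \
        (np.arccosh(np.cosh(a) / np.cosh(r)) - t / 2)                  # eq. (16), M = t/2
    return S * dr_dl / lens_volume(a, t / 2)                           # eqs. (19)-(20)

a, t = 2.0, 0.8
l_max = 2 * np.arcsinh(np.cosh(a) * np.tanh(t / 2))
total, _ = quad(lambda l: density_special_translation(l, a, t), t, l_max)
print(total)   # should be close to 1
```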
For future use we compute dr/dl from (26), with m fixed: The horosphere C r intersects each solid ball B and B g in flat circular disks D r and D rg , both with radius ρ. To have ρ as a function of r and a we introduce an auxiliary variable s (see figure 9) and solve the system which gives Our geometric situation is now phrased in the following terms: in a 2D flat plane (the horosphere C r ) we have two circular disks D r and D rg , both with radius ρ, whose centres are separated by λ. We ask for the probability R B g (r)dr that a randomly chosen g-pair (P, P g ), such that P g ∈ B ∩ B g , has altitude between r and r + dr. Clearly the probability density R B g (r) is proportional to the area S B g (r) of the intersection D r ∩D rg , the coefficient of proportionality being the inverse of the volume V B g of the solid lens B ∩ B g : The euclidean area S B g (r) is simple to obtain (see figure 10), it is with λ = 2 sinh(l/2), ρ(r) as in (29), and cos α(r) = λ/(2ρ) . Since the probabilities P B g (l)dl and R B g (r)dr are the same, we finally have See figure 11, where examples of P B g (l) for parabolic isometries are reproduced. In each plot we have l max and l min given respectively by sinh(l/2) = tanh(m/2)e ±R , with cosh R = cosh a sech(m/2). General translations We now generalize the special translations of section 5. We consider a hyperbolic isometry g of H 3 whose axis is ζ, and value t measured along the axis. The solid ball B with radius a now has centre C at a distance b from the axis; in section 5 we assumed the special value b = 0. Under the isometry g the centre C g of the new ball B g is separated m from C; according to (6), we have sinh(m/2) = sinh(t/2) cosh b. We clearly have nonempty intersection B ∩ B g only when m < 2a; values of the parametres t, b, and a interesting for our purposes then obey the constraint The thickness T , radius R, and volume V B g of the solid lens B ∩ B g are still given by (12), (13), and (17), with 2M = m(t, b) as in (35). To obtain the probability density P B g (l) we follow the same four steps as described in ref. [3]. The first step is to investigate the shape of the surface B ∩ C r , where C r is the infinitely long cylinder with axis ζ and radius r. The surface B ∩ C r is either a topological annulus (if b + r < a), or a topological disk (if a, b, and r can form a triangle), or is empty (if a < |b − r|). To have the dimensions of B ∩ C r we consider a generic point B = (r, φ, ζ) of its contour; we note that the distance from B to the centre C = (b, 0, 0) of B is the radius a, then (11) gives ζ(a, b, r, φ) according to cosh a = cosh b cosh r cosh ζ − sinh b sinh r cos φ; (37) the variable half width z(φ) of the intersection is then (eq.(5a) with e ′ → z and g → ζ) where α = cosh a cosh b cosh r , β = tanh b tanh r. For fixed values of a, b, and r, the intersection B ∩ C r lies between the curves z(φ) and −z(φ). Figure 12(a) depicts an annulus-like intersection B ∩ C r , which occurs whenever 0 < r < a − b (equivalently α > β + 1); note that the equator of the annulus measures 2π sinh r, due to the azimuthal factor g φφ = sinh 2 ρ in (7). Clearly the extremes −π and π of φ are identified. On the other hand, figure 13(a) shows a disk-like intersection B ∩ C r , which occurs whenever α < β + 1, with unequal radii z max and φ max sinh r, with The second step is to investigate the shape of the combined intersection B ∩ B g ∩ C r . 
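Before turning to that second step, one remark on the parabolic case: the displayed form of eq. (31) did not survive extraction, but the text specifies that S B g (r) is the flat overlap of two disks of common radius ρ whose centres are λ apart, with cos α = λ/(2ρ). The sketch below reconstructs that overlap with the standard Euclidean result 2ρ²(α − sin α cos α); this is a reconstruction, not the paper's displayed equation, and ρ(r), given by eq. (29), is treated here as an input.

```python
# Hedged reconstruction of the area in eq. (31): the Euclidean overlap of the two
# disks D_r and D_rg of common radius rho whose centres are a distance lam apart,
# with cos(alpha) = lam/(2*rho).
import numpy as np

def disk_overlap_area(rho, lam):
    """Area of the intersection of two coplanar disks of radius rho, centres lam apart."""
    if lam >= 2.0 * rho:                       # the disks do not overlap
        return 0.0
    alpha = np.arccos(lam / (2.0 * rho))
    return 2.0 * rho**2 * (alpha - np.sin(alpha) * np.cos(alpha))

# Limiting cases: a full disk when the centres coincide, zero when they are tangent.
print(disk_overlap_area(1.0, 0.0), np.pi)
print(disk_overlap_area(1.0, 2.0))
```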
To examine the possible occurrence of annulus-like intersections B ∩B g ∩C r we project the centres of B and B g on the ζ axis, and consider the midpoint of these projections; if this midpoint lies inside the solid balls, that is, if cosh a > cosh b cosh t/2, then annulus-like intersections B ∩ B g ∩ C r may occur. Otherwise all intersections B ∩ B g ∩ C r are disk-like. Clearly B ∩ B g ∩ C r is the intersection of B ∩ C r with B g ∩ C r , and B g ∩ C r is an exact copy of B ∩ C r , only longitudinally displaced a horocyclic distance t cosh r along C r . When B ∩ C r is annulus-like (0 < r < a − b), then B ∩ B g ∩ C r is either annulus-like (when cosh(t/2) < α − β, see figure 12(b), or disk-like (when α − β < cosh(t/2) < α + β, see figure 12(c), or is empty (if cosh(t/2) > α + β). In the disk-like intersections B ∩ B g ∩ C r the disk extends from −ϕ max to ϕ max , with (see figures 12(c) and 13(b) ) When B ∩ C r is disk-like, then B ∩ B g ∩ C r is either disk-like (when cosh(t/2) < α + β, see figure 13(b) ), or is empty (if cosh(t/2) > α + β). The third step is to evaluate the area S B g (r) of the surface B ∩ B g ∩ C r . To this end we define the auxiliary function in terms of which the areas such as in figures 12(b), 12(c), and 13(b) are The fourth and last step is to compute where r(l) is found from (18), and V B g from (17) and (35). In figure 14 we reproduce three examples of the density P B g (l) for general hyperbolic translations; clearly all plots start abruptly at l min ≥ t. General screw motions We already have all elements needed to obtain the probability density P B g (l) of conjugate pair separations for a general screw motion, thus generalizing the results of the preceding sections 5, 6, and 8. We now make use of all four independent parametres, namely • a = radius of the solid balls B and B g , • b = distance from the centres of the balls to the axis ζ of the isometry, • t = translation of the isometry, measured along the axis, and • ω = angle of rotation of the isometry, around the axis. We shall further write all mathematical expressions in a form appropriate for automatic calculation of P B g (l) in a computer . Without loss of generality for our purposes we assume t ≥ 0 and 0 ≤ ω ≤ π. The separation 2M between the centres of B and B g is now given by (10) cosh and the condition M < a is necessary to have nonempty intersection B ∩ B g . Assuming this condition is fulfilled, the solid lens B ∩ B g has thickness T , radius R, and volume V B g given by (12), (13), and (17), respectively, with M given by (45). The centre of the lens is at a distance σ from the axis ζ, and there always exists one diametre of the lens which is directed perpendicular to the axis ζ. The lens intersects the axis whenever σ < R, or equivalently cosh b cosh t/2 < cosh a. We next imagine an infinite family of sufficiently long, coaxial (axis ζ), cylindrical surfaces C r with variable radius r. We are interested in the intersection of each C r with the solid lens B ∩ B g ; clearly only values of r in the range (r min , r max ) give nonempty intersections B ∩ B g ∩ C r , where Here Θ is the step function with values 0 and 1. Our strategy to approach B ∩ B g ∩ C r is first study B ∩ C r , then B g ∩ C r , and finally the intersection of these two. For a given r ∈ (r min , r max ) we note that B ∩ C r is annulus-like when r < a − b, and disk-like otherwise, so we define where α(a, b, r) and β(b, r) were given in (39). 
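The quantities α and β of eq. (39) also organise the earlier case analysis for a general translation, which can be written down compactly. In the sketch below, β = tanh b tanh r; the extracted text renders α as "cosh a cosh b cosh r", and we assume the intended reading is α = cosh a/(cosh b cosh r), which is consistent with eq. (37) and with the stated equivalence between r < a − b and α > β + 1.

```python
# Hedged sketch of the case analysis for B ∩ B_g ∩ C_r under a general translation,
# following the conditions stated in the text (figures 12 and 13).
import numpy as np

def intersection_type(a, b, t, r):
    """Classify B ∩ B_g ∩ C_r as 'annulus', 'disk' or 'empty'."""
    alpha = np.cosh(a) / (np.cosh(b) * np.cosh(r))   # assumed reading of eq. (39)
    beta = np.tanh(b) * np.tanh(r)
    c = np.cosh(t / 2.0)
    if c > alpha + beta:
        return "empty"
    if r < a - b and c < alpha - beta:               # B ∩ C_r itself annulus-like, small enough t
        return "annulus"
    return "disk"

print(intersection_type(a=2.0, b=0.3, t=0.4, r=0.5))   # 'annulus' for these sample values
```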
Each intersection B ∩ C r is nonempty for φ ∈ (−φ max , φ max ), and lies between the two curves these two curves are drawn on the geometrically flat cylinder C r . The intersection B g ∩ C r is identical to B ∩ C r , but is displaced t cosh r longitudinally on C r , and ω azimuthally. It thus lies between the curves where the term with the Θ function containing 2π in eq. (50) is included to allow automatic computing. As in ref. [3], the area S B g (r) of the intersection B ∩ B g ∩ C r is Finally, the probability density P B g (l) is given by (44) with r(l) coming from (21); we find that In figure 15 a few sample plots of P B g (l) for screw motions in H 3 are given. Conclusion The three plots in figure 15 are the output of a computer program whose inputs are the values of a, b, t, and ω; given these inputs, the program proceeds without any intervention. The three parametres (t, ω, b) related to the screw motion can be extracted from a 4 × 4 matrix M g , which expresses the motion in terms of the minkowskian coordinates (W ; X, Y, Z) [4]. Indeed, it can be shown that the trace T , the sum Σ of the principal minors of order 2, and the time-time coefficient U of the matrix M g are T = 2(cosh t + cos ω), Σ = 2(1 + 2 cosh t cos ω), U = cosh t cosh 2 b − cos ω sinh 2 b; (55) To close this report we show through a concrete example how to use the functions P B g (l) to get information about the topology of the universe. We need first briefly recall the theory that underlies the subject; for details see [2], [5], [6]. Expected (or theoretical) normalized histograms φ(i) of pair separations are decomposed as where φ un (i) is the expected normalized histogram of the uncorrelated pair separations, the i denotes an interval of separations (a bin in the histogram), n is the finite number of objects in the solid ball B, ν g = N g /n = ν g −1 with N g the number of g-pairs with both members inside B, and φ g (i) is the expected normalized histogram of the g-pairs. These φ g (i) are the histogramic counterparts of the functions P B g (l) of [3] and of this report. From (57), and accepting a suggestion by Fagundes and Gausmann [7], we write where φ sc (i) is the expected normalized histogram of pair separations in a simply connected ball with same radius and geometry as B; it is the histogramic counterpart of the function F H (a, l) of [8] and of F H (a, s) of [9]. In the limit n → ∞ the products n[φ − φ sc ] =: ϕ B and n[φ un − φ sc ] =: ϕ B un remain finite, and we write To go from the various histograms f (i) to the corresponding functions f (l) we have simply made the number of bins tend to infinity. The function ϕ B (l) has been called the topological signature of a ball B in a multiply connected space; since in practice the function ϕ B un (l) is usually small valued when compared with both ϕ B (l) and ϕ B Γ (l), the function ϕ B Γ (l) is generally a good approximation of the topological signature ϕ B (l). We now turn to a specific example: that of a ball B in the Seifert-Weber dodecahedral space. This multiply connected hyperbolic three-space is obtained from a regular solid dodecahedron D by pairwise identifying opposite faces using twists of 3/10 of a revolution [10]. We make the centres of B and D coincide (a rather uncopernic assumption), and choose B tangent to the edges of D. We first select two of the 12 matrices of face-pairing isometries of D: sufficient for our purposes. 
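The inversion labelled eq. (56) is not reproduced in the extracted text; assuming it simply inverts the invariants (55), the screw parameters can be recovered as in the following sketch: cosh t and cos ω are the two roots of x² − (T/2)x + (Σ − 2)/4 = 0, and sinh²b = (U − cosh t)/(cosh t − cos ω).

```python
# Sketch (assumed inversion of eq. (55), since the displayed eq. (56) is not
# reproduced here): recover (t, omega, b) from the invariants T, Sigma, U of M_g.
import numpy as np

def screw_parameters(T, Sigma, U):
    s, p = T / 2.0, (Sigma - 2.0) / 4.0              # sum and product of cosh t and cos w
    disc = np.sqrt(s * s - 4.0 * p)
    cosh_t, cos_w = (s + disc) / 2.0, (s - disc) / 2.0   # cosh t >= 1 >= cos w
    # U = cosh t + sinh^2(b) (cosh t - cos w)  =>  sinh^2(b) = (U - cosh t)/(cosh t - cos w)
    sinh2_b = (U - cosh_t) / (cosh_t - cos_w)
    t = np.arccosh(cosh_t)
    w = np.arccos(np.clip(cos_w, -1.0, 1.0))
    b = np.arcsinh(np.sqrt(max(sinh2_b, 0.0)))
    return t, w, b

# Round trip with arbitrary sample values:
t0, w0, b0 = 0.8, 1.1, 0.4
T = 2 * (np.cosh(t0) + np.cos(w0))
Sigma = 2 * (1 + 2 * np.cosh(t0) * np.cos(w0))
U = np.cosh(t0) * np.cosh(b0)**2 - np.cos(w0) * np.sinh(b0)**2
print(screw_parameters(T, Sigma, U))                 # ≈ (0.8, 1.1, 0.4)
```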
Applying (56) to any of M 1 or M 2 we find the values for the translation, the rotation, and the distance from the axis of the isometry to the centre of B (the origin of coordinates). From (45) and (62) we obtain half separation M 1 = 0.996 between the centres of B and B g , a value smaller than the radius a = 1.439 of B. We also need consider 60 other isometries, whose common prototype matrix is These 60 isometries also contribute to (59), since from (45) and (64) we find half separation M 3 = 1.395, a value smaller than a. All other isometries seem to give M > a, so they do not contribute to (59). For the 12 fundamental isometries (62) we find intersections B ∩ B g with volume V B 1 = 1.377 as given by (17); all produce the same spectrum P B 1 (l), which has nonzero values only in the interval l ∈ (1.99, 2.79). On the other hand, for the 60 isometries (64) we find V B 3 = 0.011433, and a spectrum P B 3 (l) with nonzero values only when l ∈ (1.75, 1.99). Finally, the probability density P B sc (l) for pair separations in a hyperbolic ball is the function F H (a, l) of [8], or the function F H (a, s) of [9]; for unitary curvature of the space, the volume of the ball with radius a = 1.439 is V B sc = 18.8. From (59) we then have (see figure 16) Figure 16 Approximate topological signature ϕ B Γ (l) for an observed universe endowed with the Seifert-Weber dodecahedral topology and unitary negative curvature. The centre of observation and that of the dodecahedron coincide, and the event horizon is supposed a = 1.44 away. The discontinuities observed at l = 1.75 and l = 1.99 derive from isometries, as described in the text. Their localization and strength are good indicators of the topology of the universe. The approximate signature ϕ B Γ (l) of figure 16 bears close similarity with the corresponding histograms figure 7 in [6] and figure 3 in [11]. However, some small distortion can be seen, probably arising because the uncorrelated contribution ϕ B un (l), present in (59), was not taken into account. As a matter of fact, we have not been able to obtain the expected ϕ B un (l) neither for the present Seifert-Weber space nor for any simpler 3D nontrivial manifold, such as the three-torus. Even for the two-torus that function has been eluding our efforts; only for the one-torus (a circle) we have already succeeded in finding the ϕ B un (l) [12].
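As a closing numerical check, the volumes quoted for the Seifert-Weber example follow from eqs. (13)-(17) and from the hyperbolic ball volume; the sketch below (ours) should reproduce, up to rounding, V B sc ≈ 18.8, V B 1 ≈ 1.377 and V B 3 ≈ 0.0114.

```python
# Sketch (not from the paper): numerical check of the volumes quoted for the
# Seifert-Weber example, with a = 1.439 and half separations M = 0.996 and 1.395.
import numpy as np
from scipy.integrate import quad

def lens_volume(a, M):
    R = np.arccosh(np.cosh(a) / np.cosh(M))          # eq. (13)
    S = lambda y: 4 * np.pi * np.sinh(y) * np.cosh(y) * (np.arccosh(np.cosh(a) / np.cosh(y)) - M)
    return quad(S, 0.0, R)[0]                        # eq. (17)

a = 1.439
print(np.pi * (np.sinh(2 * a) - 2 * a))   # simply connected ball, quoted as 18.8
print(lens_volume(a, 0.996))              # V_B_1, quoted as 1.377
print(lens_volume(a, 1.395))              # V_B_3, quoted as 0.011433
```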
2014-10-01T00:00:00.000Z
2000-10-29T00:00:00.000
{ "year": 2000, "sha1": "a6dbc2e6ab017f2d3760a0ab47262e543971366d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a6dbc2e6ab017f2d3760a0ab47262e543971366d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
215532269
pes2o/s2orc
v3-fos-license
Direct evidence of Neanderthal fibre technology and its cognitive and behavioral implications Neanderthals are often considered as less technologically advanced than modern humans. However, we typically only find faunal remains or stone tools at Paleolithic sites. Perishable materials, comprising the vast majority of material culture items, are typically missing. Individual twisted fibres on stone tools from the Abri du Maras led to the hypothesis of Neanderthal string production in the past, but conclusive evidence was lacking. Here we show direct evidence of fibre technology in the form of a 3-ply cord fragment made from inner bark fibres on a stone tool recovered in situ from the same site. Twisted fibres provide the basis for clothing, rope, bags, nets, mats, boats, etc. which, once discovered, would have become an indispensable part of daily life. Understanding and use of twisted fibres implies the use of complex multi-component technology as well as a mathematical understanding of pairs, sets, and numbers. Added to recent evidence of birch bark tar, art, and shell beads, the idea that Neanderthals were cognitively inferior to modern humans is becoming increasingly untenable. Unit 4, including more than 50 m 2 of excavation, contains two archaeological levels divided by a sterile deposit. These two levels (4.2 and 4.1) represent two human occupation phases with abundant artefacts and some traces of combustion and diffuse ash lenses. The overlying units (3,2, and 1) are coarse in texture and contain large limestone blocks. These units have yielded only a few scattered artefacts. From a paleoclimatic perspective, layer 4 was deposited during progressively colder and drier conditions with a majority of the fauna being reindeer (Rangifer tarandus) 12 . More than 4,000 artefacts longer than 15 mm have been found in Unit 4 (40 to 50 artefacts/m 3 ). The flaking is mainly of Levallois type, associated with other core technologies, with an assemblage of mostly unretouched flint flakes, blades, bladelets and points made on local flint (collected within 30 km of the site) 13 . Large flakes, blades and points are brought already worked from outside while a debitage in situ provided smaller flakes with various technologies including Levallois. Context of the Flake The flake (G8 128) is a Levallois flake 60 mm long. The artefact, with the adhering cord fragment, was found in situ in level 4.2, 3 meters below the modern surface by the director of the excavations (M.-H. Moncel). Furthermore, the cord fragment was found on the inferior surface of the flake, meaning that the cord fragment entered the deposit contemporaneous with or before the flake. There is no evidence of a burrow or den or other disturbance to the sediment in the well-preserved stratigraphic sequence. Upon excavation, the artefact was immediately placed, unwashed, in a zip-style plastic bag where it remained until microscopic examination. This careful treatment of the artefact precludes further modern contamination. Results and Discussion Samples of stone tools from the site are routinely screened with optical light microscopy. Previously, individual twisted plant fibres, some of which were multicellular, were reported on stone tools from Abri du Maras 6 . The authors suggested that these might be remnants of cordage, but the remains were too fragmentary to be conclusive. During further screening with reflected light microscopy, we discovered a fragment of string on a Levallois flake (sample G8 128) from Level 4.2 (Fig. 2). 
The flake was recovered in situ with the cord adhering to its inferior surface and was covered by sediment and breccia, demonstrating that the cord is at least contemporary with the deposition and burial of the flake and is therefore Middle Paleolithic in origin. The specimen was also imaged using an environmental SEM imaging platform of the National Museum of Natural History (MNHN, Paris) and a Hirox 2D/3D digital microscope at the Centre for Research and Restoration of the Museums of France (C2RMF, Paris). Examination of photomicrographs revealed 3 bundles of fibres with S-twist which were then plied together with a Z-twist to form a 3-ply cord 14 . The cord is approximately 6.2 mm in length and approximately 0.5 mm in width (Figs. 3, 4). The morphology of the cord fragment closely resembles replica cords produced in modern materials (see SI Fig. 1). Based on the presence of bordered pits 15 with torus-margo membranes which are arranged in parallel lines, the fibres resemble gymnosperm (conifer) and come from the inner bark 16,17 . The torus is surrounded by a margo that controls the pressure in the conifer water transport system,; this mechanism is a strategy that distinguishes gymnosperm from angiosperm (flowering plants) 18,19 . Juniper, spruce, cedar, and pine bast have been used archaeologically and historically in the manufacture of cordage and textiles (see Supplemental Information). The presence of pine at the Abri du Maras is confirmed through palynological 20 and charcoal analysis 12 . We also collected modern fibre samples from 18 materials that were present during the excavations and examined them microscopically. None of these matched the fibres from sample G8 128. In addition to the cord fragment described here and examples of twisted fibres illustrated in previously published photos 6 , a number of artefacts have plant/wood fibres adhering to their surfaces but do not exhibit sufficient twisting or plying to confidently identify them as remains of cordage. In some cases these show some twists while in other cases they do not. It is possible that these fibres are related to cordage or cordage manufacture, but, thus far, the sample on flake G8 128 is the only one to exhibit clear structure of a multiple ply cord. Figures 5-7 show fibres on artefacts L6 791 (Level 4.2) and I6 333 (Level 4.1). Both artefacts were found in situ and were handled in the same manner as artefact G8 128 and the cordage fragment (unwashed, placed in zip style plastic bags until microscopic analysis). We applied FT-Raman spectroscopy to the cord on sample G8 128 to determine the composition of the preserved fibres (Fig. 8). Raman spectrometry was the most suitable technique considering is high spatial resolution and its non-invasive application. This technique is becoming increasingly common in analysis of residues on stone tools 21,22 and is used here to corroborate visual identification of plant residues. Raman spectra analysis exhibits several bands characteristic of cellulose (1378, 1338, 1120, 1090, 897, 519 cm −1 ). The weaker presence of lignin is also attested to by the band at 1602 cm −1 23,24 . Several micro-analyses were performed at different locations on the cord sample and yielded the same spectral signature illustrated in Fig. 5. Analyses focused outside the cord sample did not exhibit an organic substance signature. These results confirm that the molecular nature of these residues is consistent with an interpretation of wood components. 
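For illustration only, the band assignment described above can be expressed as a simple comparison of observed peak positions against the reference wavenumbers cited in the text; the tolerance and the example peak list in the sketch below are assumptions for illustration, not values or code from the study.

```python
# Illustrative sketch (not part of the published analysis): match picked Raman peak
# positions to the reference bands cited in the text (cellulose: 1378, 1338, 1120,
# 1090, 897, 519 cm-1; lignin: 1602 cm-1).  Tolerance and peaks are assumptions.
REFERENCE = {"cellulose": [1378, 1338, 1120, 1090, 897, 519], "lignin": [1602]}

def assign_bands(observed_peaks, tolerance=8.0):
    """For each observed peak, list any reference band lying within the tolerance."""
    assignments = {}
    for peak in observed_peaks:
        hits = [(name, band) for name, bands in REFERENCE.items()
                for band in bands if abs(peak - band) <= tolerance]
        assignments[peak] = hits or [("unassigned", None)]
    return assignments

# Hypothetical peak positions picked from a spectrum:
for peak, hits in assign_bands([520, 895, 1092, 1121, 1340, 1380, 1600]).items():
    print(peak, hits)
```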
Spectra from modern Quercus and Juniperus are included as examples of hardwood and softwood respectively. Note that the peaks match, but that obviously the Maras sample has been degradated. The proportions of lignin and cellulose vary according to wood species and in particular between hardwood (angiosperms) and softwood (gymnosperms, including conifers). Holocellulose (cellulose and hemicellulose) is the main component of woods with the proportions varying between 65-80% for softwood and between 70-90% for hardwood, the remainder consisting mainly of lignin. Cellulose is typically more sensitive to degradation than lignin, with generally low cellulose content in archaeological samples 25 . However, burial in an alkaline environment like carbonaceous bedrock can induce a preferential degradation of lignin and hemicellulose 26 . Aging could have thus modified the relative proportions of lignin and cellulose in the analyzed fibres and it is not possible to reconstruct the initial composition because of their alteration. These results highlight the exceptional preservation of this organic material with the conservation of the cellulose molecular structure. The localization of this material, in contact with a carbonaceous breccia, could have favored the deposits of calcite minerals protecting the fibres from further alterations. Previous work has also demonstrated that the flint at Abri du Maras is covered with a microscopic post-depositional film of chalcedony, which may also aid in preservation 27 . The cord is not necessarily related to the use of the tool. Its presence on the inferior surface of the flake during excavation demonstrates that it was deposited before or contemporaneous with the flake. If it was contemporaneous with the deposition of the flake, it could have been wrapped around it as part of a haft or could even have been part of a net or bag. Previous analysis of impact fractures on artefacts from the site suggests the use of hafting and provide potential support for this possibility 6 . If it was deposited before the flake, it could represent a number of different items but nonetheless illustrates the use of fibre technology at the site. At present, the earliest possible evidence of fibre technology are the shell beads from Cueva Anton with a minimum age of 115 ka 28 . Shell beads may have been strung or tied to clothing as personal ornamentation, although this could have been accomplished with sinew or a leather thong as well as cordage. Conard and Malina 29 have recently posited that perforated ivory artefacts (lochstäbe) from Aurignacian sites in the Swabian Jura were used www.nature.com/scientificreports www.nature.com/scientificreports/ for spinning plant fibres for rope-making or textiles. Other early indirect evidence of fibre technology is impressions on fired clay from Gravettian sites in Moravia as early as 28 ka 3 . These impressions reveal weaving technology and the production of textiles. The complexity of the textiles suggests that they are part of a well-established tradition that began much earlier. www.nature.com/scientificreports www.nature.com/scientificreports/ In terms of the actual preservation of fibre technology, the Upper Paleolithic waterlogged site of Ohalo II yielded three fragments of fibres with a Z twist approximately 19 ka 30 . Remnants of a 6-ply cord were found at Lascaux and date to approximately 17 ka 31 . The cord fragment from Abri du Maras is older still, dating to between 41 and 52 ka. 
Thus, it appears increasingly likely that fibre technology is much older than previously thought. While it is clear that the cord from Abri du Maras demonstrates Neanderthals' ability to manufacture cordage, it hints at a much larger fibre technology. Once the production of a twisted, plied cord has been accomplished it is possible to manufacture bags, mats, nets, fabric, baskets, structures, snares, and even watercraft 3,4,32 . The cord from Abri du Maras consists of fibres derived from the inner bark of gymnosperms, likely conifers. The fibrous layer of the inner bark is referred to as bast and eventually hardens to form bark. In order to make cordage, www.nature.com/scientificreports www.nature.com/scientificreports/ Neanderthals had extensive knowledge of the growth and seasonality of these trees. Bast fibres are easier to separate from the bark and the underlying wood in early spring as the sap begins to rise. The fibres increase in size and thickness as growth continues. The best times for harvesting bast fibres would be from early spring to early summer. Once bark is removed from the tree, beating can help separate the bast fibres from the bark. Additionally, retting the fibres by soaking in water aids in their separation and can soften and improve the quality of the bast. The bast must then be separated into strands and can be twisted into cordage 4 . In this case, three groups of fibres were separated and twisted clockwise (s-twist). Once twisted the strands were twined counterclockwise (Z-twist) to form a cord. Ropes and baskets are central to a large number of human activities. They facilitate the transport and storage of foodstuffs, aid in the design of complex tools (hafts, fishing, navigation) or objects (art, decoration). The technological and artistic applications of twisted fibre technologies are vast. Once adopted, fibre technology would have been indispensable and would have been a part of everyday life. In reconstructing land use patterns, paleoanthropologists typically give priority to activities such as hunting and acquiring lithic raw material. Fibre acquisition, processing, and production may have also played an important role in scheduling daily and seasonal activities. String and rope manufacture are time intensive activities and large amounts of string are required for the production of carrying objects such as bags. In an ethnomathematical study of the Maya, Chahine 33 found that a 1.3 foot Maguey bag required over 400 meters of cordage. Thinking of the environment as including both natural and anthropogenic objects makes it possible to ask several questions about the choices made by cultural groups. Topography, climate, and distribution of plant and animal species are all key factors to consider. Plants play an important role not only in the material conception of objects but also in the formation of the thought of a culture, its representation of the world and its cosmogony 4 . Overall, cordage manufacture has a complex chaîne operatoire. Although wooden artefacts are rare, other finds attest to Neanderthals detailed knowledge of trees. They chose boxwood for its density and used fire in the production of "digging sticks" at Poggetti Vecchi approximately 175 ka 2 . In the construction of the Schöningen spears, they decentered the point to increase strength 1 . Furthermore, Neanderthals were manufacturing birch bark tar in the Middle Pleistocene of Italy 34 and at the sites of Konigsaue 35 and Inden-Altdorf in Germany 36 . 
Based on this evidence, the utilization of bast fibres from trees is an obvious outcome of their intimate arboreal knowledge. While some have suggested that cordage manufacture may have been a gendered activity 37 , we feel our current evidence is inadequate to address that question. Understanding archaeological finds in terms of taskscapes 38 , locating socially-situated tasks in the landscape, allows us to more fully appreciate the complexity of Neanderthal technology and social life. The production of cordage is complex and requires detailed knowledge of plants, seasonality, planning, retting, etc. Indeed, the production of cordage requires an understanding of mathematical concepts and general numeracy in the creation of sets of elements and pairs of numbers to create a structure 4,39 . Indeed, numerosity has been suggested as "one possible feral cognitive basis for abstraction and modern symbolic thinking" 40(p.205) . Malafouris 41 has suggested that a material instantiation of number concepts was necessary for the emergence of cognitive numerical ability. The production of cordage, with its use of pairs and sets, may represent one such instantiation. The production of the cord from Abri du Maras requires keeping track of multiple, sequential operations simultaneously. These are not just an iterative sequence of steps because each has to have access to the previous stages. The bast fibres www.nature.com/scientificreports www.nature.com/scientificreports/ are first s-twisted to form yarn, then the yarns z-twisted (in the opposite direction to prevent unravelling) to form a strand or cord 42 . Cordage production entails context sensitive operational memory to keep track of each operation. As the structure becomes more complex (multiple cords twisted to form a rope, ropes interlaced to form knots), it demonstrates an "infinite use of finite means" and requires a cognitive complexity similar to that required by human language 43,44 . The cord fragment from Abri du Maras is the oldest direct evidence of fibre technology to date. Its production demonstrates a detailed ecological understanding of trees and how to transform them into entirely different functional substances. Fibre technology would have been an important part of everyday life and would have influenced seasonal scheduling and mobility. Furthermore, the production of cordage implies a cognitive understanding of numeracy and context sensitive operational memory. Given the ongoing revelations of Neanderthal art and technology 2,45,46 , it is difficult to see how we can regard Neanderthals as anything other than the cognitive equals of modern humans. Materials and Methods Stone tools from the site of Maras are minimally handled and placed in a sealed plastic bag until they can be analyzed microscopically for the presence of possible residues. The initial screening was done via reflected light microscopy at magnifications of 20-475x using DinoLite digital microscopes. Potential residues were photographed with Dinocapture 2.0 software and their position recorded on a line drawing of the artefact. Analysis of flake G8 128 revealed twisted fibre bundles. This specimen was also viewed using an environmental scanning electron microscope (Hitachi SU 3500) at a variety of magnifications and was examined with a Hirox RH-2000 (MXB-5000REZ lens) at the 2D/3D imaging platform of the National Museum of Natural History (MNHN, Paris). 
To further characterize the specimen, we used Fourier Transform Raman spectroscopy (FT-Raman) to non-invasively analyse the molecular composition of fibre residues. FT-Raman spectroscopy using a Near-Infrared excitation source at 1064 nm was chosen to limit the effect of fluorescence that can occur when analyzing organic matter. Analyses were performed on a Bruker RFS 100/S system from MONARIS lab based on a Nd-YAG laser source, a Michelson-type interferometer and a nitrogen-cooled germanium detector. The FT-Raman spectrometer is coupled with a microscope allowing analyses with a spot size of about 15 µm using a 100x long working distance infrared objective. In order to avoid sample alteration during analysis, a laser source power of 500 mW was used corresponding to ~ 120 mW on the sample. The non-contact analysis was performed by positioning the artefact directly on the microscope stage, focusing the laser beam at the desired location for analysis. Spectra were recorded between 50 and 3500 cm −1 with a spectral resolution of 4 cm −1 , and between 10,000 to 40,000 scans were accumulated in order to obtain an improved signal to noise ratio. Multiple micro-analyses on different areas of the cord yielded the same spectral signature. Analyses outside the cord sample did not yield spectra with an organic signature.
2020-04-09T18:38:41.784Z
2020-04-09T00:00:00.000
{ "year": 2020, "sha1": "0ffe010cb9413cdcac5f943f1077541f08206a80", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-61839-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "18434682dab657566251a19f0791eee89b7654e3", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
267185831
pes2o/s2orc
v3-fos-license
Integrative data analysis to identify persistent post-concussion deficits and subsequent musculoskeletal injury risk: project structure and methods Concussions are a serious public health problem, with significant healthcare costs and risks. One of the most serious complications of concussions is an increased risk of subsequent musculoskeletal injuries (MSKI). However, there is currently no reliable way to identify which individuals are at highest risk for post-concussion MSKIs. This study proposes a novel data analysis strategy for developing a clinically feasible risk score for post-concussion MSKIs in student-athletes. The data set consists of one-time tests (eg, mental health questionnaires), relevant information on demographics, health history (including details regarding the concussion such as day of the year and time lost) and athletic participation (current sport and contact level) that were collected at a single time point as well as multiple time points (baseline and follow-up time points after the concussion) of the clinical assessments (ie, cognitive, postural stability, reaction time and vestibular and ocular motor testing). The follow-up time point measurements were treated as individual variables and as differences from the baseline. Our approach used a weight-of-evidence (WoE) transformation to handle missing data and variable heterogeneity and machine learning methods for variable selection and model fitting. We applied a training-testing sample splitting scheme and performed variable preprocessing with the WoE transformation. Then, machine learning methods were applied to predict the MSKI indicator prediction, thereby constructing a composite risk score for the training-testing sample. This methodology demonstrates the potential of using machine learning methods to improve the accuracy and interpretability of risk scores for MSKI. INTRODUCTION Concussions have been identified by both the US National Institutes of Health (NIH) and the US Centers for Disease Control and Prevention (CDC) as a serious public health problem, with an annual incidence of up to 3.8 million and associated costs of approximately US$22 billion. 1 Healthcare professionals that manage concussions are guided by consensus and position statements that make recommendations for a multifaceted approach to clinical concussion care.However, these clinical assessments may lack sensitivity to identify recovery as deficits in numerous sophisticated assessments (eg, neuroimaging, blood-based biomarkers and other instrumented measures) persist beyond clinical recovery, suggesting athletes may return to participation (RTP) before complete neurological recovery. 2This premature RTP may result in the ~2× elevated musculoskeletal injury (MSKI) risk in the year following a concussion, which has been identified across diverse sports settings, ages and sexes. 3These MSKIs carry enormous societal and economic consequences affecting up to 12 million people annually, leading to 20 million lost school days, lost sports time and costing ~US$33 billion annually in healthcare costs. 4Further, an MSKI also increases the risk of chronic physical complications, leading to reduced physical activity and may increase the risk of chronic health conditions such as diabetes and cardiovascular disease. 
5hus, there is a need to identify those at the WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Concussions are associated with an increased risk of musculoskeletal injuries (MSKIs), but there is currently no effective way to identify which individuals are at the highest risk. WHAT THIS STUDY ADDS ⇒ This study provides preliminary evidence that the proposed risk score could be a valuable tool for clinicians to identify student-athletes at high risk for post-concussion MSKIs. HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ This study provides a novel and effective data analysis strategy for developing risk scores for postconcussion MSKIs. Open access greatest risk for subsequent MSKI to implement injury risk reduction techniques.Multiple attempts have been made to identify those athletes at the highest risk for post-concussion MSKI but with limited success.Impairments in dual-task gait have been found in high school and collegiate athletes who experienced post-concussion MSKI, but these were weak associations without prognostic capabilities. 6Multifaceted clinical examinations, widely used by healthcare providers, are cost-effective and clinically feasible tests 7 ; however, individual assessments (eg, symptoms, cognitive testing) were not effective in identifying elevated MSKI risk. 8Similarly, clinical mental health measures (eg, Brief Symptom Inventory (BSI-18), Hospital Anxiety and Depression Scale (HADS)) were also not predictive of subsequent injury, although satisfaction with life had a limited association. 9Others have posited broader risk factors such as persistent neurocognitive deficits 10 ; however, while plausible, both lack empirical evidence.Independent of concussion, injury prediction is notoriously difficult, and standard interventional studies have largely been unsuccessful, 11 with the notable exception of ACL screening protocols. 12Thus, developing a postconcussion MSKI prediction model requires innovative approaches. An ideal risk-scoring model of post-concussion subsequent MSKI would determine a minimal set of predictive clinically feasible variables (eg, demographics, health history, concussion characteristics and recovery) that can identify individuals at high risk for a subsequent MSKI.Thus, an integrative statistical model is needed to combine these disparate test measurements with demographic information and health history to create a composite risk score model for subsequent MSKI.Model fitting is further complicated by missing data, prevalent in sports medicine research and clinical practice, often due to time constraints during assessments and patient non-compliance, which may lead to biased or incomplete risk inferences and ineffective interventions. 13To overcome this systemic issue, suitable statistical methodologies such as data imputation techniques are crucial for generating reliable risk models while considering missing data patterns. 
14erein, we propose to generate a composite risk score based on clinically feasible information for postconcussion MSKI risk through a two-step process.First, we propose a weight-of-evidence (WoE) transformation, [15][16][17] which naturally handles missing data and heterogenous variables by replacing the values with univariate risk scores.Second, we propose using a variable selection algorithm and logistic regression to form the multivariate composite risk score (the details will be described in the subsequent methodology section).Our approach overcomes the challenges stemming from numerous irrelevant covariates and prevalent missing values.Overall, this general and versatile data analysis and strategy is a step towards addressing the pressing need to understand post-concussion recovery and MSKI risk. Research aims and approach With the proposed novel analysis strategies, this study aims to identify post-concussion MSKI risk categories.We also aim to develop a clinical risk score similar to Zemek's prediction of persistent concussion symptoms approach 18 for post-concussion MSKI.These clinically feasible approaches could allow clinicians to apply targeted interventions with known injury risk reduction approaches if successful. IMPLEMENTED METHODOLOGY We have developed an extensive longitudinal concussion data set (2015-2022), which includes data on 211 studentathlete concussions, including demographic information, medical history, concussion injury and recovery information, and common data elements (CDEs) across clinical milestones.Data collected between 2015 and 2021 were part of the Concussion Assessment, Research and Education (CARE) Consortium. 19All data collection occurred at the University of Delaware, which is in National Collegiate Athletic Association's (NCAA) Division I and the Mid-Atlantic region of the USA. The time to complete all tests was 50-60 min at baseline and 30-40 min at three follow-up time points following concussion: (1) Acute (<48 hours post-concussion), ( 2) Asymptomatic (when no concussion symptoms are reported and (3) Return to Play (RTP) (when the studentathlete returns to full participation without restriction).Data were extracted and compiled by the research team starting in February 2022 and were updated through January 2023 as new concussions occurred.Further, MSKI were updated until March 2023. Patient and public involvement Former NCAA athletes provided site-level feedback regarding study procedures, which was incorporated into the CARE Consortium study design. Clinical assessments The selected CDEs were collected following standard procedures established in the literature. 19Relevant confounding variables (eg, age, sex, injury mechanism and presentation, prior concussion and MSKI history) were collected as described below.All participants provided written and oral informed consent, and some participants consented to only a subset of access, as approved by the University of Delaware institutional review board (IRB).Each assessment has been thoroughly described.(online supplemental table 2).Briefly, neurocognitive functioning was evaluated through the computerised test Immediate Post Concussion Assessment Tool 19 with composite scores representing verbal memory, visual memory, motor speed and reaction time.The Standardised Assessment of Concussion assesses mental status, 20 and the Balance Error Scoring System evaluates postural stability. 
21We used two measures of symptom reporting, the Sport Concussion Assessment Tool 5 (SCAT5) symptom list, 19 which lists 22 common Open access concussion symptoms weighted from 0 (symptom not present) to 6 (severe symptom).We used the total number of symptoms and symptom severity from the SCAT5.The BSI-18 19 is a self-report questionnaire that evaluates psychological distress and psychological disorders like depression, anxiety and somatisation.The King-Devick test was used to evaluate saccadic eye movements, 19 and the Vestibular Ocular Motor Screen 19 was used to evaluate vestibular and oculomotor function and symptoms.Tandem gait was used to evaluate gait and balance control under single and dual-task conditions, which involves performing a secondary task while walking. 22Lastly, both the reliable and valid Satisfaction with Life Scale and the HADS evaluated participants' quality of life. 19 Electronic medical records The participants' MSKI history was obtained by accessing the University of Delaware SportsWareOnline (Computer Sports Medicine, Stoughton, Massachusetts, USA) electronic health record through IRB-approved approaches and with the participant's informed consent.The MSKI was categorised by region, side, severity and time loss (table 1). 23Time from each injury in relation to a concussion was calculated in days, with a negative value indicating the MSKI occurred before the concussion and a positive value indicating that the MSKI occurred after the concussive injury.Finally, the total number of unique MSKI was calculated for each participant (range=0-13 injuries).For this study, we only examined MSKI that occurred after a concussion. Challenges and justification for data analysis strategy The preliminary analysis efforts are met with four issues that make effective MSKI risk modelling challenging.First, incomplete and missing data is a substantial challenge as prospectively assessing intercollegiate athletes in-season has inherent limitations; however, simply ignoring the missing data or using imputation may result in biased estimation and inaccurate inference. 24Second, our initial data exploration revealed non-linear and non-monotone relationships between the covariates and the MSKI risk, making it difficult to justify using a linear model such as logistic regression that assumes a monotone variable association with the risk.Third, our electronic health records contain a set of variables that are measured at four time points (baseline, acute, asymptomatic, RTP), which may hold strong potential for building clinically informative risk scores; however, they also pose a technical challenge to identify a (reasonably modest-sized and explainable) set of important variables from all possible pairs of time point and measurement.There are also difficulties in comparing the relative importance of categorical and continuous variables for interpretation purposes.Finally, dimensionality increases when we attempt to categorise continuous variables or encode categorical variables (see online supplemental table 1 for the complete list of categorical and continuous variables in our study). 
][17] These scores operate on the same scale, so it simplifies comparison across diverse data types.Additionally, it helps resolve the possible non-linear relationships between differing values and risk and avoids the need to increase the number of variables excessively.Subsequent modelling using variable selection methods such as Recursive Feature Elimination 25 and Least Absolute Shrinkage and Selection Operator (LASSO) 26 27 are then applied to the transformed variables to select a minimal set of transformed patient variables for logistic regression analysis, which combines variables into a composite score that quantifies the risk for subsequent MSKI. MSKI data analysis The data set consists of one-time tests (eg, mental health questionnaires), relevant information on demographics, health history (including details regarding the concussion, such as day of the year and time lost) and athletic participation (current sport and contact level) that are collected at a single time point as well as multiple time points of the clinical assessments (baseline and follow-up time points after the concussion).The follow-up time point measurements are treated as individual variables and as differences from the baseline. The statistical analysis can be described in four steps following the discussion of the MSKI study challenges and our proposed general methodology above. Step 1. Apply a training-testing sample splitting scheme with a 2-to-1 ratio of their sample sizes. Step 2. Perform variable preprocessing with the WoE transformation.Technically, the WoE transformation is directly applicable to a discrete-valued random variable and is a function ) that replaces the discrete value .As a difference of the log-probabilities, the WoE value is large and positive if the variable value occurs more frequently with an MSKI than without an MSKI; conversely, if WoE is large and negative, then the variable value is more frequently given no MSKI than MSKI. In practice, the true probabilities are replaced by empirical estimates.Consequently, with limited data, too many discrete values lead to poor estimates for values with few occurrences. To summarise the WoE of a variable across all variable values, the information value (IV) is computed, , which is also known as Jeffrey's divergence between the conditional distribution functions of the variable given the MSKI outcome (any injury regardless of severity or time loss). 28 29Larger divergence values between the variable's conditional distributions correspond to more informative variables (higher IV values). For a continuous variable ∼ X ∈ R , defining the conditional distribution and the computation of the WoE transform requires the discretisation/binning of the variable values into discrete ranges.This discretisation is achieved by searching for the optimal binning (number of bins and bin edges) that maximises the IV.Ordinal variables can be grouped in the same manner.For categorical variables, maximising the IV can also consist of grouping different categories.Once the optimal binning is determined, the WoE transformation is applied to the discretised variable described above. Step 3. 
Apply machine learning methods to fit a combination of the WoE-transformed measures to predict the MSKI indicator prediction, thereby constructing a composite risk score.Specifically, the well-established high-dimensional regression methods, including the LASSO 27 and the high-dimensional sufficient dimension reduction, 26 can be applied for both variable selection and constructing the optimal linear combinations of the WoE measures.After modelling fitting, the linear coefficients can be examined to quantify the contribution of different variables. Step 4. Apply the predictive model from Step 3 to the testing data set and evaluate the results' specificity and sensitivity compared with logistic regression with all original variables. CONCLUSIONS We present a novel and effective data analysis strategy for developing risk scores for post-concussion MSKIs.By replacing each variable with its estimated univariate risk score, we address the challenges of MSKI risk modelling, including missing data and variable heterogeneity.Our method also simplifies comparison across diverse data types and identifies and accounts for non-linear relationships between different variables without adding too many variables to the model.If successful in a larger data set, these clinically feasible approaches could help clinicians develop and implement targeted interventions that reduce the risk of post-concussion MSKIs. Twitter Melissa Anderson @MelissaAndrsn and Thomas A Buckley @ConcussionUd Contributors Conceptualisation and methodology (TAB, AB, WQ), supervision (TAB, AB, WQ), data curation (MA, CCC), writing-original draft (MA), writing-review and editing (MA, CCC, AB, WQ, TAB).Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research.Refer to the Methods section for further details. Patient consent for publication Not applicable. Funding National Institute of Health: National Institute of Neurological Disorders and Stroke (NINDS).TAB (PI), AB, WQ.Integrative Data Analysis to Identify Persistent Post-Concussion Deficits and Subsequent Musculoskeletal Injury Risk.Award: 1R21NS122033-01A1. 3 September 2021 to 31 July 2023 Competing interests All authors have read and understood BMJ policies on declaration of interests.MA, AB, WQ and CCC declare that they have no competing interests.TAB has a research contract with StateSpace. Table 1 Categorisation schema of musculoskeletal injuries
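The four analysis steps above can be illustrated with a minimal Python sketch. This is an assumed implementation on toy data, not the study's code: quantile binning stands in for the IV-maximising binning search of Step 2, missing values form their own bin, an L1-penalised (LASSO-type) logistic regression performs the variable selection of Step 3, and sensitivity and specificity are reported on the held-out third of the sample as in Steps 1 and 4. All variable names and data are hypothetical placeholders.

```python
# Assumed end-to-end sketch of Steps 1-4 on toy data (not the study's code).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

def woe_transform(x: pd.Series, y: pd.Series, n_bins: int = 5, eps: float = 0.5):
    """Step 2: WoE per bin, information value, and the WoE-recoded column.
    y is 1 for a post-concussion MSKI and 0 otherwise."""
    bins = pd.qcut(x, q=n_bins, duplicates="drop").cat.codes   # code -1 is the missing-value bin
    counts = pd.crosstab(bins, y).reindex(columns=[0, 1], fill_value=0) + eps  # smoothed counts
    p_injury = counts[1] / counts[1].sum()        # P(bin | MSKI)
    p_healthy = counts[0] / counts[0].sum()       # P(bin | no MSKI)
    woe = np.log(p_injury / p_healthy)            # positive where the bin favours MSKI
    iv = ((p_injury - p_healthy) * woe).sum()     # Jeffrey's divergence (information value)
    return woe, iv, bins.map(woe)

# Toy data: one hypothetical continuous measure with missing values, plus noise columns.
rng = np.random.default_rng(0)
x = pd.Series(rng.normal(size=300)); x.iloc[::17] = np.nan     # some missed assessments
y = pd.Series((rng.random(300) < 0.25 + 0.15 * (x.fillna(0) > 1)).astype(int))
woe, iv, x_woe = woe_transform(x, y)
print("information value:", round(iv, 3))

X_woe = pd.DataFrame({f"woe_var{i}": rng.normal(size=300) for i in range(12)})
X_woe["woe_var0"] = x_woe                                      # WoE-recoded predictor

# Step 1: 2-to-1 training/testing split.
X_tr, X_te, y_tr, y_te = train_test_split(X_woe, y, test_size=1/3, stratify=y, random_state=0)

# Step 3: L1-penalised logistic regression selects variables and yields the
# linear composite risk score.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
print("retained variables:", list(X_woe.columns[model.coef_[0] != 0]))
risk_score = model.decision_function(X_te)                     # composite risk score

# Step 4: sensitivity and specificity on the held-out sample.
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```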
2024-01-24T17:00:38.037Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "fbb84cb65c7bfcbf58d8b5d93734dba1cafac8fe", "oa_license": "CCBYNC", "oa_url": "https://bmjopensem.bmj.com/content/bmjosem/10/1/e001859.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "31ee74123e94aed61b09cb9eb0ccd9a9ce15603b", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
46846618
pes2o/s2orc
v3-fos-license
A cast partial obturator with hollow occlusal shim and semi-precision attachment A maxillofacial patient's quality of life is distorted and social integration becomes difficult. An obturator is a maxillofacial prosthesis used to close a congenital or acquired tissue defect, primarily of the hard palate and/or contiguous alveolar/soft-tissue structures. Subsequently, it restores the esthetics, speech, and function. The present clinical report aimed for the prosthetic rehabilitation of a maxillectomy defect by the incorporation of a semi-precision attachment as PRECI-SAGIX – male part of 2.2 mm on fixed partial denture (#22 and #23 teeth) and matrix – plastic female part of size 2.2 mm and height 4.2 mm of yellow on cast partial in polymer base. It aids in the retention of a hollow lightweight obturator. The technique also described the method to make a bulbless obturator with a hollow self-cured acrylic resin occlusal shim. A patient is quite satisfied with bulb less, lightweight cast partial and hollow shim palatal obturator. Introduction The patient's acceptance often collapses return to normal state once the patient suffered from acquired defects such as carcinomas and they also suffer from psychological, surgical as well as functional trauma. [1] Surgical reconstruction of maxillectomy defects is not always possible because of the general health of the patient. [2] The primary goal of a prosthetic obturator is closure of maxillectomy defect and separation of oral cavity from sinonasal cavities. [3] According to GPT 9-2017, obturator has been defined as prosthetic rehabilitation of a dentulous maxillectomy patient with utilization of the remaining palate, the defect, remaining dentition and soft tissues to maximize retention, stability, and support. In the cases of large defects, the weight of the obturator is a major concern, a maxillofacial prosthesis used to close, cover, or maintain the integrity of the oral and nasal compartments resulting from a congenital, acquired, or developmental disease process, such as cancer, cleft palate, and osteoradionecrosis of the palate; the prosthesis facilitates speech and deglutition by replacing those tissues lost because of the disease process and can, as a result, reduce nasal regurgitation and hypernasal speech, improve articulation, deglutition, and mastication; an obturator prosthesis is classified as surgical, interim, or definitive and reflects the intervention period used in the maxillofacial rehabilitation of the patient; prosthodontic restoration of a defect often includes use of a surgical obturator, interim obturator, and definitive obturator. [4] A hollow obturator is the treatment of choice in such cases. Adequate retention of the obturator is also a critical factor for its function. Obturators are frequently associated with problems that result from lack of retention and stability. This leads to traumatic occlusion and failure to maintain an acceptable oronasal or oroantral seal. For a successful restoration, the patient must feel that they can socialize without impediment. They must have acceptable speech, dental appearance, and satisfactory oral function. [5] Precision attachments have been used in maxillofacial prosthetics for quite some time now. [1] The current clinical report describes the cast partial obturator (without bulb) and incorporation of a semi-precision attachment along with a part of a hollow rim (acrylic shim) to aid in retention. 
Materials and Methods A 24-year-old male patient was referred for the restoration of the palatal defect. The patient had undergone maxillectomy for cemento-ossifying fibroma involving the right side of the maxilla. Extraoral examination revealed gross facial asymmetry with collapsed midfacial region on the right side of the face [ Figure 1]. Intraoral examination revealed the defect extended from the buccal mucosa into the midpalatine region, medially and anteriorly from the left central incisor region to the posterior extent of the hard palate [ Figure 2]. The defect was classified as Aramany's class IV defect with a curved arch form. [6] Due to the scar formation after surgery, the sulcus on the affected side was obliterated with increased inter-ridge distance which would further compromise the retention, function, and prognosis of the prosthesis. It required that the planned prosthesis be lightweight with adequate retention so that the patient could overcome difficulty in speech, deglutition, and respiration. A thorough medical and dental history was taken and the patient counseled to reduce his emotional anxiety. Maxillary and mandibular diagnostic impressions were made with irreversible hydrocolloid (Algitex DPI, Wallace Street, Mumbai) using stock trays and diagnostic casts of Type III dental stone (Kalabai Mumbai) were retrieved [ Figure 3]. A heavy prosthesis usually affects the function of the prosthesis, and since the buccal sulcus on the affected side was obliterated, it was planned to fabricate a hollow occlusal shim obturator with extracoronal semi-precision attachments without the bulb (Preci-Sagi × 2.2 castable male par and female standard size ø 2.2: height: 4.1 mm-ø 4.2 mm yellow color, Ceka/Preciline, Waregem, Belgium) to enhance the retention of the obturator prosthesis. The complete oral prophylaxis and the surveying of upper diagnostic cast were performed. After it, the left lateral incisor and canine were prepared and an impression was recorded with elastomeric impression material (Virtual, Ivoclar Vivadent AG/Liechtenstein, Germany) and the temporization of reduced teeth was done. While fabricating the wax pattern, both the copings were splinted and a castable semi-precision attachment incorporated into the wax pattern with the help of a surveyor. Casting was done and after metal try-in in the patient's mouth, ceramic was fired, and the splinted crowns were cemented with GC-Type 1 [ Figure 4]. After cementation of crowns, a second diagnostic impression is recorded by stock metal tray from irreversible Figure 5] after reducing the recorded 1.00 mm green stick border and final cast procured. The cast was again surveyed and the framework designed in accordance with Kennedy class I removable partial denture design principles [ Figure 6]. The casted framework was tried in the patient's mouth and adjusted for proper fit. Acrylic denture base was made on the framework and jaw records were taken and transferred to a semi-adjustable articulator. Teeth were arranged. Waxed up dentures were tried and checked for retention, stability, support, phonetics, and esthetics in the patient's mouth. Adjustments were made accordingly. The trial dentures were further waxed carved and finished [ Figure 7]. 
Fabrication of hollow occlusal rim (shim)

• After flasking the trial cast partial denture with teeth for acrylization in the flask base with dental plaster, a customized wax bead (Preparation wax, red, 0.5 mm; Bego, Germany) was adapted on the dental stone cast in the flask, 2 mm short of the cervical level of the teeth on the affected side, over the palatal and buccal aspects
• This bead serves as a guideline for the height of the shim wax block. A putty index (elastomeric impression material) is made below the bead wax [Figure 8]
• After counter-flasking, dewaxing was done. The putty index is replaced on the base flask and molten wax (Modelling wax, Samit®, Jhandewalan, Delhi) is poured into the index up to the level of the bead [Figure 9]. The wax is allowed to cool down
• About 1–2 mm of wax is scraped from the walls of the wax block to make space for the acrylic shim. Autopolymerizing resin is mixed and, once in the dough stage, a uniformly thick layer (1–2 mm) is applied over the wax to form the shim. Once it has cured, two holes are made in the shim and hot water is flushed forcefully through one hole to eliminate the wax through the other hole. The attached hollow shim is now ready for the obturator
• The obturator is processed with heat-cured resin in a conventional manner [Figure 10]

The final prosthesis was tried and adjusted in the patient's mouth and finished and polished in a conventional manner. The female part of the semi-precision attachment was attached intraorally and fixed to the prosthesis with self-cured resin [Figures 11 and 12]. The attachment distinctly improved the retention of the prosthesis.

Discussion

Palatal obturators are used for the prosthodontic rehabilitation of surgical or acquired palatal defects. A common problem associated with cast partial obturators is loss of retention over time because of the plastic deformation caused by repeated cycles of insertion and removal, which results in food lodgment and discomfort to the patient when the prosthesis is in function. [7] An attachment was indicated over the terminal abutment adjacent to the large palatal defect, and precision or semi-precision attachments may prove very useful in such cases. If the defect is large and some or all of the remaining teeth are weak, extracoronal retainers should be used. [1] Precision attachments are often expensive compared with semi-precision attachments. In the present case report, an extracoronal semi-precision attachment was used over the adjacent abutment (#22, the lateral incisor), which was periodontally sound. Moreover, these attachments are fairly economical and the retentive female component is easily replaced.

Because the prosthesis extends into the defect, the weight of the obturator invariably increases, which further compromises its retention; the weight of the obturator is therefore a major concern for the prosthodontist. Several methods have been described to overcome the difficulty of fabricating hollow bulb obturators. [8-12] Some authors have flasked the two halves separately and joined them with autopolymerizing acrylic resin, but this technique is time consuming and there is a risk of leakage at the interface of the two halves. [13,14] Previous authors have also used sugar or ice to make the bulb hollow, eliminating these materials later; [15-17] these techniques also seem cumbersome. Another, less time-consuming technique using a light-polymerized resin record base has also been tried. [18]
The technique advocated here uses a bead wax and putty index to establish the height of the hollow shim. It is a variation of previously reported techniques in which the authors used wax and an acrylic resin hollow shim; it appears to be a predictable technique because a uniform shim thickness is achieved and because it yields a single-piece prosthesis, which is always superior to a two-piece obturator. [19] The use of a beading wax and putty index improved the accuracy of the technique.

Conclusion

The fabrication of an obturator prosthesis for an oral sinonasal postsurgical defect is extremely important for the recovery of mastication, speech, respiration, and esthetics, all of which are affected by the loss of large amounts of orofacial structure, and it consequently leads to an improvement in the quality of life of these patients. An obturator should fulfill the basic requirements of adequate retention, stability, and support and, at the same time, be lightweight to prevent any discomfort to the patient. The current report concludes that the patient, rehabilitated with a semi-precision attachment and a bulbless cast partial denture incorporating a hollow occlusal rim, is very comfortable and is free from the sensation and weight of an obturator bulb. Over a 6-month follow-up period, the patient has remained very satisfied with the lightweight obturator.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his images and other clinical information to be reported in the journal. The patient understands that his name and initials will not be published and that due efforts will be made to conceal his identity, but anonymity cannot be guaranteed.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
A Connective Framework to Support the Lifecycle of Cyber–Physical Production Systems

The potential benefits of the adoption of cyber–physical production systems (CPPSs) and their significant role in enabling smart manufacturing are well recognized today. However, it is less clear how such CPPS can be most effectively and consistently engineered and maintained throughout their lifecycle, owing to the existing divide in the information technology (IT) and operational technology (OT) landscape and to ad hoc integration practices that result in inconsistent data and data models at various levels of manufacturing processes. The work presented in this article addresses this problem by envisioning a connective framework to support the engineering of CPPS through the use of a set of digital twins consistent with the real system throughout its lifecycle, not just in the design and deployment phases. A review of the latest perspectives on using digital integration frameworks, methods, and solutions for lifecycle engineering of CPPS is provided in this article. This article demonstrates how a suitable framework, named SIMPLE, can be realized to effectively address the lack of consistent data models throughout the engineering lifecycle, including implementation details and example cases developed by the authors at the Warwick Manufacturing Group (WMG) in selected industrial sectors. Consideration is given to supporting cyber-to-physical systems' connectivity and extendable engineering toolsets, forming the basis for multidisciplinary digital engineering environments. Key discussion points include the role and importance of effective integration of IT and OT and suitable frameworks for integration and collaboration.

Production systems whose elements are interconnected are more efficient, productive, and intelligent than their unconnected equivalents [1]–[3]. CPPS consists of autonomous and cooperative elements and subsystems that are connected based on the context within and across all levels of production, from processes through machines up to production and logistics networks [4]. This article explores the need for, and application of, information and communication technology (ICT)-enabled integration frameworks to support the complete lifecycle of CPPS. At the relevant lifecycle engineering phases, different tools need to be used coherently, in an integrated manner, in support of relevant digital twins. Data need to be appropriately structured, shared, and accessed. Relevant middleware needs to be provided, supporting appropriate messaging patterns and layered on appropriate communication to the sensing and actuating systems in the physical world. Such a framework needs to accommodate distributed applications (e.g., engineering, analytics, and decision-support tools), digital twins deployed to support the system's operational phases, and the physical system components. Often, these will be legacy components and tools with disparate interfaces and data formats.

II. BACKGROUND

A. Legacy and IT-OT Integration Challenges

While the promise and potential of smart manufacturing are significant, the realization of this potential is often limited by the inability to effectively integrate systems to obtain good quality data and to adapt and manage such systems through their engineering lifecycle. This is further compounded by the disparate origins of information technology (IT) and operational technology (OT) systems, often referred to as the IT-OT divide.
For example, if effective digital twins are to be constructed, information must be gleaned from multiple sources of data, e.g., production machines, real-time IoT sensors, historical sensor data, traditional manufacturing execution systems (MESs), enterprise resource planning (ERP), product/process lifecycle management (PLM) systems, and human input from domain and industrial experts. There is a failure to practically realize suitable frameworks to effectively bridge the gap between IT and OT systems, there is significant fragmentation of solutions, connectivity is poor, and the evolution of such systems is problematic [5]. The design of OT and IT systems has traditionally met specific requirements in order to serve distinctly different enterprise functions and user bases. These differences in technology, organizational culture, and function created a gulf between the OT and IT environments, creating barriers to capitalizing on the potential benefits of OT-IT convergence. However, for the paradigm of the smart factory to be fully realized, such systems need to be implicitly integrated [6]. Ciavotta et al. [7] report that, despite the success of IoT witnessed in the last decade, the adoption of IoT/CPPS deployments in manufacturing remains limited for a number of reasons, including a lack of suitable standards and recognized interoperability. Furthermore, security and real-time management issues also lead to current deployments being implemented in an ad hoc fashion and, often, limited to unidirectional data collection from the shop floor for monitoring [7]. There is typically poor lifecycle support, little reuse, fragmented connectivity, and a lack of engineering tool integration as vendor-specific partial solutions predominate. Promising developments have seen a convergence of the technologies used to implement IT and OT systems and many of the software and engineering methods [8]. This article looks from the systems' integration perspective toward the realization of such converged systems. B. CPPS and Emerging Digital Twins Cyber-physical systems (CPS) are distributed, heterogeneous systems connected via networks and are usually associated with the concept of the IoT [9], whereas CPPS mechatronic components are coupled via networks to computational entities that enable production systems to adapt when changes occur throughout their lifecycles [10]. CPPS is formed by the integration of physical and digital systems across all levels of manufacturing enterprises that collaborate to form intelligent and responsive production systems [11]. The engineering of CPPS requires cross-disciplinary collaboration, which often results in inconsistencies among models and unintentional errors in data that can lead to failures at the deployment and operational phases [12]. As described by Colombo et al. [13], the new paradigms for implementing CPPS, such as service-oriented architecture (SOA), cloud computing, IoT, big data, and the industrial Internet, need to be deeply investigated, especially in real-world operations [13]. A closely associated and now well-established concept is that of digital twins; indeed, within the business community, the metaphor of a "digital twin" is gaining popularity as a way to explain the potential of IoT-based assets and smart environments [14]. Virtual representations, referred to as digital twins, are considered as a key enabler of engineering CPPS that can help in significantly reducing the complexity of engineering heterogeneous systems. 
Digital twins are an emerging technology, which allows a systematic design, build, test, and operate approach that has significant potential to assist in system validation and prediction and making informed decisions. To develop high-fidelity digital twins multiple virtual models (models of products, processes, and resources), physical systems and manufacturing IT systems are required to be connected to form a network of data-sharing entities. Through such integration, the virtual models can be calibrated in synchronization with the related physical entity, while the physical entity can be dynamically optimized and adjusted based on the insights gained from the intelligence/analytics and simulation capabilities of virtual models. However, achieving integration of such data-intensive [16] networked objects is considered as one of the major challenges. The integration involves many layers of technology to enable data acquisition, communication, storage, and processing. Digital twins provide digital representations of the physical system (in complex or selective forms) that updates and changes as the physical-twin system changes. Progressing through the phases of the manufacturing system lifecycle, the digital or physical systems may alternately be the sources, and sinks of data as the system are, for example, being defined driven by simulation or later when the digital model of the system is calibrated based on the execution of the physical system [15]. A complex interchange of data between the participants in this system of systems needs to be supported, which we will later allude to in Section III. In 2019, the Industrial Internet Consortium (IIC) published an article on digital twin architecture and standards, proposing six sets of operations to characterize digital twin interactions within the Industrial IoT ecosystem; namely, they are discoverable, support underlying data repositories, and support event notification, the digital twin contents can be securely synchronized, and user authentication is supported. An integrated information model, separated from those representing each digital twin, forms the basis for all interactions, including design, orchestration, execution, and administration. This document provided a useful summary of digital twin features and use cases in an example lifecycle context [16] (see Table 1). The ISO 23247 project initiated in 2018 has the objective of creating a digital twin manufacturing framework [17]. The framework is composed of a set of general principles, a reference architecture, digital representations of manufacturing elements, and the identification of relevant technologies for synchronization, exchange, and management of digitally represented manufacturing twins. Various architectures that are suitable for the realization of digital twin use cases have been conceptualized. Talkhestani et al. [18] suggest that a digital twin requires three main characteristics: synchronization with the real asset, active data acquisition from the real environment, and the ability of simulation. During the operation phase within the lifecycle of a CPPS, any occurring changes in the physical system should be fed into the digital twin so that it is always synchronized to the current state of the CPPS [18]. Maintaining such models throughout the system lifecycle and ensuring consistency of data between the various applications is a significant challenge. 
From this, stems the need for the automatic detection of change, the management of interdisciplinary dependencies, and consistency checking. Various approaches have been proposed to support this change management [18], [19]. A vision for an event mechanism to accommodate such change notification is presented within the integration framework discussed in this article, and the authors extend this concept throughout the lifecycle, for example, in support of performance prediction ahead of system deployment, for analytics-based optimization during the operational phase and for what-if analysis during reconfiguration to accommodate upscaling of production (see Section IV). C. Integration Frameworks and Architectures The increasing heterogeneity and complexity of manufacturing systems have highlighted the limitations of classical architectures. The smart manufacturing applications require a migration from the traditional ISA-95 layered architecture to a more flexible and interoperable architecture [20]. An approach is needed to support the evolution of the monolithic automation pyramid into flexible architectures maintaining support for existing legacy systems with an enhanced integration strategy. Fig. 1 depicts the evolution of data-driven/smart manufacturing organizations' architecture. It highlights the importance of designing solutions that contribute to integrate physical systems, virtual/digital systems through the use of robust data models, which is the purpose of the Smart InforMation PLatform and Ecosystem for manufacturing (SIMPLE) platform [21]. The concept of services enables the information exchange and interaction between any elements of the hierarchical levels of the industrial process. Several works developed and evaluated SOA in industrial applications [22], which included the SOCRADES, IMC-AESOP, GRACE, and ARUM European research projects. There is now growing interest in microservices, an SOA variant, and an architectural style in which the applications are decomposed into simple services by offering modularity, making applications easier to develop, test, deploy, and, most importantly, to change and maintain [23]. A number of authors have suggested frameworks that might be utilized to accommodate the management of industrial data within the context of smart manufacturing. Several standards bodies, industrial consortia, and research groups have been working in the field of architectures and frameworks for Industry 4.0 [19]. These provide comprehensive implementation-independent guidelines. Notable examples include RAMI 4.0 [24], IIRA [25], and 5C [9]. Trunzer et al. [19] consider how these approaches might be consolidated into a single system architecture looking at selected implemented use cases in a number of notable projects, including IMPROVE, PERFoRM, and BaSys4.0. They consider the perspectives of architecture, middleware, interoperability, and reconfigurability [19]. Ciavotta et al. [7] describe an architecture that is used to connect different simulation tools to describe a system model [7]. Each tool usually describes different facets of the system and their level of detail differs. Brandstetter and Wehrstedt [26] explain a related cosimulation framework to couple simulation models of different engineering domains and simulation tools to save modeling effort and analyze the system's behavior and the interaction of system components within the CPPS virtually [26]. 
As noted by Soldatos [27], the Industrial Internet-of-Things (IIoT) systems also provide the means for interconnecting legacy machines with IT systems and ultimately treating them as CPPS systems. This is mainly achieved through the augmentation of physical devices with middleware that implements popular IoT protocols, such as Message Queuing Telemetry Transport (MQTT), Open Platform Communications Unified Architecture (OPC UA), and WebSocket. Overall, CPPS and IIoT systems will be at the very core of all Industry 4.0 deployments in the years to come [27]. Saqlain et al. [28] report on experimental results from a smart factory case study that demonstrates that a framework can manage the regular data and urgent events generated from various factory devices in the distributed industrial environment through state-of-the-art communication protocols. The collected data are converted into useful information, which improves productivity and the prognosis of production lines [28]. Their proposed framework contains five basic layers, physical, network, middleware, database, and application layers, to provide a service-oriented architecture for the end users. The MAYA project introduced a distributed architecture to support different distributed virtual components. The main elements of this approach are as follows: a centralized support infrastructure, a simulation framework, and a communications layer. The project targeted the design, engineering, and management of CPPS systems during all the phases of the lifecycle [29]. Nguyen and Dugenske [30] proposed MQTT-based flexible architecture for manufacturing IoT using the publish and subscribe mechanism. The approach is aimed at connecting machines and applications requiring support for multiple protocols. A clear goal within the realization of smart manufacturing is greater autonomy within more distributed architectures and frameworks. Through such autonomy, edge devices can operate independently of a central system making local decisions. Edge computing also simplifies the communication chain and reduces potential sources of error by connecting to physical assets directly and collecting, analyzing, and processing data directly. Edge devices can also directly execute operations, such as filtering and aggregating raw data, significantly reducing the need to transport a large amount of raw data to the cloud for further analysis [1], [2], [31], [32]. The research work and resulting platforms reviewed above mostly focus on the engineering of CPPS (architectures and deployment methods) and achieving connectivity with the resources and/or assets in the physical layer essential for collecting operational data during the systems' operation. There are also a number of articles published on unifying data formats and structures (e.g., AutomationML). However, no or minimum attention is given to the provision of a framework ensuring data consistency across various levels of manufacturing operations that can be shared with various engineering tools and components of CPPS. To address this gap, the key objective of the work presented in this article is the provision of a generic connective framework to achieve a tight coupling between the connectivity functions of the platform and a prescriptive processes, products, and resources (PPR) manufacturing data model. 
This ensures consistency between the operational data and the engineering data sets and digital models used to support the engineering phases, guaranteeing consistency between digital (or cyber) and physical systems, and also enables a seamless transition between the engineering and operational phases of the CPPS lifecycle (see Fig. 2).

D. Future Vision

The aim of this article is to give an insight into the SIMPLE connectivity platform and its role in supporting configurable systems that can be progressively engineered throughout their lifecycle, drawing on appropriate standards and methods. Sections II-A–II-C of this article have described some of the challenges and proposed solutions related to the realization of effective frameworks for smart manufacturing systems integration. The need to support legacy integration and the continuing, if narrowing, divide between IT and OT systems has been highlighted. The emergence of digital twins in the context of CPPS has been reviewed, and the related standardization activities have been briefly described together with relevant research. The practical realization and utilization of such smart manufacturing systems also require effective engineering methods and tools to support both their use and their continuous evolution [33]. A series of projects at the University of Warwick and previous research by the Automation Systems Group (ASG), Loughborough University, have established lifecycle engineering, integration, and connectivity methods founded on the realization of a common data model shared between engineering applications throughout the lifecycle of manufacturing automation systems [34]. Ongoing research by the ASG has seen this approach evolve, via the current SIMPLE research project, into the concept of a shared dataspace, which can be populated and accessed throughout the engineering lifecycle to enable the integration of the physical system with its digital representation(s). Section III describes the SIMPLE connective framework, which functionally integrates the cyber and physical elements of manufacturing systems throughout their lifecycle and supports the practical realization of digital twins via an open, configurable framework. To be successful, such an approach must be able to integrate disparate data sources effectively, understanding the context of their use at the various lifecycle phases from the perspectives of both the physical system and the related digital twins, in order to gain insight into, and optimize, the system's operation. The emphasis, and a key research contribution of SIMPLE, is the creation of an efficient, scalable, connective framework of minimum necessary complexity while fully contextualizing and cross-referencing data. The approach utilizes an efficient publish and subscribe integration layer for the real-time integration of digital twins with physical systems.

III. SIMPLE FRAMEWORK: MANAGING CPPS LIFECYCLE

The transition from the engineering to the operational phase marks the deployment of physical equipment on the shop floor and a significant shift in operational requirements across manufacturing organizations. The RAMI 4.0 architecture [35] and the Industry 4.0 paradigm promote a holistic approach to the design and implementation of CPPS. In particular, the "Lifecycle and Value Stream" dimension of the RAMI 4.0 model (based on IEC 62890) highlights the need to consider the evolution of operational requirements throughout the complete manufacturing systems' lifecycle. Fig.
2 illustrates the approach used to identify key operational aspects of a digital twin framework throughout the CPPS lifecycle. Existing engineering practices often result in a lack of continuity and consistency between the engineering and operational phases of a production systems' lifecycle. During the operational phase, a further dissociation exists between physical systems and the set of digital representations of those systems. The objectives of the SIMPLE framework are to facilitate: 1) the transition and maintain coupling between the engineering phase and the operational phase (through lifecycle integration) and 2) connectivity between digital systems and physical systems during their operations (CPPS integration). A. Engineering and Operational Data Models Data models (structure, content, and formats) that are used to collect, store, and manage the physical systems' digital trace (i.e., events and data generated by physical systems in operation) are not directly derived from or consistent with data models developed and used in engineering phases. This results in the building up of large operational data sets that cannot be related directly to the engineering data. This cleavage prevents operational data or digital trace from being efficiently used to enrich and refine engineering information throughout design iterations. It also indirectly contributes to the undocumented creep of engineering models as changes to the physical systems are made throughout the operational phases. Furthermore, the incompatibility between engineering and operational data models prevents the development of data-driven systems common to both the physical and digital systems (e.g., data views and information visualization, key performance indicators (KPIs) definition and representation, and analytics). The SIMPLE framework implements a skeleton PPR-centric data model that emphasizes the contextual information and metadata content required to provide connective capability between engineering and production data sets. B. IT Systems Across Lifecycle Phases IT systems deployed to support engineering and operational phases are different in nature. The storage and management of engineering data rely on complex relational data models and large centralized databases and data management systems (DBMSs) or engineering data warehouses (e.g., PLM solutions and PPR data hubs). The deployment of these systems mostly relies on direct software and database or client/server communication architectures. Conversely, the operational phase is supported by OT systems (e.g., real-time industrial control networks, controls nodes, production, and orchestration control systems) and IT-level data systems centered around the management of streams of data generated from physical systems in real time, such as time-series/historian databases, and publish/subscribe, and broker-based communication platforms. It should also be noted that the data generated by physical systems are often collected and managed by either edge-level systems (e.g., supervisory control and data acquisition (SCADA) and local data servers deployed on the shop floor) or cloud-based data-lakes and/or industrial IoT platforms (e.g., MindSphere, Predix, and Thing-Worx). In contrast to engineering data, operational data also tend to be stored in partially unstructured and often noncontextualized forms and/or with no reference to the engineering data set. An effective digital twin framework requires interfaces to all the systems described above. 
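To make this contrast concrete, the short sketch below shows how an otherwise anonymous operational log entry becomes exploitable once it carries references to engineering-side PPR records; the field names and identifiers are purely illustrative and are not taken from the SIMPLE specification.

```python
# Illustrative only: cross-referencing an operational event to engineering
# PPR records via shared identifiers (field names are hypothetical).

# Engineering-side records, as they might sit in a PPR repository or PLM system.
resources = {"R-017": {"name": "Gripper_station1", "parent": "Cell-02"}}
processes = {"P-104": {"name": "Place cell into carrier", "resource": "R-017"}}

# A raw operational record: without context it is hard to relate to the design.
raw_event = {"ts": "2021-06-01T10:12:03Z", "value": "InFault"}

# The same event, contextualized with references to the engineering data set.
contextual_event = {
    "ts": "2021-06-01T10:12:03Z",
    "resource_id": "R-017",
    "process_id": "P-104",
    "status": "InFault",
}

def describe(event: dict) -> str:
    """Resolve the engineering context of a contextualized operational event."""
    res = resources[event["resource_id"]]
    proc = processes[event["process_id"]]
    return (f'{event["ts"]}: {res["name"]} ({res["parent"]}) reported '
            f'"{event["status"]}" during process "{proc["name"]}"')

print(describe(contextual_event))
```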
The integration between the IT and OT domains is an essential element in the implementation of connected factories, digital twins, and CPPS, as it supports the communication of real-time signals, events, and data from various production, IT, and cloud-level systems (e.g., SCADA, MES, ERP, and IoT platforms).

C. SIMPLE Framework

The objectives of the SIMPLE project being conducted by the ASG at WMG, University of Warwick, are to define a connective framework and implement the associated software components in order to provide practical and functional solutions to the challenges highlighted in Sections II and III: 1) facilitating the flow of data and information between the engineering and operational phases of the CPPS lifecycle and 2) facilitating the flows of data and information between digital models (cyber) and physical systems. The SIMPLE platform development is partially funded by the Innovate UK Manufacturing Made Smarter, Industrial Strategy Challenge Fund (ISCF) round 1 program, and its objective is to stimulate the development of manufacturing-specific but cross-sector and cross-industry digital capabilities. As such, the SIMPLE framework specification and the SIMPLE platform implementation focus on core capabilities that can be used as-is or adapted effectively to a large set of use case applications, organization sizes, and IT systems. The SIMPLE platform implementation targets a low-complexity, low-overhead (in terms of implementation and deployment time, skills, resources, and technologies), manageable, and scalable platform. It should be noted that the data transport and communication architecture of the SIMPLE platform is aimed at supporting the logging and communication of state and status change information to enable soft-real-time synchronization between physical equipment (e.g., controllers and automation components) and their digital counterparts (e.g., digital models, production information, and management tools).

The term connective is used instead of integrative or integration (the act of combining into an integral whole) to place emphasis on the project's aim, which is to implement a framework that does not inherit the complexity of the systems that it integrates. Software systems integration often results in highly complex integration components or integrated engineering solutions, whose complexity increases exponentially with the number and/or complexity of systems. Alternative approaches focus on defining common data and/or functional models that capture all aspects of the systems between which integration is required [36]. While such approaches potentially allow a reduction in the software integration overhead, as common data models are used for information exchange between systems, they often result in either excessively complex data structures and repositories or narrow and domain-specific solutions.

The following design guidelines were defined to guide the SIMPLE framework functional definition, design, and implementation. The guidelines focus on reducing the overall complexity of the platform (1 and 6), enabling digital/virtual connectivity (2, 4, and 5), and accommodating IT implementation constraints (3).

1) The information required to cross-reference engineering data, digital models, and physical systems should be defined by a set of data models focusing on contextual and metadata information.
2) The real-time synchronization of multiple digital models (i.e., composite digital twins), and of digital twins with the physical systems, can and should be supported by specific event and messaging models.
3) Client/server and web-service-based architectures and request/response communication models are not suitable for the real-time integration of digital twins and physical systems. Publish/subscribe models should be favored in order to ensure the deployability and scalability of digital twin capabilities.
4) A digital twin platform should implement connectivity to both the OT and IT layers. As such, digital twin platforms should be implemented as part of the IT and OT layers and should promote the implementation of connectivity (protocol translation), communication/events, and information processing and transformation as close to the edge as possible.
5) A digital twin platform should promote the connectivity of physical systems with existing digital models and the connectivity between existing digital models in order to avoid the multiplication of models and duplication of data.
6) In order to achieve connectivity between multiple software solutions and physical systems, low-level but complete, contextualized, and cross-referenced data sets yield more value than detailed but fragmented and incomplete data sets.

Details of the SIMPLE software platform functional specification and implementation are provided in Section IV.

IV. SIMPLE PLATFORM IMPLEMENTATION

The SIMPLE platform implements several software components whose functionalities are related to:
1) simple connective PPR-centric relational data models as links between engineering and operational data (V-core);
2) simple event models and logging of real-time operational data (V-log);
3) a connectivity component to the OT and the organization IT or cloud layers, supporting both protocol translation and edge data processing capabilities (V-hub); and
4) a publish/subscribe (MQTT-based) communication platform (V-com) (see Fig. 3 for details).

The core SIMPLE platform components are containerized (Docker implementation) to enable rapid and consistent deployment on a variety of IT platforms. Unlike typical IoT or IIoT platforms, some core components of the SIMPLE platform (i.e., V-hub) are designed and implemented to enable deployment on the edge, at the organization IT level, or in the cloud. The platform also implements software applications to support the configuration, deployment, and administration of the platform components (Admin). Future development phases will include the development of an application ecosystem, which is not discussed or described further in this document. Sections IV-A–IV-E provide information on each of the SIMPLE platform's components and applications.

Fig. 4 illustrates the approach underlying the design of the SIMPLE data and event models. Key requirements of the SIMPLE platform are to: 1) provide concise models to capture relationships between PPR data; 2) capture events from both physical and digital systems; and 3) enable synchronization of digital models describing manufacturing systems at various levels of hierarchy (e.g., factory, line, cell, and component levels).

A. Generic Process Model

In the generic example shown in Fig. 4, specific processes are mapped to elements of a resource hierarchy (see V-core implementation details in Section IV-B), and process-related events (e.g., resource status change) are logged by the V-log component (see details in Section IV-C).
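As a minimal illustration of this mapping (not the actual V-core schema), the sketch below represents a small resource hierarchy, attaches a process to a resource, and rolls component-level events up to the station that owns them, which is the kind of relationship the concise PPR model needs to capture; all identifiers are invented for the example.

```python
# Minimal, illustrative PPR-style mapping (not the V-core schema): a resource
# hierarchy, a process attached to a resource, and component-level events that
# can be rolled up to station level for KPI calculation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    rid: str
    parent: Optional[str] = None       # parent resource in the hierarchy

@dataclass
class Process:
    pid: str
    resource: str                      # resource that executes the process

resources = {
    "line1": Resource("line1"),
    "asm_station": Resource("asm_station", parent="line1"),
    "pick_place_robot": Resource("pick_place_robot", parent="asm_station"),
}
processes = {
    "pick_and_place": Process("pick_and_place", resource="pick_place_robot"),
}

# Component-level events as they would be logged (timestamp, resource, status).
events = [
    ("10:00:00", "pick_place_robot", "active"),
    ("10:00:07", "pick_place_robot", "idle"),
]

def station_of(rid: str) -> str:
    """Walk up the resource hierarchy until the station level is reached."""
    while resources[rid].parent not in (None, "line1"):
        rid = resources[rid].parent
    return rid

# Roll component-level events up to the owning station.
for ts, rid, status in events:
    print(ts, station_of(rid), status)
```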
The concise model was used as a basis to develop the SIMPLE platform functionalities, as it enables essential production KPI calculation and production performance analysis. It also allows information defined at various levels of detail to be mapped; in Fig. 4, even if not explicitly defined (white process bar), the assembly station process can be inferred from the component-level pick-&-place events and the process/resource relationships, which would allow a line-level discrete event simulation (DES) model to be synchronized with a kinematics-level simulation, for instance.

B. V-Core PPR Data Models

The V-core component implements a relational, PPR-centric skeleton data model that focuses on capturing the structure of physical resources and products and their relationship to specific processes. The V-core data model aims at providing a generic model applicable to a wide range of applications. The model was used to implement PPR modeling for the construction industry (i.e., structural insulated panel assembly) and for battery and seat manufacturing for the automotive industry, and it is generally applicable to most discrete manufacturing applications. Fig. 5 provides details of the key tables describing the V-core relational data model. Tables related to the V-core component configuration and administration (e.g., user and rights management, versioning, and change management) have been omitted. A screen grab of the V-core database UI, which can be used to manually edit the PPR information, is also included in Fig. 5. V-core implements a REpresentational State Transfer (REST) over hypertext transfer protocol (HTTP) application programming interface (API) that allows external applications to retrieve, populate, or update the database content using JavaScript Object Notation (JSON)-formatted payloads. V-core also implements an MQTT interface and can both publish and subscribe to the V-com broker component (see Section IV-D); for instance, if changes are made to the PPR data using the V-core REST API (e.g., addition/deletion of resources or remapping of PPR relationships by an external application), V-core will publish an event to inform other SIMPLE platform components or external subscriber applications. As a subscriber, V-core can also receive PPR-related change events and update the PPR information accordingly. Examples of practical use cases of the V-core data model, including its integration with the vueOne virtual engineering solution and the integrated manufacturing and logistics (IML) battery module assembly line at WMG, are provided in Section V.

C. V-Log and SIMPLE Event Model

The V-log component is a containerized time-series database, currently implemented using InfluxDB. V-log implements both: 1) an MQTT interface (to V-com; see Section IV-D) to publish and subscribe to specific events defined by the SIMPLE framework and 2) a REST API that allows the retrieval of logged events by SIMPLE components or third-party applications. The SIMPLE event and messaging model defines two types of events that are logged by the V-log component (see Fig. 6): 1) changes of content and/or PPR relationships in the V-core data, which allows complete traceability of system design and configuration changes made throughout a system lifecycle, and 2) changes of resources' status published by physical or digital systems in real time. The status is defined by the source node itself. Examples of status types for a manufacturing asset are active/inactive, idle, blocked/starved, in fault, and so on.
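To give an indication of what such a status-change event might look like on the wire, the sketch below builds a JSON payload of the kind a publishing node could emit; the field names are illustrative only, and the normative message structure is the one defined by the SIMPLE event model (Fig. 6).

```python
# Illustrative StatusChange-style event. The field names are not normative;
# the authoritative structure is defined by the SIMPLE event and messaging model.
import json
from datetime import datetime, timezone

status_change = {
    "node_id": "vhub-cell02-01",               # publishing node (unique node ID)
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "resource_id": "Gripper_station1",         # resource defined in V-core
    "process_id": "P-104",                     # associated V-core process
    "status": "InFault",                       # status defined by the source node
    "payload": {"fault_message": "vacuum not reached"},   # free-form extra data
}

print(json.dumps(status_change, indent=2))
```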
All messages contain a reference to the publisher node (unique node ID). A node is defined as any physical asset (e.g., an IoT device, an edge device, or an OPC UA server) or digital asset (e.g., a simulation model/environment) that connects and publishes to the SIMPLE platform's V-com broker via a V-hub instance. Events are expected to be time-stamped by the source node and, if not, are time-stamped by the V-com broker. Fig. 6 provides the message structure for data and status-related change events. Other events relate to the management of the SIMPLE communication platform (e.g., node connection/disconnection and MQTT Last Will and Testament) and are not detailed in this document. The messages related to resources' status change events include a reference to the resource itself and a reference to one of the processes associated with it in the V-core database. The payload field can be used to pass additional data and information in any format and structure (e.g., a plain, JSON, or eXtensible Markup Language (XML) formatted string, a numerical value, or a binary object) that the subscriber node can interpret. For instance, the payload field can be used to return an image after an inspection process is complete, the results of a measurement, logs generated by controllers, and so on.

D. V-Com Publish-Subscribe Broker

The V-com component is an MQTT publish/subscribe message broker built on top of the Eclipse Mosquitto MQTT platform. V-com is configured by default to support MQTT Quality of Service (QoS) level 2 (a four-step handshake guarantees delivery of messages exactly once), as the additional communication overhead is acceptable given the purpose of the SIMPLE platform. V-com is also configured by default to retain messages for all topics, which allows new subscriber nodes to obtain past messages. The V-com component implements two main topics (the term used to describe the MQTT message hierarchy): 1) messages related to changes in the V-core database and 2) messages related to real-time communication with live physical or digital systems (i.e., data change and status change events). Subtopics are defined using the unique IDs of resources as defined in the V-core database (see Section IV-B). The list of topics can be retrieved by new publisher/subscriber nodes using the V-core REST API. V-com implements a message content and structure validation procedure that ensures that all messages are formatted according to the SIMPLE event and messaging model (see the above section). Any messages whose content or structure is not compliant with the V-log event model are discarded in order to preserve the consistency of information within the SIMPLE-defined namespace.
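The sketch below shows how an external application might subscribe to such events through an MQTT broker like V-com; it assumes the Eclipse paho-mqtt client (v1.x callback API), and the broker address and topic names are placeholders rather than the actual SIMPLE topic definitions.

```python
# Illustrative V-com subscriber using Eclipse paho-mqtt (v1.x callback API).
# The broker address and topic names are placeholders, not the SIMPLE defaults.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "vcom.example.local"    # hypothetical V-com broker address
STATUS_TOPIC = "simple/status/#"      # hypothetical status-change topic tree

def on_connect(client, userdata, flags, rc):
    # QoS 2 matches the exactly-once delivery configured on V-com by default.
    client.subscribe(STATUS_TOPIC, qos=2)

def on_message(client, userdata, msg):
    event = json.loads(msg.payload.decode("utf-8"))
    print(f"{msg.topic}: {event.get('resource_id')} -> {event.get('status')}")

client = mqtt.Client(client_id="des-model-subscriber")
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()   # retained messages are delivered once subscribed
```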
E. V-Hub Connectors and Related Software Components

V-hub is the component that can be configured to achieve real-time connectivity with systems and applications deployed at the IT and IT/OT levels of manufacturing organizations (e.g., MES, engineering databases and web servers, modeling and simulation environments, and OPC UA servers). V-hub instances support two key functions.
1) Protocol Translation: From the communication protocols used by external systems to the MQTT-based SIMPLE communication space; the library of connectors that V-hub instances can currently implement comprises HTTP client, socket/WebSocket, OPC UA client, Modbus, and MQTT, and it will be extended based on emerging use case requirements.
2) Data Transformation: Data transformation is supported by the rule engine implemented as part of each deployed V-hub instance.

Rule engines can be programmed to apply specific data processing and transformation. The primary objective of data transformation is to generate an output message whose structure and information content comply with the SIMPLE V-log messaging format (see Section IV-C). In addition, the information contained in the input message can be processed (e.g., numerical calculations, string processing, and format translation). A typical example of a rule implemented by V-hub instances aims at mapping OPC UA channels and tags to a resource's status change as defined by the V-log StatusChange messaging model (e.g., if tagA is True and tagB is False, then output the string array ["OPC_serverA", timestamp, "Gripper_station1", "", "status=InFault", "fault_message"]); a sketch of such a rule is given at the end of this section. The rule engine implements an event log that allows asynchronous input messages to be processed within the same rule. The latest message from a given source is logged; the rule engine then parses the rules library and executes the rules that contain references to the input node.

The design and implementation of the SIMPLE platform connectivity components differ from typical IoT platform implementations; IoT platforms (e.g., Siemens MindSphere, GE Predix, and Fujitsu RICE) typically implement both protocol translation and data transformation as centralized functionality deployed on cloud/server systems (e.g., Node-RED-based platforms such as Siemens MindSphere). However, such approaches result in high server load and bandwidth utilization across networks and can cause interruption of operations and loss of data if connectivity is lost. Industrial edge solutions are emerging (e.g., Ignition Edge and Fujitsu IntelliEdge) that provide both hardware and software for edge-level connectivity, protocol translation, and data transformation, as well as local caching of messages and data. The SIMPLE platform promotes a similar approach (i.e., deployment of protocol translation and data transformation as close to the IT/OT edge as possible) but differs in two fundamental aspects.
1) The SIMPLE implementation reinforces a specific information model (message information content and structure) aligned to the V-core data model. This allows us to ensure that all information communicated within the SIMPLE namespace is contextualized and consistent with the data stored in V-core.
2) Every instance of the V-hub connector is deployed as a stand-alone, self-contained service on the targeted device, resulting in fully distributed protocol translation and data transformation capabilities.

A single V-hub component can implement connectivity to one or more nodes and implement one or more rules, and one or more V-hub services can be deployed on a single device. It should also be noted that V-hub services can be deployed on server systems if required. The V-hub component is implemented using Python, and V-hub connectors use available open-source Python libraries. V-hub can, therefore, be deployed on any platform that can run a Python interpreter. These combined capabilities provide a significant level of flexibility in structuring and deploying connectivity across a variety of IT and OT system architectures and configurations.
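To give a feel for such a rule outside the V-map editor, the sketch below expresses the tag-to-status mapping quoted above as a plain Python function of the kind a V-hub rule engine might execute; the tag names and output fields simply mirror the illustrative rule in the text and are not taken from a real deployment.

```python
# Illustrative V-hub-style rule: map incoming OPC UA tag values (already
# protocol-translated into a Python dict by a connector) onto a
# StatusChange-like record. Tag names and fields mirror the example rule above.
from datetime import datetime, timezone
from typing import Optional

def gripper_fault_rule(tags: dict) -> Optional[list]:
    """Return a status-change record when the fault condition is met."""
    if tags.get("tagA") is True and tags.get("tagB") is False:
        return [
            "OPC_serverA",                                    # source node
            datetime.now(timezone.utc).isoformat(),           # timestamp
            "Gripper_station1",                               # resource
            "",                                               # process (unused)
            "status=InFault",                                 # new status
            tags.get("fault_message", "unspecified fault"),   # payload
        ]
    return None   # rule does not fire; nothing is published to V-com

# Example input, as a connector might deliver it after protocol translation.
print(gripper_fault_rule({"tagA": True, "tagB": False,
                          "fault_message": "vacuum not reached"}))
```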
A software application that implements two functional modules (V-map and V-gen) has been developed to support the configuration and deployment of V-hub instances. The V-map software is used to: 1) select and configure the connectors (e.g., OPC UA client) required to achieve connectivity with IT- and OT-level nodes (e.g., OPC server, MES, and DES) and 2) implement the rules to apply to incoming messages/events (i.e., data transformation). The current implementation of the rule editor is essentially a simple Python integrated development environment (IDE). However, future implementations will focus on the design and implementation of use case or customer-specific rule editors with more refined capabilities (e.g., a rules management library, rule templates, and advanced UI/UX). Once the required connectors have been selected and configured and the rules have been defined, the V-gen software environment is used to compile the connectors, rule engine, rule definitions, and event logger code (into bytecode), as well as a simple service shell that allows the V-hub instance to be administered (e.g., retrieval of versioning information, retrieval of the event log, and live monitoring of input/output messages and events). The compiled code can then be deployed on the targeted devices and executed to support real-time operations. This approach mirrors the automatic PLC control code generation developed by the ASG as a part of the vueOne virtual engineering solution [33].

V. USE CASES AND THEIR IMPLEMENTATION

A. Integrated Manufacturing and Logistics Demonstrator

A full-scale IML demonstrator installed at the Warwick Manufacturing Group (WMG) is used to demonstrate example use cases in this article (see Fig. 7). This system showcases Industry 4.0 methods and encompasses both new production systems and legacy equipment within a series of advanced manufacturing scenarios, and it is being used for both research and training with a range of industrial partners. The IML is dynamically adaptable, modular, and reconfigurable; hence, the application can be progressively changed as new requirements emerge. Machine stations can be exchanged physically and also virtually, i.e., new virtual station models can be swapped in (and out) in place of physical stations. The demonstrator is currently configured to carry out a battery submodule assembly demonstration as a part of an Innovate UK and HVM Catapult-funded project. The product assembly consists of 18650 and 26650 form-factor cylindrical cells assembled into submodules and modules incorporating bus bars and an integrated cooling system. The IML features an MES, three autonomous guided vehicles (AGVs), an AGV fleet manager, and control systems and automation equipment from leading vendors, e.g., Siemens, Rockwell Automation, ABB, Mitsubishi, and Festo. It aims to provide a full-scale demonstrator for new manufacturing automation methods, tools, and technologies, with the objective of supporting the entire lifecycle, e.g., enabling digital validation, verification, and visualization, control code generation, and cloud-based engineering services. The demonstrator system has been implemented to support a combination of legacy and agile systems: stations connected through a traditional conveyor-based system, stand-alone stations, distributed warehousing, and an AGV-based autonomous logistics system for pallet transportation and line-side component supply.
The integration of intralogistics and assembly and the use of distributed warehousing offer the potential to minimize disruptions due to production abnormalities and to reduce non-value-adding activities, within adaptable processes and under dynamic changes in product variety and volumes, while maintaining efficiency. This section provides use case examples to demonstrate how the SIMPLE platform is used to achieve integration between various software tools and physical components at the design, deployment, and production phases of the IML. The use cases provided illustrate how such connectivity allows: 1) improved virtual validation by integrating virtual engineering tools with the MES and physical components, such as programmable logic controllers (PLCs); 2) improved accuracy of models by calibrating data models during production through the integration of virtual models and physical systems; and 3) improved optimization of scheduling in (soft) real time by integrating DES tools with the production system to resimulate and reschedule internal logistics in case of any abnormalities during production.

B. Digital-Digital Integration

Digital-digital integration is carried out using the PPR data model of V-core (see Fig. 8). PPR data model-based integration not only allows the reuse of information but also helps in enforcing version control and keeping the digital models up to date, thus eliminating discrepancies. At the design stage, two types of digital twins are developed (i.e., line level and station level) to design and simulate the IML. The line-level model of the IML is developed in the DES tool Witness, which offers a detailed analysis of the overall line-level process and intralogistics. Data and real-time connectivity between the vueOne and Witness digital models are achieved using the SIMPLE platform. Station-level models are developed in the vueOne virtual process planning software. vueOne is a 3-D kinematic-level process planning tool that offers detailed modeling and simulation capabilities for automatic, semiautomatic, and manual operations. Various types of sensors and actuators, as well as manual operations, can be realistically modeled and simulated. Details of vueOne capabilities are reported in [33]. Once the station models are validated, the station-level information (sequence of operations, cycle time, details of machine components, and their physical interface mapping) is used to update the process definition in the V-core data repository via the V-core HTTP API. The V-core data model holds a replica of the entire line-level information and its hierarchy (i.e., areas, zones, stations, systems, and components). Station-, system-, and component-level information of the IML is imported from vueOne, whereas area- and zone-level information is defined manually in V-core. Once the area- and zone-level information is defined, the complete line-level information can be retrieved by the DES model using the V-core HTTP API interface. After carrying out the integration, both station- and line-level models can subscribe to changes in the data model and can be dynamically updated if an update is pushed from V-core or from any of the client sides. Work is currently being carried out to further extend V-core-based integration to enable the integration of the MES and the AGV fleet manager. This will significantly help in validating and optimizing the performance of the overall system before deployment.
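A sketch of how a DES client might retrieve the line hierarchy from V-core and stay synchronized with subsequent PPR changes is given below; the REST endpoint path, topic name, and host names are assumptions made for illustration, since the public V-core API is not detailed in this article.

```python
# Illustrative digital-digital integration client: pull the line-level
# hierarchy from V-core over REST, then listen for PPR change events so a
# DES model can be refreshed. Endpoints, topic, and hosts are hypothetical.
import json
import requests
import paho.mqtt.client as mqtt

VCORE_API = "http://vcore.example.local/api"     # hypothetical V-core base URL
CHANGE_TOPIC = "simple/ppr/changes"              # hypothetical PPR change topic

def load_hierarchy() -> dict:
    """Fetch areas, zones, stations, systems, and components from V-core."""
    response = requests.get(f"{VCORE_API}/resources", timeout=10)
    response.raise_for_status()
    return response.json()

def on_ppr_change(client, userdata, msg):
    change = json.loads(msg.payload.decode("utf-8"))
    print("PPR change received, reloading hierarchy:", change.get("type"))
    hierarchy = load_hierarchy()
    # ...rebuild or patch the Witness/DES model inputs from `hierarchy`...

hierarchy = load_hierarchy()                     # initial model population
client = mqtt.Client(client_id="witness-des-sync")
client.on_message = on_ppr_change
client.connect("vcom.example.local", 1883)
client.subscribe(CHANGE_TOPIC, qos=2)
client.loop_forever()
```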
C. Digital-Physical Integration

For CPPS engineering, it is vital to have digital-physical integration and real-time data synchronization between virtual models, physical equipment, and manufacturing IT systems so as to closely align them over the operational phase. This results in new application areas for modeling and simulation technologies beyond the design phase. Example use cases are presented in the following.

1) Virtual Commissioning: During the design stage, digital-physical integration is used to carry out virtual commissioning. Fig. 9 shows the virtual commissioning setup of a stand-alone welding station used to validate control software in a virtual environment. The communication between the vueOne model and the station PLC is achieved either by using native communication drivers (e.g., S7comm) or through the V-hub component and the OPC UA connector. The input and output signals of the S7-1500 PLC are mapped to the respective sensor and actuator components of the virtual model. This integration could be further extended to include connectivity with the MES to test control software in a more realistic environment, thereby avoiding costly changes to the software afterward.

2) Data Model Calibration: During the production phase, V-com is used to collect PPR data from the physical system in real time and store them in V-log, the time-series database (see Section IV-C). Data model calibration is performed once sufficient data have been collected from the physical equipment (e.g., average cycle time). The calibrated data are then pushed to the vueOne and DES models to bring the digital models in line with the performance of the physical systems [e.g., the process time data field in V-core's process table (see Section IV-B)]; a sketch of this calibration step is given at the end of this section.

3) Dynamic Optimization of AGV Fleet Management: In this use case, the SIMPLE platform is used to integrate the MES, the fleet manager, and the physical equipment with the DES model and a service module, the Smart AGV Management System (SAMS) [37], to carry out research work on dynamically optimizing line-side supply and pallet transportation in real time based on equipment status and production demand. SAMS carries out optimization using mixed-integer nonlinear programming (MINLP) solved with a genetic algorithm (GA), based on demand information, real-time resource status information, and DES simulation output. Details of SAMS are not in the scope of this article. Both historic and live data are provided to the SAMS module using V-log and V-com. The optimization study is performed using historic data to better understand the consequences of production abnormalities and to optimize the schedule to minimize their effects. In the case of an abnormality being detected or a change in production demand, simulation and analytics-based optimization is carried out, and rescheduling instructions are released to the MES as a response action to optimize throughput. A consequent DES and AGV fleet management simulation is then performed to study the impact on the schedule. The impact is measured and reflected through KPIs, such as overall equipment effectiveness and build-to-schedule. When implemented in real time, such an optimization approach will offer a step-change in the adaptability of manufacturing systems.
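The data model calibration step described above could look roughly like the following, where logged events are retrieved from V-log, an average cycle time is computed, and the corresponding V-core process record is updated; all endpoint paths and field names are assumptions made for the sake of the example rather than the documented SIMPLE API.

```python
# Illustrative data-model calibration: read logged cycle times from V-log,
# compute an average, and write it back to the V-core process record so that
# the vueOne and DES models can be brought in line with the physical system.
# Endpoint paths and field names are assumptions, not the documented API.
import statistics
import requests

VLOG_API = "http://vlog.example.local/api"      # hypothetical V-log REST API
VCORE_API = "http://vcore.example.local/api"    # hypothetical V-core REST API
PROCESS_ID = "P-104"                            # process to calibrate

# 1) Retrieve completed-cycle events for the process from the time-series log.
events = requests.get(
    f"{VLOG_API}/events",
    params={"process_id": PROCESS_ID, "status": "cycle_complete"},
    timeout=10,
).json()

# 2) Compute the observed average cycle time (seconds).
cycle_times = [e["cycle_time_s"] for e in events]
average_cycle_time = statistics.mean(cycle_times) if cycle_times else None

# 3) Push the calibrated value into the V-core process table; V-core then
#    publishes a change event so that subscribed digital models can update.
if average_cycle_time is not None:
    requests.patch(
        f"{VCORE_API}/processes/{PROCESS_ID}",
        json={"process_time_s": round(average_cycle_time, 2)},
        timeout=10,
    )
```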
SIMPLE is an efficient, scalable, connective framework for manufacturing systems, of minimum necessary complexity, while fully contextualizing and cross-referencing data. The approach proposed utilizes an efficient publish and subscribe connective platform for the real-time integration of digital twins with physical systems. Through the contents of this article, the authors have attempted to highlight the key and unique capabilities of the SIMPLE connectivity platform, which are the result of the tight and consistent integration between data/information collection and management capabilities (typically provided by IIoT platforms) and manufacturing centric PPR data models used throughout the data pipeline to ensure consistency and quality of the resulting data sets. This article has highlighted the need to extend the ISA-95 manufacturing pyramid via an enhanced flexible integration strategy. The role of a services-oriented approach to integration is considered, with a growing trend toward the utilization of microservices in this context. The importance of digital twins, their characteristics, features, and example use cases is considered through a focused review of contributions and selected relevant research projects and initiatives. The need to comprehensively support connectivity in both the engineering and operational phases and aspects of the smart manufacturing system is highlighted, including the event model and its support for change in the context of the evolution of manufacturing systems, both from a PPR perspective, and its real-time status traceability and synchronization. A full-scale IML demonstrator installed at WMG is used to demonstrate example SIMPLE use cases related to digital-to-digital and digital-to-physical integrations. The SIMPLE connectivity platform aims at providing data-centric integration capabilities. As such, the evaluation of the benefits in terms of increased operational effectiveness (e.g., productivity) cannot be carried out through the measurement of direct production KPIs for instance. Instead, the improvement targeted by the presented research is to provide a systematic, prescriptive, and robust data collection platform to ensure that the data collected are consistent, complete, and contextualized. The improvements targeted are reduced data cleansing and curating time and effort, reducing to zero the collection of incomplete data sets and/or data sets with inconsistent data/information structure and formats. Complementary and further research phases will focus on comparatively assessing as-is and SIMPLE-based data workflow within an organization and evaluating differences in data processing steps and resource allocation (time-, manual-, and software-based data processing workflows) to obtain data sets of similar quality (content, structure, and format). Future development plans also include the integration of quality and completeness indicators as part of the SIMPLE platform administrative tools, which will inform users on the status and quality of their data collection relative to the benchmark defined by the prescriptive SIMPLE PPR data model. Future development phases of the SIMPLE project will also focus on both refining existing functionalities and expanding the capabilities of the platform while remaining consistent with the vision of providing low complexity and highly maintainable and deployable solution. 
The development of out-of-the-box advanced analytics capabilities based on the V-com and V-log data content would add significant value to the SIMPLE platform as a production analysis platform. Similarly, dynamic advanced PPR data visualization and KPI dashboarding can be implemented using the information and events currently managed by the SIMPLE platform components. The implementation and continuous development of the V-hub connector library are important aspects of platform development. The integration of new protocols and approach at both the OT and IT/OT layer (e.g., support for data distribution service (DDS), OPC PubSub, OPC time-sensitive networking (TSN), IEC 61499, and distributed control architectures) in the SIMPLE specification will provide new opportunities for real-time connectivity to OT-level systems. At the IT level, MQTT multibroker bridging will be investigated as such capability would directly impact the scalability of the platform, as well as its maintainability. Full deployment and testing of the SIMPLE platform is envisaged at the WMG Energy Innovation Centre and the UK Battery Industrialisation Centre (UKBIC) battery manufacturing and testing facilities, with the objective of virtually validating production campaigns for a variety of product configurations (e.g., battery cells, packs, and modules). The UKBIC case study will provide an example use case in a state-of-the-art smart factory, where the deployment and use of digital twins will be critical to achieve demanding production and business objectives.
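The PPR-centric integration pattern described in Section V-B can be pictured with a short sketch. The V-core endpoint paths, payload fields, broker host, and topic name below are illustrative assumptions only; the article does not document the actual SIMPLE interfaces.

```python
# Minimal sketch of the PPR-centric integration pattern: push a validated
# station definition to V-core, pull the line hierarchy, and subscribe to
# data-model change events so a local replica stays in sync. Endpoints,
# payload fields, broker host, and topic are hypothetical placeholders.
import json
import requests
import paho.mqtt.client as mqtt  # paho-mqtt >= 2.0

VCORE_API = "http://v-core.example.local/api/v1"   # hypothetical base URL

def push_station_definition(station: dict) -> None:
    """Update the process definition after a vueOne station model is validated."""
    requests.post(f"{VCORE_API}/stations", json=station, timeout=10).raise_for_status()

def fetch_line_hierarchy() -> dict:
    """Retrieve the line-level hierarchy (areas > zones > stations > systems > components)."""
    resp = requests.get(f"{VCORE_API}/hierarchy", timeout=10)
    resp.raise_for_status()
    return resp.json()

def on_model_change(client, userdata, msg):
    # A DES or vueOne client reacts to pushed PPR updates instead of polling.
    change = json.loads(msg.payload)
    print(f"PPR update for {change.get('element')}: {change.get('fields')}")

broker = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
broker.on_message = on_model_change
broker.connect("v-hub.example.local", 1883)
broker.subscribe("simple/vcore/ppr/changes")       # hypothetical topic
broker.loop_start()

push_station_definition({"name": "welding_station_01",
                         "cycle_time_s": 42.0,
                         "operations": ["load", "clamp", "weld", "unload"]})
print(fetch_line_hierarchy())
```

In this pattern, any client holding a replica of the PPR model stays synchronized by subscribing to change events rather than repeatedly polling the repository.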
2021-03-27T14:06:36.300Z
2021-01-13T00:00:00.000
{ "year": 2021, "sha1": "035f980444cebdca030c81025cfcadf34841d479", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/5/9383786/09321473.pdf", "oa_status": "HYBRID", "pdf_src": "IEEE", "pdf_hash": "035f980444cebdca030c81025cfcadf34841d479", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
255543589
pes2o/s2orc
v3-fos-license
Diagnostic utility of pediatric epilepsy monitoring unit: Retrospective single center study Objective: To evaluate patients with drug-resistant epilepsy (DRE) who have persistent seizures despite the use of standard antiepileptic drugs, and to investigate, in a single center, how often admission to the Epilepsy Monitoring Unit (EMU) resulted in a definitive diagnosis. Methods: This was an observational retrospective study of 323 children admitted to the EMU for evaluation between 2012 and 2020. Results: Of the 323 patients, 168 (52.01%) were males. The most common reasons for referral to the EMU were better characterization of events, 91 (28.17%), and pre-surgical evaluation, 56 (17.3%). Of the participants, 273 (84.5%) had seizures one to two times per day. At discharge, 75.5% of admissions received a definitive diagnosis. Conclusion: EMU admission of pediatric epilepsy patients is very important for early, accurate diagnosis and for surgical management of those considered to have DRE. The first epilepsy monitoring unit (EMU) in the eastern region of Saudi Arabia was established in 2010 for the delivery of specialized care, teaching, and research. The International League Against Epilepsy and the local health ministry state that comprehensive epilepsy care is essential for a definitive diagnosis in adult and pediatric patients with drug-resistant epilepsy (DRE). 1 A previous study reported a national epilepsy prevalence of 6.54/1000, with 20,000-30,000 new cases reported each year. 2 Epilepsy has a major social and psychological impact on patients and on society, so proper and accurate diagnosis at an early stage is important to control the disease and provide good care to patients. 3,4 Early seizures are usually reported to and treated by a general neurologist, which is challenging because disorders that mimic epilepsy may be misdiagnosed. 1,5 The EMU center follows international guidelines for the referral of epilepsy patients whose seizures are not controlled with treatment. 5 A multidisciplinary team evaluates these patients with video-electroencephalogram (EEG) monitoring and establishes the diagnosis and therapeutic regimen (medical or surgical). 1,5,6 The objective of this retrospective review was to investigate pediatric patients admitted to an EMU for a final diagnosis and to determine whether admission changed the initial diagnosis or treatment option. Methods. This observational retrospective study included patients between the ages of 0 and 18 years who were admitted for clinical evaluation to the Epilepsy Monitoring Unit (EMU) at King Fahad Specialist Hospital Dammam (KFSHD) between 2012 and 2020. The study was approved by the Institutional Review Board of the KFSHD and was conducted according to the ethical standards declared by Helsinki in 2020. Excluded from the study were records of patients older than 16 years of age or discharged for nonmedical reasons (eg, personal matters). Study variables. The variables selected for the present study were based on a previous similar study and the NAEC guideline. 1,7 The following data were collected: demographics (age, sex), date of admission, date of discharge, number of days admitted, seizure semiology, seizure type, seizure duration, EEG data, neuroimaging findings on brain MRI, discharge diagnosis, medication regimen at discharge, follow-up plan, and referrals. The variable "definitive diagnosis" was rendered at discharge by the attending epilepsy specialist on the basis of video-EEG data, neurologic diagnostic tests, other diagnostic tests, and specialist consultation. 
Four categories of definitive diagnosis were specified. A definitive diagnosis of epilepsy was established when the patient demonstrated symptomatology typical of the reason for admission together with corresponding EEG abnormalities. 5,6 The category "unable to determine definitive diagnosis" was rendered by the attending epileptologist at discharge when diagnostic tests and consultation did not provide evidence of epileptic events. EMU Protocol. Nihon Kohden machines in the EMU were used for recording video-EEG in long-term monitoring studies. IMPAX software was used for obtaining the neuroimaging results and reports. MRI was performed on a 1.5 Tesla GE Signa Excite HDxt unit before June 2014, after which the scanner was upgraded to a 3 Tesla Siemens unit using a standard epilepsy protocol. The video-EEG monitoring setup was used for continuous digital recording of the patients admitted to the EMU. Statistical methods. Data analysis was performed using the IBM Statistical Package for the Social Sciences for Windows, version 21 (IBM Corp., Armonk, N.Y., USA). Categorical data were analyzed by frequencies and percentages of occurrence. Continuous variables were analyzed using the median and interquartile range or the mean and standard deviation. All analyses were performed at a significance level of p<0.05. Results. By 2020, there had been 323 admissions to the EMU, and data were analyzed after applying the exclusion criteria. The mean patient age was 86.0 months (range, 0-202 months; Table 1), and 47.5% of the patients were female. A family history of epilepsy was reported in 108 patients (33.4%) and absent in 182 (56.6%). Parental consanguinity was reported for 119 patients (36.8%), while the parents of 113 patients (34.9%) were not related. The primary reason for admission is described in Table 2. At discharge, 97.8% (n=319) of admissions were given a definitive diagnosis. The frequency of each diagnostic category is summarized in Table 3. Discussion. The main objective of a good epilepsy care program is to provide an accurate diagnosis, treatment with controlled side effects, and a better quality of life. 1,5,7 A medical history of seizure origin, duration, and type obtained from family members or patients can lead to misdiagnosis in this disorder, so it is very important to obtain and record the EEG to capture the epileptic discharge and to select the best treatment regimen. 7,8,9 Around 5.7% of patients had a normal EEG without abnormal epileptic activity, 10 and 19.4% showed focal and generalized seizure activity on video-EEG in the EMU. This is similar to previous studies. 10,11 Other previous studies reported a higher proportion of normal EEG activity in epilepsy patients. 11,12 One reason for this difference is that the EMU is located in a tertiary care specialist hospital: patients referred to a specialist hospital are diagnosed at an early stage and not only by a general neurologist. 1,7 Second, the median time since onset of seizures was 39.5 months in our retrospective study, whereas previous studies in adults showed a median of 6 years from onset of symptoms to EMU admission, highlighting the importance of admission for an accurate diagnosis. 7 Surgery is an option for patients with drug-resistant epilepsy, defined as lack of efficacy and failure to respond to first-line treatment with more than two AEDs. 9 In the present study, 16.0% (n=33) had received two AEDs. 
The rate of definitive diagnosis reported in the present study matched previous literature on accurate diagnosis during EMU admission. 10,11 Early, proper diagnosis and treatment are essential for better management of patients in terms of quality of life, social life, and coping with daily activities. This report concerns pediatric epilepsy patients, among whom global developmental delay (GDD) is a major concern. 13 Our previous study showed that 56% of GDD patients were diagnosed with epilepsy, and another study reported that 80% of pediatric epilepsy patients have one or more comorbid diseases. 13,14 EMU admission of pediatric epilepsy patients is very important for early, accurate diagnosis and for surgical management of those considered to have DRE. Previous studies showed that temporal lobe epilepsy surgery is superior to medical therapy for seizure reduction and for better quality of life. 12,15 In the present report, 17.3% of patients were referred for surgery, which is similar to previous reports on adult and pediatric epilepsy patients. 11 Conclusion. This is a preliminary report consistent with international guidelines on the definitive diagnosis of pediatric patients referred to an EMU. The findings are limited by the retrospective study design. This study showed the importance of video-EEG monitoring for detecting seizure events and helping to diagnose refractory epilepsy in the pediatric population. It is important to establish a local guideline at the ministry level and to distribute it to primary health care facilities, recommending referral to the EMU of patients whose seizures are not controlled with treatment. Early, proper diagnosis is important and is associated with better management and improved outcomes. Future studies should include cost-effectiveness analysis, reasons for delayed referral, surgical outcomes, and the utilization of EEG and MRI analysis in reaching the final diagnosis.
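The descriptive statistics outlined under Statistical methods can be sketched in a few lines. The records below are invented placeholders, not patient data from the study, and the column names are hypothetical.

```python
# Minimal sketch of the descriptive analysis described under "Statistical
# methods": frequencies/percentages for categorical variables and median with
# interquartile range for continuous ones. The records are invented placeholders.
import pandas as pd

records = pd.DataFrame({
    "sex": ["M", "F", "M", "M", "F", "F"],
    "definitive_diagnosis": ["epilepsy", "epilepsy", "non-epileptic",
                             "epilepsy", "undetermined", "epilepsy"],
    "age_months": [12, 86, 140, 60, 202, 33],
    "days_admitted": [3, 5, 2, 4, 6, 3],
})

# Categorical variables: frequency and percentage of occurrence.
for col in ["sex", "definitive_diagnosis"]:
    counts = records[col].value_counts()
    print(pd.DataFrame({"n": counts, "%": (100 * counts / len(records)).round(1)}))

# Continuous variables: median and interquartile range.
for col in ["age_months", "days_admitted"]:
    q1, med, q3 = records[col].quantile([0.25, 0.5, 0.75])
    print(f"{col}: median {med}, IQR {q1}-{q3}")
```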
2023-01-10T06:17:01.921Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "f6969b5b1bdf3853620f097be5de63b7daefcec6", "oa_license": "CCBYNC", "oa_url": "https://nsj.org.sa/content/nsj/28/1/66.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed4ed2e6d4be89fbc71c6d28b7af29bec1669a23", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259628069
pes2o/s2orc
v3-fos-license
The effects of dynamic team-building intervention on internal communication in the hospitality industry in Sunyani Municipality, Ghana The current study seeks to ascertain how dynamic team-building interventions can improve internal communication in the hospitality industry. The study surveyed 135 respondents from 15 hospitality enterprises in Sunyani Municipality, Ghana, using a communication assessment instrument. The mean organization diagnosis results suggest that the studied organizations' internal communication channels are riddled with delayed and late information delivery, vocally hostile behavior, and a culture of secrecy. As a result, with the study organizations, a dynamic team-building intervention was devised and implemented. The researchers compared the pre-team-building intervention assessment results with the post-team-building intervention assessment results to determine how they changed after the intervention. The data show that the assessment variables recorded an average mean transformation score of 1.8, which is closer to the ideal score of one (1) than the average mean assessment score of 2.4 prior to the team-building intervention. The computational results show that the dynamic team-building interventions were effective in bringing the mean value closer to the ideal score of one (1). Again, the Cohen's d test analysis result of 3.8 demonstrates that the transformation that happened as a result of the dynamic team-building intervention had a significant impact size. Based on the findings, options for dynamic team-building interventions to improve the efficacy of teams in the hotel industry are provided. such as improved interpersonal interactions, increased satisfaction with work, accelerated decision-making, delegation of duties, and increased motivation and synergy among team members (6). Despite such claims, other academics, (4), point out that while team-building is popular, the results are typically vague, non-significant, or mixed. Furthermore, the one major criticism of team-building programs-that they focus more on playing games than on changing behavior-has a significant disadvantage. While dynamic team-building programs might be engaging and interesting, they typically fail to have the desired impact once everyone returns to work (7). Another disadvantage is the way team-building is seen. Participating in dynamic team-building exercises appears to produce a range of opinions, some positive and some unfavorable. Fapohunda (5) refers to criticism that team-building is nothing more than a pretext for paid vacation time. Furthermore, Isik et al. (6) examined additional research on enhanced performance as a result of team-building and discovered that, despite enthusiastic claims, there was still another lack of compelling evidence to support the favorable impacts of team-building on performance. A comprehensive examination of the extant literature on team-building and organizational growth and development reveals that there is little research on team development or that just a few studies have been conducted in service delivery organizations. Previous research, for example, has focused on industries such as healthcare, where multidisciplinary teams face issues such as collegiality, hierarchy, and professionalism (2). Similarly, the World Tourism Organization, mentioned in Salanova et al. (8), states that there is little empirical research, notably on the effects of team-building on business, in the world's largest industry, the hospitality industry. 
It would seem reasonable to suppose that the hospitality, finance, and retail industries rely heavily on effective collaboration. Despite the fact that many studies have been conducted to better understand group dynamics, forecast group performance (9), and improve the quality of group activities, there are few papers or research studies on teamwork in service sector contexts. The purpose of this dissertation is to assess the benefits of team-building on the hospitality industry using enterprises in Sunyani Municipality, Ghana, as a case study. Team-building Team-building, according to Aquino et al. (10), is the practice of bringing together people with various needs, histories, and expertise and transforming them into integrated and successful work units. At the same time, Aga et al. (2) noted that team-building is a learning process with an experimental approach to enhancing internal group processes such as individual communication, collaboration, and conflict resolution. The purpose of team-building is to improve communication quality, increase productivity and creativity, and support organizations in inspiring personnel to appropriately follow operational norms and procedures (2). Other advantages include enhanced workplace trust and mutual support, which leads to increased job satisfaction and dedication to the organization (5). The two primary categories of group growth are dynamic team-building and team-training (11). Both interventions for dynamic team-building and team-training strive to promote team efficacy, but they target different types of teams and consequently have different team requirements. Team training aims to transform weak and transient 'patchwork' teams into cohesive units by consolidating group members' acquisition of cooperative skills such as communication, which necessitates the inclusion of practical training within specific contexts that are specific to the work or task (12). In contrast to team-training, dynamic team-building is described as most efficient when a team has a specific difficulty that impedes the team's work and thus exhibits maximum utility for stable groups built over time of the same members who have long experience working together. Dynamic team-building is frequently less regulated, with the purpose of teaching groups the fundamental skills required for collaborative business (1). It is based on the idea that members of a group can help themselves by diagnosing and addressing problems in order to better manage their own behavior. One of the most common strategies for group development is dynamic team-building. Dynamic team-building interventions are therapies intended expressly to address team development concerns such as enhancing interpersonal relationships, increasing productivity, and aligning team goals with organizational goals, resulting in more successful organizational work (13). In particular, dynamic team-building interventions allow teams to reflect on how they communicate with one another, recognize defects and shortcomings in collaboration, provide an ideal picture of cooperation, and contribute to the establishment of effective organizations (1). The interventions are first intended to help groups evolve and build their social and interpersonal ties, but they eventually shift their focus to a variety of aspects of group development, such as articulating common goals, attaining results, or completing assignments (2). According to Aga et al. 
(2), the effect of dynamic team-building on group performance is also influenced by the amount of time the team spends together and the amount of time the group is given to complete the task. They discovered that dynamic team-building had no effect on teams created for short periods of time and given fictive assignments for short periods of time. The interventions, on the other hand, had an influence on pre-existing and newly created teams that collaborated on a genuine job for a longer period of time. Within-group research has typically used teams convened for brief periods of time to work on fictive tasks, which may explain uneven dynamic team-building outcomes (3). The current study looks at work groups that have been working together and are expected to continue working together in the near future. Internal Communication Internal communication encompasses all interactions that occur within an office or organization. This relationship can exist among employees, employees and administrators, and superiors. Internal communication is the process of sharing information, fostering commitment, and managing change as the primary drivers of employee motivation and performance within an organization (14). Internal communication can be defined as any formal or informal communication within an organization. The purpose of internal organisational communication is to increase organisational value. are intended to push the organization outside of its comfort zone (19). This study employs the human process intervention, which includes leadership, problem-solving, communication, and group decision-making. Coaching, training and development, process consultation, and third-party intervention are examples of interventions. Frequently, these interventions focus on interpersonal interactions, group dynamics, and dynamic team-building activities. The competence of the consultants is only one factor that determines the success of the intervention; organisational commitment factors also influence the changes that will result from the intervention. There are three distinct categories of client dedication to the transformation process (17). This first form of commitment, known as affective commitment, is a result of the organization's drive to advance. The second form of commitment is normative commitment, which entails a pledge to aid in the change process. Thirdly, the continuation commitment, also known as the commitment based on cost calculation, aids in preventing failure-related losses. To acquire the trust of clients, consultants must frequently interact with them. The ability to commit will aid consultants in planning and selecting the most effective interventions and communicating the significance of the change to the organization. Multiple human factors, including change beneficiaries, organizations, and change agents or facilitators, influence how changes occur. To be effective, a facilitator must be optimistic, gifted, and knowledgeable. The facilitator must have a comprehensive comprehension of the group he is attempting to influence, establish plans, and then lead the changes, but he cannot assume that he possesses all the necessary skills. Before attempting to change others, the facilitator must first comprehend himself. The success or failure of an intervention is contingent upon the organisation's capacity for change and the facilitator's capacity for generating commitment. Research Design In this study, the action research design was chosen and implemented. 
It was chosen due to its potential to provide insight during the interventional procedure, which could assist the study in reaching its intended conclusion. In this study, the action research design method was divided into three phases: pre-intervention, dynamic intervention, and post-intervention. The pre-team-building intervention phase achieved two essential objectives. Initially, it was necessary to evaluate the current condition of internal communication in the organizations in order to identify the internal communication- The objective of the team-building intervention was to devise an action plan to address the challenges identified in the pre-intervention assessment. Each set of objectives and regulations was developed in response to the identified obstacles prior to implementation. The action plan details the primary topics, significant internal communication issues, tasks, and responsible parties. The post-team-building intervention phase was designed to evaluate the organization's current state in relation to the five main thematic areas of superior-subordinate communication, information reliability, information quality, superior openness, and upward communication prospects. The primary objective was to determine whether the team-building intervention resulted in a change. Data Collection The researchers used both qualitative and quantitative research approaches to gather and analyze data throughout the pre and post team-building intervention stages of the study. To gather qualitative data, interviews and observation methods were used. Quantitative data were gathered using closed-ended Likert-scale questionnaires. The Likert-scale questions have a 1-5 range of scores with 1 = Strongly Agree, 2 = Agree, 3 = Uncertain, 4 = Disagree, and 5 = Strongly Disagree. The data were examined and presented using tables and graphs. Research population and sample The research population is made up of employees from upper and lower-level positions in firms in the service sector of the hospitality industry in Sunyani municipality. Specifically, hotels and guest houses fall within this category. The researchers used cluster sampling to select twenty-five firms in the hospitality industry in Sunyani municipality. Simple random sampling was used to select respondents from the fifteen sampled firms in the hospitality industry as the units of analysis. In all, one hundred and thirty-five (135) people participated in this study as respondents. Pre-Organisation Development Assessment The researchers evaluated the organisation's internal communication using the Organisation Internal communication Self-Assessment Instrument (questionnaire) developed for the study. The mean scores of the assessment have been graphically presented in Figure 2. In reference to Figure 2, the firms current internal communication condition has projected higher than the ideal mean score of one (1). The outcome is a clear indication that the firms have challenges with internal communication across all five key assessment variables. In order to determine the variables causing the projection of the mean score of the variables over the ideal mean score of one (1), the researchers and study participants conducted internal diagnoses of the study firms. Superior-subordinate communication The firms scored an average mean of 2.5, which is higher than the desired mean value of 1.0. The responders point out that both supervisors (superiors) and employees (subordinates) collaborate to realize the firms' objective and vision. 
However, there are many situations in which supervisors engage in verbally aggressive behaviour towards employees in the workplace. Internal creativity has decreased as a result of this behaviour, which has also occasionally resulted in subordinate disengagement and workplace deviance. Information quality. The firms received an overall average score of 3.0 in the category of information quality. The findings revealed that information delivery is often slow and untimely. According to the respondents, the methodology used to gather and process information, as well as the channel of distribution, is not always dependable. Superior openness/candor. The assessed firms scored an average mean of 1.8 in superior openness. The respondents state that there is a strong emphasis on confidentiality within the firms. Lack of consensus building, poor creativity, and ineffective problem solving are all effects of this secretive culture. Opportunities for upward communication. The firms received an average score of 2.4 in the category of opportunities for upward communication. The study revealed that the firms discourage giving feedback, especially to superiors. Moreover, employees of the firms are not free to speak out and voice their opinions on issues that trouble them, which has made it difficult to propose fresh perspectives and offer constructive criticism. Information reliability. For information reliability, the firms received an average score of 2.0. According to the results, there are several discrepancies and mistakes in the management and flow of information. As a result, employees in the study firms resort to scrutinizing information in order to identify prospective issues and prevent undesirable events. Action Planning and Implementation. The researchers developed an action plan for the key issues identified as the internal communication challenges bedeviling the study firms. The plan was implemented between February 2021 and March 2022. Table 1 shows the action plan (for example, under the information reliability theme, inconsistencies and inaccuracies were addressed through facilitation and presentation activities led by the internal change agent and the researchers; Source: Field Survey, 2021). Pre and Post Team-building Intervention Transformation. In this section, the researchers compare the pre-team-building intervention assessment means with the post-assessment results to get a clearer picture of the magnitude of the changes that occurred after the intervention process. Discussions. The intervention produced a significant difference between the pre-intervention and post-intervention assessment means. Referring to Table 2, the post-intervention team-building assessment revealed a mean transformation score of 1.8, which is closer to the optimal score of one (1) than the pre-assessment mean score of 2.4. The computational results indicate that the dynamic team-building interventions were effective in bringing the post-intervention mean value closer to the ideal score of one (1). Again, the Cohen's d score of 3.8 indicates that the effect size of the transformation that resulted from the dynamic team-building intervention is large. These results demonstrate the efficacy of dynamic team-building interventions in addressing internal communication issues in the hospitality sector. 
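For readers who want to reproduce the effect-size figure reported above, the sketch below computes Cohen's d with a pooled standard deviation. The two score vectors are invented placeholders whose means simply mirror the reported pre-intervention (2.4) and post-intervention (1.8) assessment means; the study's raw questionnaire data are not reproduced here.

```python
# Illustrative computation of Cohen's d with a pooled standard deviation.
# The score vectors are synthetic; only the 2.4 / 1.8 group means mirror the
# reported assessment results.
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    n1, n2 = len(pre), len(post)
    pooled_sd = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
                        / (n1 + n2 - 2))
    return (pre.mean() - post.mean()) / pooled_sd

rng = np.random.default_rng(1)
pre_scores = rng.normal(2.4, 0.16, size=135)    # pre-intervention assessment
post_scores = rng.normal(1.8, 0.16, size=135)   # post-intervention assessment
print(f"Cohen's d = {cohens_d(pre_scores, post_scores):.2f}")
```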
Interventions for team development are effective at identifying problems, devising and designing solutions, implementing and evaluating them. Dynamic team-building is similar to organizational retreat therapy in that change agents and/or trainers use comparable team-building materials and tools to transform participants. Conclusion This study emphasizes the significance of dynamic team-building interventions for improving internal communication in the hospitality industry. The assessment results demonstrate that the dynamic team-building interventions implemented in the study organizations resulted in substantial improvements in internal communication. Effective internal communication is essential for the success of hospitality industry organizations. It improves coordination, fosters cooperation, and fosters a positive work environment. By addressing the identified communication challenges, organisations can improve their overall performance and achieve their objectives more efficiently. This study's findings contribute to the existing literature on dynamic team-building and internal communication in the hospitality industry. It provides managers and practitioners in the industry with valuable insights for understanding the benefits of dynamic team-building interventions and implementing strategies to improve internal communication. Nonetheless, it is essential to recognize the limitations of this investigation. The investigation was conducted in a particular geographic area and on a limited number of organizations. Future research could expand the scope by examining the long-term effects of dynamic team-building interventions on organizational performance with a larger sample size. In conclusion, dynamic team-building interventions may be an effective method for enhancing internal communication in the hospitality industry. By addressing communication obstacles and nurturing a positive communication climate, organizations can improve teamwork, enhance collaboration, and ultimately achieve better results. Implementing dynamic team-building interventions should be considered as part of the hospitality industry's overall organizational development strategy.
2023-07-11T15:52:22.900Z
2023-06-30T00:00:00.000
{ "year": 2023, "sha1": "95fdedf418c9c920faa47f173709a8fc3c77b77d", "oa_license": "CCBYNCSA", "oa_url": "https://wjarr.com/sites/default/files/WJARR-2023-1181.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a512a45563d4eb1fc8bd7daedeac66c6b5ad8f9a", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
125941944
pes2o/s2orc
v3-fos-license
Vibrational Characteristics of AGARD 445.6 Wing in Transonic Flow This paper presents the application of Computational Fluid Dynamics (CFD) and Fluid Structure Interaction in ANSYS to do vibrational analysis on an aircraft wing in transonic region. A simulation study is conducted on a wing by modelling it in a solid modelling software. Further CFD analysis is performed at different Mach numbers to identify pressure variations at different locations on the wing. Transient structural analysis is carried out to study the variations in displacement of the wing with time. The post processing is done for determining the structural frequency and thereby to establish the flutter boundary in the transonic range. Introduction An aircraft is a very complex engineering structure which experiences vibrational effects both internally and externally. Aerodynamic problems in general are often difficult to solve analytically. Experimental or numerical simulation can be used to analyse these models to simulate the interaction of fluids with surfaces defined by boundary conditions. Flutter of an aircraft wing is a major concern in the field of aeroelasticity. Vasanth Dhanagopal et al. [2] coupled the CFD solver with the governing structural equations of motion and with the help dual time stepping approach to establish the flutter boundary. T Sai Kiran Goud et al. [3] in his article emphasises on the complications arising with the interaction between the fluid module and the wing module. The paper discusses the difficulties of fluid structure interaction problems when done analytically and thereby use ANSYS workbench to determine the stresses induced corresponding to the flow. Chowla Sangeetha et al. [4] discussed the fluid structure interaction problem by determining the initial boundary conditions of AGARD 445.6 wing and also studies the variation in stress across the wing. The fundamental structural behaviour of the AGARD 445.6 wing is studied under some practical load conditions and the effects of Fluid structure interaction are validated with available experimental results. Liu et al. The coordinates are imported from MS Excel to CATIA V5 using Macros. The solid model obtained from the CATIA V5 is saved with .igs extension to import it to the ANSYS workbench. The imported solid model in ANSYS is as shown in Figure 2. Modal analysis The modal analysis is performed on the wing in order to determine the natural frequencies and mode shapes with a fixed support at the root chord.The first 4 modes are taken into consideration to validate the wing model as shown in Table.1. Figure 3 shows the first bending frequency mode and Figure 4 shows the first torsion frequency mode. The obtained natural frequencies are compared with the previous study on the same model as in Ref [1]. In the present work, AGARD 445.6 wing has been taken into consideration to study the vibrational behavior of an aircraft wing in aerodynamic flow. The analysis is done in ANSYS-FLUENT and the stability boundary of the wing is determined as the condition where the structure oscillates with a constant magnitude. The flutter speed index is also defined for the identification of the transonic characteristics. Wing Modelling The wing cross section is modelled by the NACA airfoil NACA65a004.The airfoil NACA 65a004 is a symmetric airfoil which comes under 6 digit series. This has an area of minimum pressure 50% of chord with a lift coefficient of zero and has maximum thickness of 4% of the chord. 
The standard dimensions of the AGARD 445.6 Wing in shown in Figure 1. The fluid domain is meshed as shown in Figure 5. The region of wing which first comes in contact with the air flow is the leading edge of the wing. In the leading edge, the air flow is separated and at the trailing edge they merge. Hence the mesh on the fluid domain is a fine mesh close to the leading and trailing edge as shown in Figure 6. Transient analysis is preferred as it is a time domain solution and includes higher order terms dealing with time and hence we can get better accuracy than steady state solutions. In ANSYS-FLUENT transient analysis on the fluid domain is done with pressure based solver. The analysis is carried out with a time step of 0.001 for 10s to get the solution convergence. The pressure coefficients of the wing structure are plotted for different locations across the span. The Figure 7 and Figure 8 show the pressure coefficient at zero span and full span respectively for a Mach number 0.98. The area between these lines helps to determine the lift factor. As CFD analysis is done on this wing with 0 angle of attack there is no difference in values of these two lines and thus no lift generated. Transient Structural Analysis One way FSI and a transient structural analysis is done in ANSYS-FLUENT to determine the deformations on the wing structure due to air flow over the wing. Aerodynamic analysis is done for the fluid domain with the established boundary conditions. In transient structural the wing will be under consideration. The meshing of wing in structural analysis is shown in Figure 9. The solution is shared from FLUENT to Transient Structural so that the pressure loads are imported for every element in the generated mesh. The imported pressure loads are shown in the Figure 10. The transient analysis is carried for 10s with time step 0.05s. The same procedure is carried out for different Mach numbers to obtain the displacement vs time plot. Flutter Speed Index Fast Fourier Transform (FFT) is done to find the frequency of structure, and the bending structural frequencies are determined for each Mach number. This bending structural frequency is used to determine the speed index value for the corresponding Mach number. The flutter speed index is calculated by the formula given below (1) Where U∞ is the free stream velocity, b is the half-chord length, ω is the structural frequency of the wing, and µ is the mass ratio. Where m is the mass of wing, l is the length of the wing and ρ is the density of free stream air. The speed index diagram has been plotted by considering the structural frequencies in bending mode of the wing. In the present simulation study, it is the bending displacement data that is analysed for the frequency. The torsional frequency data were not extracted as the present analysis was incapable of extracting the same. Results The displacement pattern of the structure with the time series is determined. The time-displacement plots at 0.57 Mach, 0.86 Mach and 1.14 Mach are shown in Figure 11, Figure 12 and Figure 13 respectively. Flutter boundary is considered as the conditions corresponding to the oscillations with constant amplitude, the figures show that the structure is at the flutter boundary. Figure 14 shows the variation of speed index with the Mach number. For the Mach regime between 0.8 to 1.2 which is the transonic region, it is seen that there is a dip in the speed index graph. 
This dip is attributed to the nonlinearities and unsteadiness that are characteristic of the transonic region. The maximum dip is observed at Mach 0.86; as the Mach number is increased further, the speed index rises again. The speed index graph in Figure 15 shows the variation of the flutter index values in Ref [6]. The trend of the graph is in good agreement, although there is a difference in the range of flutter speed index values. This is because the present analysis was not able to extract the torsional frequency, so the flutter speed index is computed with respect to the bending mode. Conclusion The analysis has been performed in ANSYS Workbench to obtain the vibrational characteristics of the AGARD 445.6 wing under transonic flow conditions. Computational fluid dynamics analysis was performed on the wing to gain a basic understanding of the pressure variations over its surface. The transient analysis gives the time-domain solution for the wing, which is used to extract the structural frequency. The plot of Mach number against the corresponding flutter index is important for observing the behavior of the wing in the transition zone between subsonic and supersonic speeds. From the flutter index diagram, a dip is observed in the transonic regime, and at a certain Mach number the wing is most unstable.
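As a rough illustration of the post-processing described in the Flutter Speed Index section, the sketch below extracts a dominant bending frequency from a displacement time history with an FFT and evaluates a flutter speed index of the standard form V* = U∞/(b·ω·√μ), consistent with the variables listed for equation (1). The wing properties, free-stream values, and the mass-ratio expression are placeholder assumptions, not data from the paper.

```python
# FFT-based extraction of the dominant structural frequency from a transient
# displacement record, followed by evaluation of a flutter speed index of the
# standard form V* = U_inf / (b * omega * sqrt(mu)). All numeric values and
# the mass-ratio definition are placeholder assumptions.
import numpy as np

def dominant_frequency(displacement: np.ndarray, dt: float) -> float:
    """Return the dominant structural frequency (rad/s) from a time history."""
    spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
    freqs_hz = np.fft.rfftfreq(len(displacement), d=dt)
    return 2.0 * np.pi * freqs_hz[np.argmax(spectrum[1:]) + 1]  # skip DC bin

def flutter_speed_index(u_inf: float, b: float, omega: float, mu: float) -> float:
    return u_inf / (b * omega * np.sqrt(mu))

# Synthetic 10 s displacement record sampled at the 0.05 s structural time step.
dt = 0.05
t = np.arange(0.0, 10.0, dt)
tip_disp = 0.003 * np.sin(2.0 * np.pi * 9.6 * t)          # bending-like motion

omega = dominant_frequency(tip_disp, dt)
mu = 1.86 / (1.225 * np.pi * 0.28**2 * 0.76)              # placeholder m/(rho*pi*b^2*l)
print(flutter_speed_index(u_inf=290.0, b=0.28, omega=omega, mu=mu))
```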
2019-04-22T13:07:44.138Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "bcd7325c698dd8a4b5cfb9ea757f51edd0776382", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/225/1/012036", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ab174c88040a0bc8f63d658c56837ddb392513a5", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Geology", "Physics" ] }
219954635
pes2o/s2orc
v3-fos-license
Heat and Mass Transfer on MHD Jeffrey-Hamel Flow in Presence of Inclined Magnetic Field In this study, a magnetohydrodynamic Jeffrey-Hamel flow of a viscous, fluid that conducts electricity and is incompressible through a divergent conduit in presence of inclined variable magnetic field with heat and mass transfer has been investigated. The solutions of the governing equations of the MHD flow are obtained numerically since they are nonlinear. The numerical scheme used is implemented in a computer software program and the results presented in graphical form. The velocity profile, the temperature profiles, the effect of variable magnetic field and of varying various dimensionless numbers on the flow are analyzed. Jeffrey-Hamel flows are also applied in the diffuser development. Some of the systems include; the channel between the compressor and gas turbine engine burner, the canal at departure from a gas turbine linked to the jet pipe, the canal subsequent to the impellor of a centrifugal compressor, wind tunnels with closed circuits, and water turbine draft tubes among several others. The results provide significant information for the improvement of proficiency and performance of technologies in aerospace, chemical, civil, environmental, industrial and mechanical applications. Introduction Magnetohydrodynamics (MHD) describes the intricate interaction between magnetic fields and plasmas that are accountable for considerable dynamic conduct in various cosmic matters including the sun. MHD is significant in planetary processes such as magneto-convection, magnetic flux occurrence, flux ropes, spots, atmospheric heating, wind acceleration, flares, and eruptions. Mass transfer is the transport of one constituent from a region of higher concentration to that of a lower concentration from a system that contains two or more components whose concentrations vary from point to point. Heat transfer rate depends on the systems temperature and the properties of the medium intervening through which the transfer of heat takes place. Two-dimensional steady motion of a viscous fluid through divergent-convergent channels which is referred to commonly to as the classical Jeffery-Hamel flow in fluid dynamics was first studied by Jeffery [1] and Hamel [2]. The flow models for Jeffery-Hamel flows are interesting and are used to demonstrate the phenomenon of boundary layers separation in divergent channels. Jeffrey and Hamel developed a solutions for the Navier-Stokes equation using the similarity concept that depended on two parameters that were non-dimensional which were the flow Reynolds number and the angle of the channels widths. The classical Jeffery-Hamel problem was further analyzed by Axfold [3] studying the effects of the magnetic field that is external to the conducting fluid. He concluded that the magnetic field acted as a control parameter as well as the Reynolds number for the flow and the angle of the walls. The study [4] reduced the Maxwell's electromagnetism governing equations and the Navier-Stokes equations to ordinary differential equations that were nonlinear for the model the problem of the Jeffery-Hamel flow for a case where the magnetic field was high in Inclined Magnetic Field the presence of nanoparticles. The flow region in the channel that was divergent was studied with different values of Hartmann number and different values of the angle of channel and their results when matched with the exact solution obtained by Adomian's Decomposition Method (ADM) were in agreement. 
[5] while studying the flow over convergent and divergent wall rib-lets conducted an experiment in the research laboratory of the High-Speed wind Tunnel of the Dresden University of Technology (Germany) to determine the velocity field over convergent and divergent rib-let patterns by hot-wire measurements in turbulent pipe flow. They concluded that adjacent to the wall of the channel, convergent and divergent rib-let patterns show considerable differences with regard to the timeaveraged stream-wise velocity and the stream-wise velocity fluctuations. If the rib-lets converge, the average time velocity decreased while velocity fluctuations increased with the opposite for divergent rib-let patterns. Magnetohydrodynamic flows which was an extension of the classical Jeffrey-Hamel flows in divergent and convergent channels Jeffery-Hamel flows to MHD interpreting the effect of the external magnetic field working as a parameter in the solution of the channel flows for the divergent and convergent channels was analyzed by the study [6]. MHD flows in convergent-divergent channels studies extended from the classical Jeffery-Hamel flows to the MHD studies done by the study [6] was studied by the study [7] introducing the method of Adomian decomposition and determining the Adomian's polynomial. They obtained a solution for the problem for the case of divergent and the case of convergent channels concluding that the velocity distribution on the fluid flow and the shear stress constant is depicted at various Reynolds numbers. They compared their results with some earlier works which illustrated their excellent accuracy. [8] discussed slip and Joule heating effects in mixed convection peristaltic transport of nanofluid with Soret and Dufour effects.The study [9] considered blowing/suction effect on hydromagnetic heat transfer by mixed convection from an inclined continuously stretching surface with internal heat generation/absorption. The study [10] Examined double-diffusive convection in a porous enclosure with cooperating temperature and concentration gradients and heat generation or absorption effects.Unsteady MHD free convective visco-elastic fluid flow bounded by an infinite inclined porous plate with a heat source, viscous dissipation, and Ohmic heating was investigated by the study [11]. Motivated by the above investigations, in this work a fully developed free convective flow of a viscous incompressible electrically conducting fluid past a vertical porous plate bounded by a porous medium in the presence of thermal radiation, heat source/sink, variable suction, and variable permeability is analyzed. The study [12] studied the MHD boundary layer flow of a VISCO-Elastic fluid past a porous plate with varying suction and heat source/sink in the presence of thermal radiation and diffusion. They considered a case of a magnetic field whose strength was uniform and perpendicular to the plate with heat source. They concluded in this study that the presence of thermal radiation decreases the temperature, an opposite nature is shown in the case of Eckert number and the influence of the heat source leads to enhance the temperature. Jeffery-Hamel flows with heat transfer of nanofluids using the homotopy perturbation method and comparing with numerical results were studies by the study [13]. They considered the influence of nanoparticles on the nonlinear Jeffery-Hamel flow problem investigating three types of nanoparticles namely Copper, Alumina and Titania by considering water as a base fluid. 
They concluded that the effect of solid volume fraction of nanoparticles on the heat transfer and fluid flow parameters is more pronounced when compared with the type of nanoparticles and the skin friction coefficient and Nusselt number for alumina nanofluid is the highest in comparison to the other two nanoparticles. Analysis of heat and mass transfer for unsteady viscous MHD nanofluid flowing through a conduit whose walls are permeable in presence of metal nanoparticles was done by [14]. They considered two cases for effective thermal conductivity through the H-C model and concluded that the permeability of the conduit increased shear stress at lower wall. Heat transfer rate increases with the increase of the Reynolds number and Mass transfer rate decreases with the increase of Reynolds number thermal boundary layer thickness is a decreasing function when injection/suction happens altogether on HTP. A Jeffery-Hamel flow of non-Newtonian Micropolar incompressible fluid inside non-parallel walls and notices heat transfer effect in flow region was studied by the study [15]. They converted the governing nonlinear PDEs to nonlinear coupled ODEs using appropriate similarity transformations and solved them with the utilization of the Taylor optimization method based on differential evolution (DE) algorithm. They concluded from their results that the fluid velocity was decreased, although the angular velocity of micro constituents and heat transfer in the flow was increased as enlarging the values of the vortex viscosity parameter associated with the divergent channel. It also noticed that both spin-gradient viscosity and micro-inertia density enhance the micro rotation profiles and their results agreed with the results obtained by the fourth-order Runge-Kutta method. Magnetohydrodynamics Jeffery-Hamel flow with heat transfer problem in an Eyring-Powell fluid using differential transform method was studied by the study [16] where they analyzed the variations of velocity profiles for different values of the Reynolds number, Eckert number, Prandtl number and Hartmann number in the flow with heat transfer with the fluid in both divergent and convergent channels and concluded that the nanofluid flow velocity profile increases as the value of the Eckert number and the Prandtl number increases and decreases as the value of the Hartmann number increases, on the contrary, the heat profile fluid flow also increases. They also deduced that the velocity profile of Jeffrey Hamel nano-fluid flow decreases as the value of Reynolds number increases. More recently, The study [17] studied the unsteady twodimensional Jeffery-Hamel flow of an incompressible non-Newtonian fluid, with nonlinear viscosity and skin friction, flowing through a divergent channel in the presence of a magnetic field in the direction perpendicular to the motion of the fluid. They noted that when the Reynolds number and Hartmann number was increased, the velocity of the fluid increased. However, when the unsteadiness parameter was increased, the velocity of the fluid decreased and was a constant when the values of the Eckert number and the Prandtl number were increased. The temperature of the fluid increased when the Reynolds, Hartmann, Prandtl and Eckert number were increased with the same case for the unsteadiness parameter. 
Mathematical Formulation The unsteady Jeffrey-Hamel MHD flow between porous walls with injection or suction, in the presence of an oblique magnetic field and with heat and mass transfer, has not been extensively analyzed. This research is focused on a two-dimensional, unsteady, incompressible, viscous, electrically conducting Jeffrey-Hamel MHD flow issuing from a source at the junction of two porous walls, with injection or suction at the walls and a variable inclined magnetic field, the walls subtending an angle 2α as in figure 1. If α > 0 the walls are convergent, and divergent if α < 0. The walls are considered to be rigid, and the velocity of the fluid is taken to be along the radial direction. The magnetic field is inclined at an angle β, as shown in the geometrical model of the problem in figure 1. Governing Equations In the cylindrical coordinate system, the flow is governed by the continuity, momentum, magnetic induction, energy, and concentration equations. The Lorentz force is obtained from the Maxwell equations as J × B, where the total current density J is given by Ohm's law, J = σ(E + V × B), and the fluid velocity and the induced magnetic field are prescribed accordingly. The difference of the differentiated equation (9) is then obtained, and equations (8), (11), (12), (13), (14) and (15) are modified respectively to account for unsteadiness and the wedge angle size, following the studies [15,16,19,20,22] and [23], where δ, a function of t, is a time-dependent length scale, and the parameter m is linked to the wedge angle and the wedge radius. Numerical Solution A similarity transformation is applied to reduce the governing equations to ordinary differential equations, which are further reduced to first order before the collocation method is applied. Following work done in the studies [15,16,19] and by other scholars, suitable similarity transformations are used. Applying the transformations to the governing equations yields the ordinary differential equations; the governing dimensionless numbers are then applied to equations (20), (21), (22) and (23), and the equations are rearranged. Results and Discussion The following graphs illustrate the effects of several flow parameters and variables. From figure 2, the temperature increases as the suction parameter increases. Suction increases the velocity of the fluid, which raises its kinetic energy, and the conversion of this kinetic energy to thermal energy increases the temperature. In figure 3, the temperature falls as the injection parameter increases. Injection reduces the fluid velocity because the boundary layer thickens, which lowers the kinetic energy and hence its conversion to thermal energy, reducing the fluid temperature. The concentration of the fluid decreases as the Reynolds number increases, as observed in figure 4. This can be attributed to the fact that as the Reynolds number increases the fluid temperature increases, which in turn reduces the concentration: the higher temperature raises the kinetic energy of the fluid and the vibration of its molecules, increasing the spacing between molecules and hence lowering the concentration. From the graph in figure 5, as the Eckert number is increased the temperature also increases; a larger Eckert number raises the kinetic energy and molecular vibration, so more kinetic energy is converted to heat and the temperature rises. 
From figure 6, increasing the Grashof concentration number increases the magnetic induction. This is because increasing the Grashof concentration number causes the viscous forces to reduce, thus increasing the fluid velocity and hence the magnetic induction. From figure 7, the temperature increased as the Grashof concentration number increased. An increasing Grashof concentration number leads to a reduction in the viscous forces, and since viscosity and temperature are inversely associated, a decrease in the viscous forces results in a temperature increase. As the Grashof concentration number is increased, as in figure 8, the fluid velocity increased. The viscous forces decreased with the increase in the Grashof concentration number, reducing the effect of the drag force in the fluid and hence increasing the velocities. The velocity increases with the increase in the Grashof temperature number, as shown in figure 9. As the Grashof temperature number increases, the effect of the viscous drag on the fluid reduces, hence the velocity increases. Increasing the Grashof temperature number also leads to increasing temperature: as the Grashof temperature number increases, the viscous drag decreases, leading to a temperature increase. The magnetic induction increases with the increase in the Grashof temperature number, as shown in figure 11. As the Grashof temperature number increases, the viscous drag decreases, which results in increased velocities and hence increased magnetic induction. In figure 12, increasing the Hartmann number led to increased temperature. With the Hartmann number increasing, the viscous drag decreases, and with the inverse relation between temperature and viscosity, there is an increase in temperature. From figure 13, the concentration increases with an increasing unsteadiness parameter. As the unsteadiness parameter increases, the boundary layer at the wall thickens, which results in an increase in concentration, since the concentration at the wall is greater than the concentration away from the wall, hence an increase in the concentration parameter. As the wedge angle decreases, the boundary layer effect becomes more pronounced between the divergent walls, and since T_w < T_∞, the temperature decreases from T_∞ to T_w. The fluid concentration decreases with a reduction in the wedge angle α, as shown in figure 15. As the wedge angle decreases, the boundary layer effect becomes prominent in the flow region, with the walls moving towards the centerline, and since C_w > C_∞ the concentration at the wall is greater than the concentration at the centerline; as the wedge angle decreases, the velocity of the fluid increases, hence the reduced concentration. From figure 16, the temperature increased with the Prandtl number increase. With the Prandtl number increased, the effects of the viscous forces dominate the fluid flow, and since viscosity and temperature are inversely related, an increase in the viscous forces led to decreased temperature. From the graph in figure 17, the magnetic induction increases with an increasing magnetic Reynolds number. The velocity of the fluid increases with increasing magnetic Reynolds number, resulting in increased interaction between the fluid and the magnetic field and hence increased magnetic induction. The temperature decreases with an increasing unsteadiness parameter, as observed from the graph in figure 18. As the unsteadiness parameter increases, the boundary layer thickens, and since the wall temperature is less than the free-stream temperature, T_w < T_∞, the effect of the wall increases in the flow region, hence the decrease in temperature.
Conclusion The unsteady Jeffrey-Hamel flow in the presence of the inclined magnetic field with suction and injection has been investigated and the effect of various parameters discussed. In conclusion, the temperature increases with the increase in the suction parameter, Eckert number, Grashof Temperature number, Hartmann number, wedge angle, and the Prandtl number but decreases with the increase in the injection parameter. The concentration of the fluid increases with the increase in the unsteadiness parameter and the wedge angle parameter while it decreases with the increase in the Reynolds number. Magnetic induction and velocity increase with the increase in the Grashof temperature and concentration numbers. The temperature decreases with time while the magnetic induction increases with the increase in the magnetic Reynolds number.
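A minimal numerical sketch of the collocation approach described in the Numerical Solution section is given below, written in Python with SciPy's solve_bvp. It assumes the classical steady MHD Jeffery-Hamel similarity reduction rather than the full unsteady coupled system of equations (20)-(23), and the parameter values Re, Ha and α are illustrative only, so it should be read as an indication of the numerical strategy rather than a reproduction of the present computation.

import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters only: Reynolds number, Hartmann parameter, half wedge angle
Re, Ha, alpha = 50.0, 250.0, np.deg2rad(5.0)

def rhs(eta, y):
    # y = [F, F', F'']; classical steady MHD Jeffery-Hamel similarity equation,
    # F''' + 2*alpha*Re*F*F' + (4 - Ha)*alpha**2*F' = 0
    F, dF, d2F = y
    d3F = -2.0 * alpha * Re * F * dF - (4.0 - Ha) * alpha**2 * dF
    return np.vstack([dF, d2F, d3F])

def bc(ya, yb):
    # F(0) = 1 (centerline velocity), F'(0) = 0 (symmetry), F(1) = 0 (no slip at the wall)
    return np.array([ya[0] - 1.0, ya[1], yb[0]])

eta = np.linspace(0.0, 1.0, 101)
y_guess = np.zeros((3, eta.size))
y_guess[0] = 1.0 - eta**2          # parabolic profile as an initial guess
sol = solve_bvp(rhs, bc, eta, y_guess)
print(sol.status, sol.message)      # status 0 means the collocation iteration converged

In the full problem, additional first-order equations for temperature, concentration and the induced magnetic field would be appended to the same state vector and solved simultaneously.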
2020-06-19T02:00:54.400Z
2020-06-18T00:00:00.000
{ "year": 2020, "sha1": "abc0fcd8de6b40ed476a5f7611c54bb1e52caca4", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.acm.20200904.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "243097d7917a61a0f4a97e491f011d8a28ef7698", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Geology" ] }
269645721
pes2o/s2orc
v3-fos-license
Women with a history of gestational diabetes mellitus present an accumulation of cardiovascular risk factors at age 46—A birth cohort study Abstract Introduction The incidence of gestational diabetes mellitus (GDM) is globally increasing, and it has been associated with later type 2 diabetes, metabolic syndrome (MetS), and cardiovascular disease (CVD). However, long‐term population‐based studies investigating common CVD risk factors years after pregnancy are lacking. To evaluate the future mortality and morbidity in cardiovascular and metabolic diseases, we conducted a thorough investigation of midlife risk factors in women with and without previous GDM. Material and Methods A prospective population‐based cohort study was conducted of 3173 parous women from the Northern Finland Birth Cohort, 1966. Study participants were obtained from the national register or patient records. Those with a GDM diagnosis formed the GDM cohort (n = 271), and those without a previous GDM diagnosis formed the control cohort (n = 2902). Clinical examinations were performed on participants at the age of 46 and included anthropometric measurements, oral glucose tolerance test (OGTT), biochemical measurements, and cardiovascular assessment. Results At the age of 46, women in the GDM cohort had a higher body mass index (BMI, 29.0 kg/m2 vs 26.3 kg/m2, p < 0.001) and greater waist circumference (94.1 cm vs 86.5 cm, p < 0.001) than the control cohort. In the GDM cohort, a higher incidence of impaired glucose tolerance (12.6% vs 7.3%, p = 0.002), more previously diagnosed and OGTT‐detected type 2 diabetes (23.3% vs 3.9%, p < 0.001), lower high‐density lipoprotein (1.53 mmol/L vs 1.67 mmol/L, p = 0.011), higher triglycerides (1.26 mmol/L vs 1.05 mmol/L, p = 0.002) and a higher fatty liver index (6.82 vs 2.47, p < 0.001) were observed even after adjusting for BMI, polycystic ovary syndrome, parity, level of education, physical activity, smoking, and alcohol consumption. The women in the GDM cohort also had more MetS (42.6% vs 21.9%, p < 0.001) and higher risk scores for CVD and fatal events (Framingham 4.95 vs 3.60, p < 0.001; FINRISK 1.71 vs 1.08, p < 0.001). Conclusions Women with a previous diagnosis of GDM exhibit more risk factors for CVD in midlife and are at a higher risk for cardiovascular events later in life. | INTRODUCTION Gestational diabetes mellitus (GDM) and hyperglycemia develop when physiological insulin resistance (IR), caused by increasing weight, adiposity, and placental hormones, can no longer be counterbalanced by compensatory hyperinsulinemia.1,2 The prevalence of GDM has increased worldwide and varies between 7% and 10% depending on the population and the diagnostic criteria.3 After a nearly universal oral glucose tolerance test (OGTT) screening was introduced in Finland, the incidence of GDM rose to 19.1% in 2017.4 Pregnancy has been described as a window to a woman's future metabolic health. Although GDM is usually a transient condition that resolves after delivery, it has been shown to be associated with type 2 diabetes (T2DM),5 metabolic syndrome (MetS),6-9 and cardiovascular disease (CVD) later in life.10,11 The risk for developing T2DM is at least seven times higher in women with a history of GDM than in women with a normoglycemic pregnancy,12 and in our recent 23-year follow-up study, approximately half had developed T2DM.
However, long-term population-based studies that comprehensively profile common CVD risk factors have not been undertaken. Here, we report the outcomes of a cohort study examining known risk factors for MetS and CVD in women at 46 years of age, with and without previous GDM, to assess their future metabolic and cardiovascular health. | Study population The study population consists of the Northern Finland Birth Cohort 1966 (NFBC1966) of the University of Oulu, Finland, which includes 96.3% of all births during 1966 in the two northernmost provinces of Finland (Oulu and Lapland). The NFBC1966 included 12 055 mothers who gave birth to 12 058 live-born children, of whom 5889 were girls. The data collection began at the 24th gestational week and was conducted again at the ages of 1, 14, 31, and 46 years. When the individuals were 46 years old, all who were alive and whose addresses were known were sent a questionnaire and an invitation to attend a comprehensive health examination. A total of 5123 women, all white, were traceable; 3706 (72.3%) replied to the questionnaire, and 3280 (64.0%), of whom 3173 were parous, took part in the clinical examinations. The GDM cohort included all women diagnosed with GDM (n = 271), and the control cohort included all parous (n = 2902) women who had no GDM diagnosis. | Diagnosis of GDM Information on the GDM diagnoses was mostly obtained from the Hospital Discharge Register (HDR) (n = 249, 91.9%) or the Finnish Medical Birth Register (FMBR, n = 133, 49.1%) based on the following ICD codes: ICD-10 O24.4 and O24.9; ICD-9 648.8. Some women with GDM (111, 41.0%) were found in both registries. Diagnoses were set by the physician in charge of the care, and both registries are mandatory legal requirements after a delivery (FMBR) or an in- or out-patient hospital visit (HDR). In addition, 111 women reported GDM in the questionnaire even though no diagnosis was found in the FMBR or HDR. In 22 of these women, GDM was confirmed from the patient records, and these women were included in the GDM cohort; the other individuals were excluded from the analyses. Screening for GDM changed during the study period in Finland. Prior to 2008, a risk-based screening for GDM was used. Indications for OGTT included body mass index (BMI) >25 kg/m2, glucosuria, age over 40 years, previous delivery of a macrosomic infant (≥4500 g), or expected macrosomic infant in the current pregnancy. After 2008, nearly universal screening was implemented, and all women except those with an estimated very low risk (primiparous women <25 years, with BMI <25 kg/m2 and no family history of diabetes; and multiparous women <40 years, with BMI <25 kg/m2, and no previous GDM or macrosomic infant) were screened.17 Screening for GDM was performed by a standard 2-h OGTT (75 g glucose load in 250 mL water) after a 12-h overnight fasting, and the diagnostic cut-off values were set at fasting plasma glucose (FPG) ≥5.3 mmol/L; 1 h ≥10.0 mmol/L; and 2 h ≥8.6 mmol/L. Any single abnormal value was considered diagnostic for GDM.
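As a minimal illustration of the diagnostic rule just stated, the following Python sketch flags GDM when any single OGTT value meets or exceeds the quoted cut-offs; the example values are hypothetical and not taken from the study data.

def gdm_from_ogtt(fasting_mmol_l, one_hour_mmol_l, two_hour_mmol_l):
    # Cut-offs as described in the text: fasting >= 5.3, 1 h >= 10.0, 2 h >= 8.6 mmol/L;
    # any single abnormal value is considered diagnostic for GDM
    return (fasting_mmol_l >= 5.3
            or one_hour_mmol_l >= 10.0
            or two_hour_mmol_l >= 8.6)

print(gdm_from_ogtt(5.1, 10.2, 7.9))   # True: the 1-hour value exceeds 10.0 mmol/L
print(gdm_from_ogtt(5.0, 9.5, 8.0))    # False: all three values are below the thresholds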
| Anthropometric measurements Anthropometric measurements were performed in the clinical examination after overnight (12 h) fasting. Body weight was measured using a regularly calibrated digital scale. Height was measured twice using a standard and calibrated stadiometer, and the mean of the two measurements was used. Waist circumference was measured between the lowest rib and iliac crest, and hip circumference was measured at the widest part of the trochanters. Both circumferences were measured twice, and the means of the measurements were used to calculate the waist-hip ratio. Body fat mass, fat percentage, muscle mass, and visceral fat area were measured using an InBody 720 bioelectrical impedance analyzer (Biospace Co. Ltd, Seoul, Korea). | Assessment of cardiovascular health To evaluate cardiovascular health, brachial blood pressure and carotid ultrasonography were performed. Brachial blood pressure was measured with an automatic oscillometric blood pressure device (Omron Digital Automatic Blood Pressure Monitor Model M10-IT, Japan) with the correct cuff size, in a sitting position on the right arm after 15 min of rest. Both systolic and diastolic blood pressure were measured three times at 1-min intervals, and the mean of the two lowest systolic values and their corresponding diastolic values were used for the analyses. An experienced cardiologist (K.K.) performed online ultrasounds for a subpopulation of 646 randomly selected women, of whom 49 were diagnosed with GDM. Carotid ultrasonography was performed using a General Electric Vivid E9 ultrasound system with a 9L-D 2.4/10.0 MHz linear transducer for vascular imaging (GE Health Medical, Horten, Norway). | Biochemical measurements Venous blood samples were taken after overnight fasting and centrifuged, and plasma or serum was stored at −20°C for up to 2 weeks and later at −80°C. The samples were analyzed at the Nordlab laboratory (Oulu University Hospital). PG was determined using an enzymatic dehydrogenase method (Advia 1800; Siemens Diagnostica, Erlangen, Germany) with a testing range of 0.2-39.9 mmol/L. Serum insulin was quantified using a Siemens Advia Centaur (Siemens Diagnostica) with a detection limit of 0.5 mU/L. The analysis of high-density lipoprotein (HDL), low-density lipoprotein (LDL), and triglycerides was performed using an enzymatic assay method (Advia 1800; Siemens Diagnostica). High-sensitivity C-reactive protein (hs-CRP) analysis was performed with a nephelometric assay (BN ProSpec; Siemens Diagnostica). | Cardiovascular risk scores To estimate the risk of CVD, three risk scoring systems were utilized. The Framingham Risk Score evaluated the 10-year risk for developing coronary heart disease, cerebrovascular events, peripheral artery disease, or heart failure based on age, gender, smoking, total cholesterol, HDL cholesterol, systolic blood pressure, hypertension medication, and diabetes.21 The SCORE estimated the 10-year risk for fatal CVD based on age, gender, smoking, total cholesterol, and systolic blood pressure.22 The FINRISK calculator estimated the 10-year risk for myocardial infarction or serious transient ischemic attack based on gender, age, smoking, total cholesterol, HDL cholesterol, systolic blood pressure, diabetes, and family history of myocardial infarction.23 | Prediction of fatty liver To predict fatty liver, the fatty liver index (FLI) was calculated.25 The index ranges from 0 to 100, and FLI ≥60 indicates fatty liver, while FLI <30 rules it out.
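The FLI formula itself did not survive the text extraction above. The sketch below uses the widely used Bedogni et al. (2006) definition, based on triglycerides, BMI, gamma-glutamyltransferase (GGT) and waist circumference, which matches the stated 0-100 range and the 30/60 cut-offs; it is an assumption about the exact expression the authors applied, and the input values in the example are hypothetical.

import math

def fatty_liver_index(tg_mmol_l, bmi_kg_m2, ggt_u_l, waist_cm):
    # Standard fatty liver index; triglycerides are converted from mmol/L to mg/dL
    tg_mg_dl = tg_mmol_l * 88.57
    z = (0.953 * math.log(tg_mg_dl) + 0.139 * bmi_kg_m2
         + 0.718 * math.log(ggt_u_l) + 0.053 * waist_cm - 15.745)
    return 100.0 * math.exp(z) / (1.0 + math.exp(z))

# Hypothetical example: TG 1.26 mmol/L, BMI 29.0 kg/m2, GGT 30 U/L, waist 94.1 cm
print(round(fatty_liver_index(1.26, 29.0, 30.0, 94.1), 1))   # roughly 55, i.e. in the indeterminate 30-60 range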
| Evaluating confounding factors To assess known confounding factors potentially associated with both GDM and later metabolic and cardiovascular health, we collected data on parity, prevalence of polycystic ovary syndrome (PCOS),26 smoking, alcohol use, physical activity,27,28 and level of education.29 Data on parity were gathered from the postal questionnaire and the FMBR. PCOS, smoking, alcohol consumption, physical activity, and education level were self-reported in the questionnaire. For smoking, four groups were formed: never-smoker, former smoker (quit more than 6 months ago), former smoker (quit less than 6 months ago), and current smoker. Alcohol consumption was categorized as non-user, light user, moderate user, or heavy user based on grams of alcohol consumed per day.26 Physical activity during leisure time was categorized into four groups: (1) inactive, activities consisting of reading, watching TV, and chores that do not cause physical exertion; (2) lightly active, some physical activity at least 4 h a week including walking, cycling, hunting, fishing, and light gardening; (3) active, exercise including running, jogging, skiing, swimming, ball games, and heavy gardening for at least 2 h a week; and (4) very active, competitive sports several times a week, including running, swimming, skiing, ball games, and other high-intensity sports. The level of education was categorized as basic, secondary, or tertiary. Metabolic syndrome was defined according to the International Diabetes Federation criteria as ethnic-specific central adiposity with two or more of the following factors: elevated triglyceride concentrations (≥1.7 mmol/L or specific treatment for this lipid abnormality), reduced concentrations of HDL cholesterol (<1.29 mmol/L), elevated blood pressure (systolic ≥130 mmHg or diastolic ≥85 mmHg or treatment of previously diagnosed hypertension), and elevated fasting PG (concentration ≥5.6 mmol/L) or previously diagnosed type 2 diabetes.24 | Statistical analyses Baseline demographic and lifestyle characteristics, as well as anthropometric measurements between the cohorts, were analyzed using the Pearson Chi-Square test for categorical variables and the Mann-Whitney U test for continuous variables. Two-sided p-values less than 0.05 were considered statistically significant. The crude and adjusted associations between GDM and continuous outcome variables were analyzed using multiple linear regression analysis or logistic regression analysis. When constructing the regression model, all data were log-transformed to obtain normality and homogeneity of variance prior to analysis. The association between GDM and categorical variables was analyzed using logistic regression analysis. The adjusted associations were estimated in three steps: (1) adjusting for BMI; (2) for BMI and PCOS; and (3) for BMI, PCOS, parity, level of education, physical activity, smoking, and alcohol consumption. All statistical analyses were conducted using the R software package version 4.0.2 (R Foundation, Vienna, Austria).30 | RESULTS The study flow chart is shown in Figure 1, and the characteristics of the study population are presented in Table 1. Women in the GDM cohort had a higher mean parity (3.1 ± 2.4) than women in the control cohort (2.5 ± 1.6, p < 0.001). Women in the GDM cohort smoked less frequently, consumed less alcohol, and were more sedentary. However, there was no significant difference in the level of education or in the prevalence of PCOS diagnosis between the cohorts. The mean age at first delivery after GDM pregnancy was 34.0 ± 5.3 years, and the mean follow-up time from the first GDM pregnancy to the health examination was 11.9 ± 5.3 years. | Anthropometric analysis Women in the GDM cohort had a higher mean BMI, and obesity was significantly more prevalent than in the control cohort (Table 2). Only 28.3% of the GDM cohort had a normal weight, as did less than half of the control cohort. Women with a history of GDM had a higher mean fat percentage, higher mean body fat mass, larger mean visceral fat area, larger mean waist circumference, larger mean hip circumference, and higher mean waist-hip ratio. | Analysis of glucose metabolism Women in the GDM cohort exhibited significantly higher mean PG and mean serum insulin concentrations at each time point of the OGTT (Figure 2; Table 3). When adjusted for confounding factors (BMI, PCOS, parity, level of education, physical activity, smoking, and alcohol consumption), all time points except mean fasting and mean 30-min insulin remained significantly higher.
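As an illustration of the stepwise adjustment applied in these analyses, a fully adjusted logistic regression of a binary outcome on GDM status could be set up as in the sketch below. The variable names and data file are hypothetical, and Python's statsmodels is used here in place of the R workflow reported by the authors.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per woman: outcome mets (0/1), exposure gdm (0/1),
# and the covariates used in the fully adjusted model (Model 3)
df = pd.read_csv("nfbc1966_age46.csv")

model3 = smf.logit(
    "mets ~ gdm + bmi + pcos + parity + C(education) + C(activity) + C(smoking) + C(alcohol)",
    data=df,
).fit()
print(model3.summary())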
In the GDM cohort, both mean HOMA2-IR and mean HOMA2-β indices were significantly higher than in the control cohort, indicating the development of IR and compensatory hyperinsulinemia, respectively. Similarly, the mean Matsuda index was significantly lower in women in the GDM cohort. Prediabetic conditions (impaired fasting glucose, impaired glucose tolerance), as well as T2DM, were detected more frequently in the OGTT screening in the GDM cohort than in the control cohort. Moreover, 18.9% of the women in the GDM cohort reported T2DM, compared to only 2.5% in the control cohort. Overall, only 58.3% of the women in the GDM cohort had normal glucose tolerance at the age of 46 years, compared to 85.8% in the control cohort. | Assessment of lipid and cardiovascular risk profiles We assessed the risk of CVDs and metabolic syndrome using lipid profiles, clinical examinations, and risk score calculations. Dyslipidemia was more common in women with previous GDM (Table 4). Their mean HDL concentration was lower, their mean triglyceride concentration was higher, and cholesterol-lowering medication was used more often than in the control cohort. The mean low-density lipoprotein concentration did not differ between the study cohorts, and no difference was observed between them in brachial blood pressure or the use of hypertensive medication. The mean FLI was significantly higher in the GDM cohort than in the control cohort. Correspondingly, metabolic syndrome was more prevalent in the GDM cohort, with 42.6% of women meeting the criteria for metabolic syndrome compared to 21.9% in the control cohort (Table 4). There were no differences in the mean hs-CRP concentration or in the mean carotid artery intima media thickness (CIMT) between the cohorts. To estimate the risk of CVD or fatal events, three risk scoring calculators were utilized. Both the Framingham and FINRISK risk calculators estimated a higher risk for women in the GDM cohort than in the control cohort (Table 4). However, the SCORE calculator found no significant difference in the 10-year risk of fatal CVD between the cohorts. | DISCUSSION This prospective cohort study shows that women with a history of GDM exhibit an accumulation of risk factors for CVD by the age of 46, resulting in a significantly higher risk of cardiovascular events later in life. Women in the GDM cohort were more overweight with central obesity, were more insulin resistant, were more frequently diagnosed with T2DM and MetS, and had higher FLIs, even after adjusting for multiple confounding factors. The risk for CVD is multifactorial, and GDM has been shown to be a contributing factor.14,31,32 However, it remains unclear whether the risk is caused by underlying dysfunctional glucose metabolism or by the accumulation of several CVD risk factors. A recent systematic review and meta-analysis concluded that GDM independently doubles the risk of future CVD, irrespective of T2DM, and that this heightened risk becomes evident within the first decade after pregnancy.16 In this study, 17.9% in the GDM cohort and 10.2% in the control cohort had prediabetes at 46 years of age, indicating a substantial risk of developing T2DM later in life. We previously published a cohort study investigating the development of T2DM after GDM, and showed that 50.4% of the participants developed T2DM during the 23-year follow-up, and the incidence remained linear until the end of the study.13 In other studies, the incidence of T2DM after GDM has varied from 3% to 70% depending on the ethnicity of the study population and the follow-up time.12,33 The prevalence of MetS in the present study was 42.6% in women with prior GDM and 21.9% in non-GDM women, which is in line with a Danish study (38.8% vs 13.4%),6 although other studies have reported lower prevalence (19.32%-25.3% vs 6.52%-10.0%).8,34,35
Notably, MetS has been associated with a two-fold increased risk of CVD, CVD mortality, myocardial infarction, and stroke.36 Non-alcoholic fatty liver disease is closely linked to MetS, and the FLI has been reported as highest in individuals with an increased risk for T2DM.37 IR impairs the ability of insulin to prevent lipolysis in fat cells, leading to increased release of free fatty acids (FFA) and FFA efflux to the liver.38,39 This leads to systemic metabolic distress and endothelial dysfunction, predisposing individuals to CVD.40,41 In our study, FLI was almost three times higher in the GDM group, in line with previous studies, indicating a strong link between fatty liver, IR, and GDM-related inflammation.37 The higher FLI in GDM women cannot be explained by alcohol use, as they consumed less alcohol than non-GDM women. Hence, the association of IR with previous GDM seems to be an important factor in the development of fatty liver. Low-grade inflammation is a known risk factor for CVD, and hs-CRP serves as both a marker and an active contributor to its pathogenesis.42 Even moderately elevated levels over 3 mg/L have been found to be associated with an increased CVD risk.43 In contrast to previous studies,44,45 we found no difference in hs-CRP levels between the two cohorts. Similarly, no significant difference in mean blood pressure or CIMT between the two cohorts was detected, although other studies have reported an almost two-fold risk of developing hypertension after GDM.10 In line with our study, previous studies have not found differences in vascular function46 or in CIMT47,48 between women with and without a history of GDM. Even though CIMT is considered an early marker of subclinical CVD, the relatively young study population may explain the absence of differences. Cardiovascular risk calculators are widely used to assess the risk of future CVD and to determine the need for medical intervention. Both the FINRISK and Framingham scores indicated a higher risk of CVD and fatal events in the GDM cohort compared to the controls, whereas the SCORE did not. This is probably due to differences in the underlying algorithms; FINRISK and Framingham incorporate diabetes, whereas SCORE does not. Our study cohorts did not present clinically significant differences in conventionally used risk factors, such as total cholesterol and blood pressure. Therefore, we suggest that when estimating CVD risk in women with a history of GDM, glucose metabolism should be incorporated into the analysis. Currently, the Finnish Current Care Guideline for GDM recommends that all women who receive medical treatment for GDM during pregnancy (metformin or insulin), and who therefore have a high risk of T2DM, undergo a follow-up OGTT at 6-12 weeks after delivery.17 Women who have their GDM treated with diet only are recommended to have an OGTT 1 year after delivery. Thereafter, a follow-up including measurements of BMI, blood pressure, fasting glucose, and glycosylated hemoglobin A1c, as well as an OGTT and lifestyle guidance, should be carried out every 1-3 years. If the first postpartum OGTT is normal, a 3-year interval is considered sufficient. The diagnosis of GDM is also recorded in permanent health information records. In light of the findings presented here, the current national guideline is sufficient to identify individuals with impaired glucose metabolism, provided that the guideline's implementation is successful. However, supplementing the current follow-up protocol with an analysis of lipids and FLI, and perhaps a risk score calculation, would also elucidate the risk of CVD. The strengths of this study include its prospective population-based cohort design and clinical examinations, including OGTT and carotid artery assessment by ultrasound. Furthermore, both the questionnaire study and the clinical examinations had notably high participation rates.
It has not been investigated whether the women who chose not to participate in this study differed from the women who did. However, research indicates that participation in clinical trials is influenced by various individual and structural factors.49 Some reported factors hindering participation include being a racial minority, having a lower socioeconomic status, harboring mistrust in academic institutions or doctors, fear of negative impact, poor access to care, or living in a rural area. These same characteristics may also be associated with poorer health and lifestyle choices, potentially influencing the outcomes of our study, albeit likely in a manner that strengthens our results. Another asset of the study was that GDM diagnoses were collected from reliable data sources, such as the FMBR and HDR, with ICD codes set by physicians. The study participants were ethnically homogeneous and from the same geographic area, which reduced intersubject variation but may limit the generalizability of our results. The main limitation of the study is that the majority of participants underwent risk-based screening for GDM, as most of them gave birth prior to 2008, before the more universal GDM screening was introduced. Therefore, the incidence of GDM may be underestimated. On the other hand, the introduction of comprehensive screening led to a significant increase in the number of women diagnosed with GDM, who were more often primiparous and had lower BMI.50 It is possible that these women had a lower risk of CVD compared to the women identified by risk-based screening. Moreover, the study was conducted in midlife, when the incidence of CVD is typically low in the female population. Therefore, the risk of CVD was estimated based on risk score calculations and not on cardiovascular events. | CONCLUSION Women with previous GDM exhibited multiple unfavorable metabolic alterations and risk factors for CVD at 46 years of age, placing them at a higher risk for future CVD events. Screening for T2DM is already recommended after GDM, and the assessment of CVD risk factors should be considered to identify individuals at risk, allowing for targeted interventions to prevent cardiovascular morbidity and mortality. Further analysis regarding the actual risk of CVD events in women after GDM is needed, and a follow-up study conducted a decade after menopause would be of interest.
AUTHOR CONTRIBUTIONS Juha S. Tapanainen and Juha Auvinen conceived and designed the study, obtained funding, and supervised the project. Juha Auvinen, Laure Morin-Papunen, Sirkka Keinänen-Kiukaanniemi, and Terhi Piltonen contributed to the original data collection. Kari Kaikkonen performed the carotid ultrasonography. Evi Bakiris, Jari Jokelainen, and Juha Auvinen performed the statistical analyses. All authors contributed to data analysis and interpretation. Evi Bakiris and Kaisu Luiro wrote the first draft of the manuscript; all authors contributed to revision and approved the final version of the manuscript. [Table captions: TABLE 1, Characteristics of the study cohorts; TABLE 3, Glucose metabolism at age 46 years in women with and without a history of GDM; TABLE 4, Cardiovascular risk factors in the study cohorts.]
2024-05-11T06:17:34.956Z
2024-05-09T00:00:00.000
{ "year": 2024, "sha1": "0141bb3339512371344b8a125b6642a263808319", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1111/aogs.14861", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7d26b23e59b8b134374a903f5d2c5795a41a72ca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11809150
pes2o/s2orc
v3-fos-license
Relative Role of Stars and Quasars in Cosmic Reionization We revisit the classical view that quasar contribution to the reionization of hydrogen is unimportant. When secondary ionization are taken into account, in many plausible scenarios for the formation and growth of supermassive black holes quasars contribute substantially or even dominantly at z>8, although their contribution generally falls below that of star-forming galaxies by z=6. Theoretical models that guide the design of the first generation of redshifted 21 cm experiments must, therefore, substantially account for the quasar contribution in order to be even qualitatively accurate. INTRODUCTION Modeling theoretically the properties of the sources that reheated and re-ionized the intergalactic medium at the end of the cosmic Dark Ages is difficult, because, at present, our knowledge of the nature of ionizing sources is highly incomplete. The conventional view, held over a decade since the original work of Shapiro & Giroux (1987); Shapiro et al. (1994); Giroux & Shapiro (1996a); Madau et al. (1999), is that the ionization at the highest redshifts is dominated by the UV radiation from star-forming galaxies, and that the non-thermal contribution from quasars builds up only later, at z < 4, to dominate the double ionization of helium (which requires photons of energies exceeding 54.4 eV). This conventional view, however, is incomplete. In particular, it ignores an important physical process of "secondary ionizations" by energetic ionizing photons. It is a well-known physical fact that an ionizing photon with the energy E in excess of ∼ 100 eV is capable of ionizing more than a single atom -the excess photon energy of E − 13.6 eV is deposited in the electron, which is capable of ionizing one or more additional atoms in its vicinity (Shull & van Steenberg 1985;Valdés & Ferrara 2008). These secondary ionizations can substantially increase the relative contribution of quasars to the reionization of the universe. For example, a 1 keV X-ray photon can ionize up to ∼ 25 hydrogen atoms. While this important effect has been included in many of the prior work on hydrogen reionization (Giroux & Shapiro 1996b;Oh 2001;Venkatesan et al. 2001;Ricotti et al. 2002;Machacek et al. 2003;Ricotti & Ostriker 2004;Dijkstra et al. 2004a;Madau et al. 2004;Furlanetto 2006;Zaroubi et al. 2007;Pelupessy et al. 2007;Salvaterra et al. 2007;Cohn & Chang 2007;Kramer & Haiman 2008;Shull & Venkatesan 2008;Warszawski et al. 2008;Schleicher et al. 2008;Santos et al. 2008;Ripamonti et al. 2008), it may be worth re-assessing the canonical view of the sub-dominance of the quasar population as the source of hydrogen reionization in light of recent improvements in the values of cosmological parameters and new developments in our understanding of the growth of supermassive black holes at high redshifts. There are two possible approaches to modeling reionization. In the first one, a model for the emission, propagation, and absorption of ionizing radiation from various categories of sources is constructed. Ultimately, such a model must involve a cosmological numerical simulation that resolves Lyman limit systems with sub-kpc resolution and clustering of galaxies and quasars on ∼ 100 Mpc scales. Such a large uniform dynamic range is not yet feasible in modern cosmological simulations, so this type of modeling inevitably suffers from the unknown systematic biases, but, at the same time, it offers the most complete model of cosmic reionization. 
A second, much simpler (and, therefore, much more limited) approach consists in ignoring the absorption and propagation of ionizing photons, and restricting a theoretical model to only counting the number of ionizations per atom. A major (but not the sole) limitation of this approach is that the criterion for reionization -- the required number of ionizations per atom -- is unknown. While reasonable estimates for this number can be constructed (e.g. Gnedin 2008, and discussion there), it, at the very least, must exceed unity.5 In this brief paper, we adopt the second, simple approach, to couple theoretical expectations for both stellar sources and quasars, and to consider their relative contribution to the total number of ionizations per hydrogen atom. In line with our limited goals, we restrict our attention only to two main astrophysical types of ionizing sources, although we remain acutely aware that other, more exotic sources -- including the energetic photons and electrons from dark matter annihilation -- can provide a substantial or even dominant contribution to the total ionization budget (e.g. Belikov & Hooper 2009, and references therein). METHOD We evolve the population of two distinct types of seed massive black holes (MBHs): either "small seeds", derived from Population III remnants (Madau & Rees 2001; Volonteri et al. 2003), or "large seeds", derived from gasdynamical collapse in metal-free galaxies (Begelman et al. 2006; Lodato & Natarajan 2006). We adopt one single formation scenario for each realization of the Universe we consider; therefore each model contains only "small seeds" or "large seeds". In the Population III remnants model, seed MBHs form with masses m_seed ~ a few x 10^2 M_sun (see Fig. 1, top left panel), in haloes collapsing at z > 15 from rare 3.5-sigma peaks of the primordial density field (Volonteri et al. 2003). In the "large seeds" scenario, massive seeds with M ~ 10^4 M_sun can form at high redshift, when the intergalactic medium has not been significantly enriched by metals. Here we refer to Begelman et al. (2006) and Lodato & Natarajan (2006) for more details of the physical MBH formation model. Seeds form in gravitationally unstable pre-galactic disks with primordial composition, in halos with virial temperature ~10^4 K, cooled mainly by atomic hydrogen. The stability of gaseous disks depends on two parameters, the halo spin parameter lambda_spin and the fraction of baryonic matter that ends up in the disk, f_d. A maximum spin parameter lambda_spin,max exists for which a disk is unstable, as a function of the fraction of baryons forming the disk, i.e., for every f_d, disks are stable for lambda_spin > lambda_spin,max. We here assume (f_d, lambda_spin,max) = (0.2, 0.2). The mass of the forming MBH seeds is set by the joint characteristics of the gas flow and of the evolution of the collapsed gas (Begelman et al. 2007). The mass function of MBH seeds peaks at 10^4 M_sun (see Fig. 1, top right panel). We study MBH evolution within dark matter halos via a Monte-Carlo algorithm based on the extended Press-Schechter formalism. The population of MBHs evolves along with their hosts according to a "merger driven scenario", as described in Volonteri et al. (2003). An accretion episode is assumed to occur as a consequence of every major merger (mass ratio larger than 1:10) event. During an accretion episode, each MBH accretes an amount of mass, Delta M = 9 x 10^7 M_sun (sigma/200 km s^-1)^4, that scales with the M_BH-sigma_* relation of its hosts (see Volonteri & Natarajan 2009).
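A small worked example of the accretion prescription just quoted, Delta M = 9 x 10^7 M_sun (sigma/200 km s^-1)^4, is sketched below; the velocity dispersions are illustrative values only.

def delta_m_solar(sigma_km_s):
    # Mass accreted in one merger-triggered episode, scaling with the M_BH-sigma relation of the host
    return 9.0e7 * (sigma_km_s / 200.0) ** 4

for sigma in (30.0, 60.0, 100.0, 200.0):
    print(f"sigma = {sigma:5.1f} km/s  ->  Delta M ~ {delta_m_solar(sigma):.2e} M_sun")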
Accretion starts after a dynamical timescale and lasts until the MBH has accreted Delta M. We model the accretion rate onto MBHs in two ways. As a baseline model, we assume that accretion proceeds at the Eddington rate (see also Salvaterra et al. 2007). In a second case, we model the accretion rate during the active phase from the extrapolation of the empirical distribution of Eddington ratios, lambda = log(L_bol/L_Edd), found in Merloni & Heinz (2008). We adopt a fitting function of the Eddington ratio distribution as a function of MBH mass and redshift (Merloni 2009). We are here extrapolating such a model to much higher redshifts and lower MBH masses than originally intended. We caution readers against taking the results of this model at face value. The main goal of our exercise is to probe possible sensible ranges for the accretion rates onto MBHs. At each timestep of our Monte Carlo simulation of halo merger trees, we calculate the average energy density emitted during each timestep by the MBH population in terms of Delta rho_acc, the total mass density (in comoving units) accreted by MBHs within the timestep, and epsilon, the average radiative efficiency, which we assume depends solely on MBH spins for radiatively efficient accretors. We evolve MBH spins according to two simple models: coherent accretion (Volonteri et al. 2005) and chaotic accretion (King & Pringle 2007). These two models lead, respectively, to rapidly and more slowly spinning MBHs. [Fig. 1 caption: Top to bottom: 18 < z < 20; 16 < z < 18; 14 < z < 16; 12 < z < 14; 10 < z < 12; 8 < z < 10; 6 < z < 8. Left panels: small seeds. Right panels: large seeds. Solid histogram: Eddington accretion; dot-dashed histogram: Merloni-Heinz accretion.] The energy density, U, depends on both the accretion rate (in Eddington units, f_Edd = 10^lambda) and epsilon. Assuming constant f_Edd and epsilon, U at a given time t can be written in terms of M_seed, the average mass of MBH seeds in each scenario, and tau = 0.45 Gyr, the Eddington timescale. We can see that if t << tau: U is proportional to f_Edd (1 - epsilon). Within our evolutionary scheme epsilon is determined for each MBH, while we have used a single, average epsilon in calculating U. Since it takes a few Myr for MBHs in the coherent accretion model to spin up to large spins (i.e. for the radiative efficiency to increase from 0.06, Schwarzschild hole, to 0.2, spin = 0.9), this scheme slightly overestimates the radiative output from MBHs in the 'high spin' case at z > 11. We further assume that a fixed fraction f_UV of the bolometric power radiated by high-redshift quasars is emitted as hydrogen-ionizing photons. The number of ionizing photons scales as f_UV/E_gamma, where E_gamma is the mean photon energy (see section 2). We refer the reader to Madau et al. (2004) for a thorough discussion of the quasar spectra; in the following we assume, conservatively, that f_UV = 0.2. The total comoving energy density of ionizing radiation emitted by the growing MBH population is then obtained by summing over the timesteps between our starting redshift (z_max = 20, timestep 0) and the timestep j(z) corresponding to the redshift of interest, z. We have also implemented a more pessimistic case for MBH accretion, loosely based on Milosavljevic et al. (2008), where the accretion rate has been fixed at 30% of the Eddington accretion rate. In this scenario, the yield in ionizing radiation is lower than for the case of 100% Eddington rate, but ionization histories in this scenario fall within the range spanned by our other models; they thus do not change any of our conclusions and we do not consider them further in this paper.
Our models are consistent with the constraints from the soft X-ray background (Dijkstra et al. 2004b). Our predicted population of high-redshift AGN would account for almost 5% of the measured (0.5-8 keV) background, or ~25% of the unresolved one (Salvaterra et al. 2005, 2007). Models are also consistent with the bolometric luminosity function of quasars at z > 4 (Hopkins et al. 2007). "Large seeds" are preferred as our fiducial model, based on a better agreement with the bolometric luminosity function at z ~ 4; however, we note that in this paper we probe much higher redshifts than those probed by the study of Hopkins et al. (2007), hence we consider the match with the luminosity function at lower redshifts as a weak constraint. For the stellar contribution to the total ionizing background we use the extrapolation of the observed UV luminosity functions of high-redshift galaxies (Bouwens et al. 2007, 2008) and the assumed value for the relative (to the amount of escaped UV light at 1000 Å) escape fraction of ionizing radiation. The complete details of the methodology are described in Gnedin (2008). Here we only briefly repeat that the total mass-to-light ratios of higher-redshift galaxies are computed in a given cosmology by matching the observed spatial abundance of galaxies of a given luminosity to the theoretically computed abundance of halos of a given mass (and, optionally, an additional factor that accounts for a "bursty" star formation rate in high-redshift galaxies can be incorporated in eq. [4]; reasonable values for that factor have an insignificant impact on our results). The mass-to-light ratio thus obtained can be extrapolated to higher redshifts. Since the mass-to-light ratio is a weak function of redshift, the uncertainty of such an extrapolation does not dominate the final uncertainty of our estimate for the ionizing emissivity from galaxies. Instead, the final uncertainty is dominated by the uncertainty (both observational and theoretical) on the adopted value of the relative escape fraction for ionizing radiation (Gnedin 2008). The combined (stellar plus quasar) contribution to reionization can then be estimated in terms of N_gamma/a, the number of ionizations per atom, where n_a = (1 - 0.75 Y_p) n_b ~ 2.0 x 10^-7 cm^-3 is the comoving number density of atoms of hydrogen or helium (we assume that most of the helium is only singly ionized during hydrogen ionization), and 14.4 eV is the mean ionization energy per atom. The first term in this equation accounts for the contribution from stars, the second one includes primary ionizations by ionizing photons from quasars with the mean photon energy E_gamma, and the last term accounts for secondary ionizations from energetic ionizing photons. For gas uniformly ionized to the ionization fraction x, the fraction of radiation energy density f_SI going into secondary ionizations has been computed by Shull & van Steenberg (1985), and their results can be conveniently fitted by a simple but accurate formula (Ricotti et al. 2002) (equation 6). In reality, of course, the universe is not uniformly ionized, so the quantity x in equation (5) is an effective value x_eff, such that N_gamma/a(x_eff) equals the average of N_gamma/a(x), where the average is mass-weighted. Thus, x_eff is not a mass- or volume-weighted average of the cosmic ionization fraction. However, x_eff = 0 for the fully neutral and x_eff = 1 for the fully ionized universe. The quantity N_gamma/a,* has been computed in Gnedin (2008) and is not discussed here.
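The following sketch illustrates the bookkeeping of primary plus secondary ionizations per photon discussed above. Since equation (6) is not reproduced in the extracted text, the fit used here is the commonly quoted approximation to the Shull & van Steenberg (1985) results (with coefficients as given by Ricotti et al. 2002); it is assumed, not guaranteed, to match the paper's expression.

def f_si(x):
    # Approximate fraction of a fast photoelectron's energy going into secondary ionizations
    # of hydrogen in gas with ionized fraction x (fit to Shull & van Steenberg 1985)
    return 0.3908 * (1.0 - x**0.4092) ** 1.7592

def ionizations_per_photon(e_gamma_ev, x):
    # One primary ionization plus secondary ionizations from the ejected photoelectron
    if e_gamma_ev < 13.6:
        return 0.0
    return 1.0 + f_si(x) * (e_gamma_ev - 13.6) / 13.6

print(ionizations_per_photon(1000.0, 0.0))   # a ~1 keV photon in neutral gas ionizes a few tens of atoms
print(ionizations_per_photon(300.0, 0.1))    # the fiducial quasar photon energy adopted in the text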
The mean energy of ionizing photons from quasars, E_gamma, remains a parameter of our model. The exact value of this parameter obviously depends on the spectral shape of the quasar energy distribution. If we assume a classic multicolor disk spectrum up to kT_max ~ 1 keV (M_BH/M_sun)^(-1/4) (Shakura & Sunyaev 1973), and a nonthermal power-law component with spectral slope L_nu proportional to nu^(-alpha), with alpha ~ 1 at higher energies, we find E_gamma ~ 300 eV for M_BH = 10^3 M_sun. The spectrum is harder/softer for smaller/larger MBH masses. As a fiducial value, we adopt E_gamma = 300 eV (cf. Fig. 1), and we investigate the sensitivity of our results to the precise value of this parameter in the next section. Equation (5) cannot be evaluated without an equation for x_eff as a function of time. A complete computation of x_eff(t) requires a sophisticated three-dimensional modeling of the transfer of ionizing radiation throughout the inhomogeneous gas density distribution in the universe. Since such modeling is well beyond the scope of this paper, we instead introduce an ansatz for x_eff(t) (equation 7), where f_n->x is a parameter. The motivation of this ansatz follows from the realization that if every atom in the universe is ionized exactly once, then in the beginning of reionization, while x_eff is sufficiently small, x_eff ~ N_gamma/a.6 The factor f_n->x therefore accounts for the loss of ionizing photons to recombinations and for the more complicated dependence of x_eff on x at the later stages of reionization. This factor cannot be too small (or the universe would never be reionized), and any value for f_n->x above about 0.5 (half of the photons lost to recombination) does not affect our conclusions. Therefore, in the rest of this paper we adopt f_n->x = 0.75 as our fiducial value. In reality, f_n->x must be a function of time, but since it is not likely to be much lower than 1, the exact time dependence of f_n->x is unimportant at the level of precision of our approximations (which is dominated by the uncertainties on the escape fraction from galaxies and the lack of knowledge of the specific parameters of our quasar model). [FIG. 2 caption: The total ionizations-to-atom ratio as a function of redshift for our fiducial model ("large seeds, high spins" in WMAP-5UP cosmology) for 3 values of the mean energy of ionizing photons from quasars E_gamma. The adopted reionization criterion, 1 < N_gamma/a < 3 at z = 6, is shown as a thick black segment with error bars.] The first cosmological model ("WMAP5") is based on the 5-year WMAP results (2009). The second model ("WMAP-5UP") is obtained from the first one by increasing both the amplitude of the density fluctuations sigma_8 and the scalar spectral index n_S upward by 1 standard deviation. The relevant values of cosmological parameters for these two models are listed in Table 1. Figure 2 demonstrates the sensitivity of the total ionizations-to-atom ratio N_gamma/a to the assumed value for the mean energy of ionizing photons from quasars, E_gamma. Because ionizations from the quasar population are dominated by secondary ionizations (which are independent of E_gamma) for any plausible value of E_gamma, this parameter has only a mild effect on our results. Figures 3 and 4 now present our main result: the computed N_gamma/a as a function of redshift for two cosmologies and various parameters of the quasar model. We adopt the criterion for the universe to be reionized at z = 6 (1 < N_gamma/a < 3) from Gnedin (2008), where it is discussed and justified. Clearly, the WMAP5 model (with the most likely values of sigma_8 and n_S) is somewhat short of the reionization requirement.
As has been discussed by Gnedin (2008), this is not necessarily a serious problem, since the likely uncertainty on our extremely simple model is a factor of 2 or so. Nevertheless, slightly higher values for sigma_8 and/or n_S provide a wider breathing space for reionization modeling. In order to illustrate a plausible estimate for the uncertainties due to observational errors on the escape fraction (cf. Shapley et al. 2006), high-redshift luminosity functions (Bouwens et al. 2007), etc., we show in Figure 5 the "large seeds - high spin" quasar model in the WMAP-5UP cosmology (which we adopt as our "fiducial model") together with the estimated errors. Notice that, formally, the Bouwens et al. (2007) galaxy luminosity function at z > 6 should be considered as an upper limit; we emphasize this by having the hatched area in Fig. 5 unclosed from below. We also show the effective (but highly approximate) reionization history. [FIG. 3 caption: The total ionizations-to-atom ratio as a function of redshift for different models for the quasar contribution (red and blue lines) as well as the contribution from the stars alone (black solid line) for the Eddington accretion model for two chosen cosmologies. The adopted reionization criterion, 1 < N_gamma/a < 3 at z = 6, is shown as a thick black segment with error bars.] [FIG. 5 caption: The total ionizations-to-atom ratio as a function of redshift for our fiducial model ("large seeds, high spins" in WMAP-5UP cosmology; red lines). The hatched area shows our estimate of the observational uncertainty (for a given theoretical model). The blue line shows the effective ionization fraction x_eff from equation (7). Dotted, short-dashed, and long-dashed red lines show the contributions from primary and secondary ionization from quasars and from stars, respectively.] DISCUSSION Since our quasar models cover a wide range of physically plausible possibilities, we can draw some general conclusions from Figs. 3 and 4. In particular, we notice that (i) by z ~ 6 the quasar contribution to the total ionization budget becomes mostly sub-dominant, not exceeding 50% in the best case, and likely falling below 20% or so. Nevertheless, at z ~ 8 in all our models quasars contribute from over 50% up to 90% of ionizations. This conclusion is particularly important to theoretical modeling and future observational measurements of the expected redshifted 21 cm emission from neutral hydrogen in the reionization era. While most of the currently planned pathfinder experiments rely, in part, on the existing simulations for critical design decisions, the predictions from those simulations -- none of which include a quasar contribution -- become inaccurate at z > 8. It is, therefore, important to keep in mind that much theoretical work still needs to be done before even the basic observables for the incoming 21 cm experiments (like the fluctuation power spectrum) can be computed theoretically with ~20% precision (within a given cosmological model). Most of the contribution to the quasar ionizing budget is dominated by MBHs with mass < 10^6 M_sun. Such small, low-luminosity MBHs do not contribute to the bright end of the luminosity function of quasars, and are therefore difficult to account for from simple extrapolations of the quasar luminosity function. These small holes are not hosted in extremely massive galaxies residing in the highest density peaks (5 to 6 sigma peaks), but are instead found in more common, "normal" systems, ~3 sigma peaks.
Future generations of space-based telescopes, such as JWST and IXO, are likely to detect and constrain the evolution of the population of accreting massive black holes at early times (z < 10).
2009-08-21T16:27:26.000Z
2009-05-01T00:00:00.000
{ "year": 2009, "sha1": "3afdfa25edce0e87d4a150b06cbc7f3295b113b3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0905.0144", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3afdfa25edce0e87d4a150b06cbc7f3295b113b3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
247245779
pes2o/s2orc
v3-fos-license
The Influence of Modifiable Factors on Breast and Prostate Cancer Risk and Disease Progression Breast and prostate cancers are among the most commonly diagnosed cancers worldwide, and together represented almost 20% of all new cancer diagnoses in 2020. For both cancers, the primary treatment options are surgical resection and sex hormone deprivation therapy, highlighting the initial dependence of these malignancies on the activity of both endogenous and exogenous hormones. Cancer cell phenotype and patient prognosis is not only determined by the collection of specific gene mutations, but through the interaction and influence of a wide range of different local and systemic components. While genetic risk factors that contribute to the development of these cancers are well understood, increasing epidemiological evidence link modifiable lifestyle factors such as physical exercise, diet and weight management, to drivers of disease progression such as inflammation, transcriptional activity, and altered biochemical signaling pathways. As a result of this significant impact, it is estimated that up to 50% of cancer cases in developed countries could be prevented with changes to lifestyle and environmental factors. While epidemiological studies of modifiable risk factors and research of the biological mechanisms exist mostly independently, this review will discuss how advances in our understanding of the metabolic, protein and transcriptional pathways altered by modifiable lifestyle factors impact cancer cell physiology to influence breast and prostate cancer risk and prognosis. INTRODUCTION Breast and prostate cancers are among the most commonly diagnosed cancers worldwide, representing 19% of all new cancer diagnoses and 10.7% of cancer-related deaths in 2020 alone (Sung et al., 2021). The pathophysiology of these cancers relies on the complex interplay and exploitation of various biological systems, with systems biology techniques, such as 'omic' approaches, now being employed to understand their pathogenesis (Du and Elemento, 2015). Past research in breast and prostate cancer predominantly focused on aberrations in the human genome driving disease development, but it is now increasingly apparent that this represents only one piece of the complex cancer puzzle (Wang et al., 2018;Wu et al., 2018). While genome and other non-modifiable factors such as age, ethnicity and family history contribute to an individual's disease risk (Nindrea et al., 2017), factors associated with lifestyle choices and environmental influences are becoming increasingly recognized as additional pieces that complete this puzzle (Stein and Colditz, 2004) (Figure 1). Breast and prostate cancer risk of immigrants originally from low disease prevalence countries, increases to reflect that of the destination country (Shimizu et al., 1991;Kolonel et al., 2004), confirming the importance of external factors in the etiology of the diseases. A study of twins demonstrated that heritable factors contributed to 42 and 27% of an individual's risk for prostate and breast cancer, respectively, (Lichtenstein et al., 2000), further demonstrating the major contribution of external factors to disease risk. In addition to being significant risk factors, external factors also influence disease progression post-diagnosis (Davies et al., 2011;Cannioto et al., 2021). 
As a result, lifestyle changes are being encouraged by health professionals as strategies for cancer prevention, and are thought to have the potential to prevent up to 50% of all cancer cases in developed countries (Stein and Colditz, 2004). Despite current population data demonstrating the significant impact of these modifiable factors on disease progression, the mechanisms by which these external factors influence cell biology to impact cancer phenotype and disease progression are not well understood. This mini review will describe how these modifiable factors can affect cellular systems, including the epigenome, transcriptome, proteome and cellular metabolome, which ultimately determine cancer phenotype. EPIDEMIOLOGICAL EVIDENCE Modifiable risk factors encompass both lifestyle choices and environmental exposures. These include physical exercise, diet, weight management, tobacco intake, exposure to environmental pollutants and infections (Stein and Colditz, 2004). These factors can contribute to an individual's disease risk, recovery rate and likelihood of disease recurrence, with physical exercise, diet, and weight management being most relevant to breast and prostate cancer (Figure 1). Current epidemiological evidence highlights the positive effects of increasing physical exercise, a healthy diet and maintaining a healthy weight in the prevention and overall disease outcomes for breast cancer patients (Cannioto et al., 2021; Lubian Lopez et al., 2021). Interestingly, the impact of lifestyle interventions on prostate cancer risk has been inconsistent, with some studies demonstrating no effect, while others show decreased disease risk (Shephard, 2017; Sorial et al., 2019). While the contribution to disease risk is controversial, the consensus is that these interventions are beneficial in decreasing an individual's risk of mortality and improving overall outcomes (Kenfield et al., 2011). As a result, it is important to understand the mechanisms of how these modifiable factors can influence patient risk and disease progression to effectively implement these strategies in the clinic. The physiology behind the lifestyle interventions resulting in these outcomes is complex and multi-factorial, and the mechanisms often overlap with one another. Implementation of these lifestyle factors may modulate the impact of certain biological molecules, combat the chronic inflammatory state of tumors, decrease the expression and activity of pro-oncogenic genes and signaling pathways through epigenetic mechanisms, and improve regulation of oxidative stress to minimize oxidative damage (Figure 2). FIGURE 1 | Breast and prostate cancer etiology. The etiology of breast and prostate cancer relies on many pieces of a complex puzzle, where environmental influences and lifestyle choices, termed modifiable factors, may complete this puzzle. There are various modifiable factors that may contribute to cancer initiation, with physical exercise, diet, and weight management most relevant to breast and prostate cancer. METABOLIC AND HORMONAL INFLUENCE The response to lifestyle and environmental cues occurs initially at the metabolic and hormonal level, which can dynamically alter gene expression through epigenetic and transcriptional mechanisms (Wong et al., 2017).
While food consumption stimulates the release of hormones and metabolites such as insulin and insulin-like growth factor (IGF)-1 (Clemmons, 2012;Vernieri et al., 2016), overnutrition is linked to the perturbed activity of these hormones. The increased activity of insulin and IGF-1 result in the activation of oncogenic signaling pathways and subsequently increase proliferation and disease progression (Pollak, 2012). In addition, metabolic substrates derived from lipids, protein and carbohydrates can provide a constant supply of ATP and metabolic precursors for biochemical processes crucial for tumor progression, such as lipid membrane synthesis (Hanahan and Weinberg, 2011;Vernieri et al., 2016). Furthermore, there is a plethora of evidence to support the link between nutritional choices and gut microbiota composition, with low microbiota diversity associated with cancer (Plaza-Diaz et al., 2019;Wastyk et al., 2021). Multi-omic approaches have been used to link gut microbial dysbiosis with the advancement of breast and prostate cancer (Komorowski and Pezo, 2020;Liu et al., 2021). Evidence indicates that this may be due to the contribution of dysbiosisrelated metabolites in chronic inflammation, immune cell recruitment and cancer cell dissemination (Buchta Rosean et al., 2019;Lee et al., 2021). Using metagenomics, Liu and colleagues demonstrated that dysbiosis accelerated prostate cancer progression through upregulation of lysophosphatidylcholine acyltransferase 1 (LPCAT1), a key enzyme in the phospholipid remodeling pathway (Liu et al., 2021). In addition, the gut microbiome-associated metabolites may influence cancer progression indirectly by altering the breast microbiome through systemic effects (Costa et al., 2021). Extending upon this, nutritional metabolomics can provide detailed analyses of metabolites related to the consumption of certain foods, such as alcohol and animal fats, which can be predictive of breast and prostate cancer risk. For example, elevated lysophosphatidylcholines C17:0 and C18:0 levels have been associated with increased prostate cancer risk (Playdon et al., FIGURE 2 | Potential mechanisms of how modifying lifestyle factors can influence cancer phenotype. In general, a side effect of increased physical activity and a balanced diet is weight management and adipose tissue loss. The incorporation of these three modifiable lifestyle factors can result in various physiological effects, including a decrease in nutrient substrates, adipose tissue, proinflammatory processes, reactive oxygen species-mediated effects, and oncogenic signaling, as well as an increase in antioxidant defenses and microbiota diversity. At the patient level, this may explain the reduced risk of breast cancer, decreased progression of breast and prostate cancers, as well as increased survival and decreased disease recurrence that occurs with modifying these lifestyle factors. Frontiers in Physiology | www.frontiersin.org 4 March 2022 | Volume 13 | Article 840826 2017; Röhnisch et al., 2020). As nutrition has the potential to contribute to tumor growth through the discussed metabolic mechanisms, dietary interventions such as "short-term fasting" have been trailed and found to reduce blood glycemia, hyperinsulinemia and IGF-1 levels (Vernieri et al., 2016). Furthermore, participation in physical activity can influence hormone and metabolite levels, such as decreasing insulin and IGF-1 levels, thus reducing their oncogenic effects (Thomas et al., 2017). 
In addition to metabolic disruptions, ongoing overnutrition results in adipose tissue accumulation. Adipose tissue is known to be a source of estrogen production, particularly in postmenopausal women whose ovaries are no longer the major estrogen source (Hetemaki et al., 2021). Postmenopausal women with increased BMI or weight have an increased risk of developing hormone receptor positive breast cancer (Brown et al., 2017). Aromatase, a key enzyme involved in estrogen biosynthesis, is expressed in adipose tissue with increased BMI correlating with increased aromatase expression (Zhao et al., 2016). Estrogen has a demonstrated role in breast cancer initiation, proliferation and progression Xue et al., 2019), subsequently estrogen exposure has been strongly linked to the development of breast cancer, even in premenopausal women. In fact, there is a 5% increased risk of breast cancer correlated with each year younger at menarche and a 3.5% increase related to each year older at menopause due to the prolonged period of estrogen exposure (Ramakrishnan et al., 2002;Collaborative Group on Hormonal Factors in Breast, 2012). There are various factors that can promote the onset of menarche, with diet, physical activity and BMI being recognized as contributing external factors (Ramezani Tehrani et al., 2014). Diet has been closely linked to menarcheal age, with overnutrition and obesity correlated with decreased age, and undernutrition associated with an increased age to onset of menarche (Merzenich et al., 1993). While this correlation between diet, obesity and spermarche may also be evident in boys, the relationship is harder to determine given that it is more difficult to determine spermarche onset (Wagner et al., 2012;Deng et al., 2018). Research indicates that testosterone levels may not explain the potential relationship between obesity and spermarche, given that obesity has been associated with lower testosterone levels (Glass et al., 1977), however elevated leptin associated with increased adiposity has been highlighted as a potential mediator of pubertal age (Wagner et al., 2012). This may suggest that dietary choices as early as childhood, could contribute to an individual's breast and prostate cancer risk later in life. IMMUNE FUNCTION AND INFLAMMATION Adipose tissue, a major consequence of an unhealthy diet and a sedentary lifestyle, is comprised of adipocytes, adipose stem cells, endothelial cells, immune cells and fibroblasts. Adipose tissue can secrete a range of hormones, growth factors and cytokines, termed adipokines (Gilbert and Slingerland, 2013;Lenz et al., 2020). The balance of these factors is dependent on the composition of the adipose tissue, with the onset of obesity identified as a driver of adipose remodeling. This alters the size and composition of adipose tissue, with an increase in preadipocytes and a decrease in mature adipocytes (Picon-Ruiz et al., 2017). The hypertrophy and proliferation of adipose tissue that occurs with progressive weight gain eventually results in adipose tissue hypoxia, triggering hypoxia-inducible factor-1 (HIF1) transcriptional activity (Lee et al., 2014). Recent multiomic analysis has revealed that HIF1 transcriptional activity is dependent on its cofactor CDK8, which indirectly represses MYC target genes as an adaptive response to promote cell survival (Andrysik et al., 2021). 
In addition, HIF1 activity upregulates other genes, including vascular endothelial growth factor, which promotes angiogenesis and metastasis of breast and prostate cancer cells (Li et al., 2018;Melegh and Oltean, 2019). Increased HIF1 activity, and the predominantly preadipocyte phenotype, also increases leptin levels while decreasing adiponectin levels, propagating a proinflammatory environment (Gilbert and Slingerland, 2013). The imbalance of these hormones transforms the adipose tissue immune landscape, increasing the recruitment of various proinflammatory immune cells, such as macrophages, resulting in increased immune cell infiltration (Wu et al., 2019). These proinflammatory immune cells in conjunction with the preadipocytes, increase the secretion of inflammatory adipokines such as tumor necrosis factor alpha (TNF-α) and interleukin (IL)-1β, creating a chronic inflammatory condition associated with tumorigenesis (Gilbert and Slingerland, 2013;Picon-Ruiz et al., 2017). The preadipocyte phenotype discourages mature adipocyte differentiation, thus maintaining a proinflammatory state. However, this elevated immune cell mobilization and infiltration is not limited only to states of high adiposity and is typical of breast and prostate cancers Xu et al., 2021). Thus, lifestyle interventions such as physical exercise, may improve the inflammatory state of all patients (Khosravi et al., 2019). While the exact mechanisms are not fully understood, one hypothesis is that exercise reduces monocyte cytokine production (Khosravi et al., 2021). In addition to the impact of adipose tissue expansion through overnutrition, the uptake of certain nutrients, such as saturated fatty acids (SFAs), can also trigger inflammation. SFAs induce toll like receptor (TLR) activation, particularly TLR4 (Rogero and Calder, 2018), with activation of the TLR pathway resulting in increased activity of the transcription factor nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB), which is responsible for regulating over 100 proinflammatory genes (Pradere et al., 2014), further perpetuating a chronic inflammatory state. Therefore, nutrition choices and the accumulation of adipose tissue may influence the tumor microenvironment required for breast and prostate cancer growth and progression. By actively increasing levels of physical exercise and incorporating a heathier diet, this may decrease adipose-associated inflammation. In addition to the effects of decreased adipose tissue accumulation, partaking in physical exercise, particularly aerobic focused activity, has the capacity to improve immunity and reduce inflammation through the activation of β-adrenergic receptor (β-AR) signaling (Hong et al., 2014) of circulating catecholamines to the β-AR of immune cells, adenylyl cyclase is activated to produce cAMP and activate PKA. The functional consequences of activating this pathway are dependent on the immune cell subtype (Simpson et al., 2021), but a hypothesized mechanism is that exercise-induced activation of the β-AR signaling pathway diminishes the TNF proinflammatory signaling axis, although this relationship is not as strong in obese individuals (Hong et al., 2014). Furthermore, physical exercise has been linked to alterations of the lipid profile and cytokine levels, such that there is an increase in high-density-lipoprotein levels and IL10 levels, respectively. Modulation of these parameters is associated with decreased chronic inflammation (Koelwyn et al., 2015;Meneses-Echavez et al., 2016). 
A recent study has also used multi-omic and immune profiling to demonstrate striking benefits of a high-fermented-food diet. This diet increased gut microbiome diversity, as well as decreasing inflammatory markers, such as IL-6 and IL-10 (Wastyk et al., 2021). While this study was only performed in healthy adults, there have been some in vitro and in vivo studies highlighting the benefits of fermented foods in breast and prostate cancer, but these findings are yet to be confirmed in the clinic (Tasdemir and Sanlier, 2020). REGULATION OF OXIDATIVE STRESS-INDUCED DNA DAMAGE It is well established that the role of reactive oxygen species (ROS) is paradoxical, in that it has the potential to be beneficial and detrimental to the progression of tumors, depending on the balance of antioxidants (Aggarwal et al., 2019;Perillo et al., 2020). For simplicity, this review will only discuss the pro-tumorigenic impact of ROS. This notion of oxidative stress arises from inefficient clearance of excess free radicals, and is commonly associated with the initiation of cancers, as it can cause oxidative damage to lipids, proteins and DNA, contributing to genomic instability and mutation (Sharifi-Rad et al., 2020). This process can occur naturally with aging, from external environmental stressors, such ultraviolet radiation, and also from lifestyle factors, such as nutritional choices. During overnutrition, the uptake of carbohydrates, lipids and protein trigger the production of ROS, predominantly due to the excess supply of energy substrates for mitochondrial metabolism (McMurray et al., 2016;Saha et al., 2017). This continued state of overnutrition can result in mitochondrial dysfunction and further increase oxidative stress and oxidative stress-induced DNA damage. In addition to food consumption, there has also been a strong link between alcohol intake and breast and prostate cancer risk through the production of ROS species and acetaldehyde arising from alcohol metabolism (Dickerman et al., 2016;Wang et al., 2017). In a pre-malignant context, increased ROS levels provide the opportunity for driver somatic mutations to occur, which during malignancy can drive phenotypes such as cell proliferation (Perillo et al., 2020) and epithelial-mesenchymal transition (EMT) (Radisky et al., 2005) important for metastatic progression. In addition, multi-omics approaches have identified different cancers exhibit varied levels of ROS metabolism, and are beginning to investigate the use of a ROS index to measure cancer outcomes (Shen et al., 2020). Thus, the implementation of diet changes and weight management could influence the amount of oxidative stress and subsequently minimize the effects on cellular damage prior to and following the initiation of carcinogenesis. In addition to dietary modifications, research has indicated that participation in regular, and moderate to high-intensity physical exercise may improve antioxidant defenses both in adult and elderly individuals by upregulating antioxidant enzymes, allowing the body to adopt mechanisms to effectively process large quantities of ROS (Powers et al., 2020). These adaptive mechanisms may be beneficial in managing the potential increase in oxidative stress to decrease the risk and rate of mutation accumulation, and subsequent disease initiation. 
Conversely, the pro-tumorigenic role of ROS is generally associated with a parallel increase in antioxidant capacity (Perillo et al., 2020) and thus, the contribution of exercise-induced antioxidant capacity in a malignant context may be controversial. Nevertheless, high levels of endogenous antioxidants from physical exercise may act to protect surrounding noncancer tissue against chemotherapy-induced toxicity (Smuder, 2019). REVERSIBLE GENE REGULATION AND ONCOGENIC SIGNALING While genomic material encodes the genotype of an organism, it is the regulation of DNA through epigenetic and transcriptional mechanisms that modulates gene and subsequent protein expression and activity that contribute to phenotype (Mikhed et al., 2015). These reversible modifications can be activated in response to environmental and lifestyle factors (Alegria-Torres et al., 2011) and occur through DNA methylation, histone modification or microRNA expression, with hypermethylation of CpG islands characteristic of both breast and prostate cancer (Garcia-Martinez et al., 2021;Macedo-Silva et al., 2021). More recently, epigenomic approaches have explored how obesity and menopause impact the DNA methylation profile of breast cancer patients, identifying a different epigenome signature in postmenopausal patients with a BMI > 25 compared to premenopausal patients with a BMI < 25 (Crujeiras et al., 2017), suggesting that obesity-induced alterations to the epigenome may contribute to aggressive disease. In addition, hypermethylation of CpG islands through DNA methyltransferase (DNMT) upregulation, inhibits the transcription of various tumor suppressor genes, such as P21 and BRCA1, allowing the proliferation and growth of cancer cells (Banerjee et al., 2014;Pathania et al., 2015). The effects of DNA methylation can be functionally predicted through model-based algorithmic analysis of proteomic data, demonstrating the upregulation of various oncogenic signaling proteins, which then have the potential to further potentiate DNMT hypermethylation via a feedback loop system (Emran et al., 2019). While there are several factors that can alter epigenetic mechanisms, ROS have been implicated as a major regulator of transcriptional activity and the cellular proteome (Bhat et al., 2018), and as discussed ROS levels can be regulated through various lifestyle interventions. While the effects of lifestyle choices begin with metabolic changes that influence the epigenome, the subsequently altered epigenome then has the potential to influence the tumor microenvironment. By modifying these lifestyle choices, an array of physiological effects may occur that can impact the risk, progression, and overall prognosis of breast and prostate cancer patients (Figure 2). CONCLUDING REMARKS With a global goal of decreasing cancer disease burden, this mini review outlines the physiology behind why lifestyle modifications may succeed as a tool to achieve this. Not only would these changes positively impact the number of cancer diagnoses and outcomes, but it would also concurrently decrease the burden of other worldwide epidemics such as obesity and type II diabetes. While the traditional approach to cancer therapy is dependent on pharmacological interventions, it is now being increasingly recognized that external influence may complement these therapies. These may include, but are not limited to, increasing physical exercise, improving dietary choices, and weight management. 
Future research should incorporate systems biology techniques to provide a more mechanistic and holistic view on the impact of these modifiable factors on the interactions between the various biological components that contribute to tumorigenesis. AUTHOR CONTRIBUTIONS KT designed the study, was responsible for writing the article and the creation of all Figures. MN designed the study and was responsible for writing and revising the manuscript. All authors contributed to the generation of the concepts and ideas provided.
2022-03-07T14:30:29.988Z
2022-03-07T00:00:00.000
{ "year": 2022, "sha1": "6c4ceeac109a79a749f9c0e379bef7a0a371c181", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2022.840826/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "6c4ceeac109a79a749f9c0e379bef7a0a371c181", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18379137
pes2o/s2orc
v3-fos-license
Identification and In Vitro Reactivity of Celiac Immunoactive Peptides in an Apparent Gluten-Free Beer Gluten content from barley, rye, wheat and in certain oat varieties, must be avoid in individuals with celiac disease. In most of the Western countries, the level of gluten content in food to be considered as gluten-free products is below 20 parts per million measured by ELISA based on specific anti-gluten peptide antibody. However, in beverages or food suffering complex hydrolytic processes as beers, the relative proportion of reactive peptides for celiac patients and the analytical techniques may differ, because of the diversity of the resulting peptide populations after fermentations. A beer below 20 parts per million of gluten but yet detectable levels of gluten peptides by anti-gliadin 33-mer antibodies (G12 and A1) was analyzed. We identified and characterized the relevant peptides for either antibody recognition or immunoactivity in celiac patients. The beer was fractionated by HPLC. The relative reactivity of the different HPLC fractions to the G12/A1 antibodies correlated to the reactivity of peripheral blood mononuclear cells isolated from 14 celiac individuals. Peptides from representative fractions classified according to the relative reactivity to G12/A1 antibodies were identified by mass spectrometry. The beer peptides containing sequences with similarity to those of previously described G12 and A1 epitopes were synthesized and confirmed significant reactivity for the antibodies. The most reactive peptides for G12/A1 also confirmed the highest immunogenicity by peripheral blood mononuclear cell activation and interferon γ production from celiac patients. We concluded that preparative HPLC combined with anti-gliadin 33-mer G12/A1 antibodies were very sensitive and specific methods to analyze the relevant immunogenic peptides in hydrolyzed gluten. Introduction Celiac disease (CD) is the most common food intolerance in Western countries, with an estimated prevalence that may rise up to 1% in the Caucasian population [1]. CD can be considered as the intolerance of genetically predisposed individuals to gluten polypeptides from wheat, rye, barley and to lower extend, certain oat varieties [2][3][4]. Gluten is a complex of storage proteins that contains high amounts of the amino acids glutamine, glutamic acid and proline [5]. As a consequence, these proteins are poorly degraded by the gastrointestinal enzymes and remain as relatively large peptides when entering the small intestine. The ability of gluten proteins to resist degradation was suggested to be one reason for their harmful effect on susceptible people [6]. In celiac individuals, immunogenic gluten peptides are deamidated by tissue transglutaminase which association generates potent autoantigens. These biochemical interactions elicit a T-cell mediated pathological response which consequences are the lymphocytary infiltration of the intestinal epithelia and the destruction of intestinal villi. This last effect makes CD patients to suffer from malabsorption and malnutrition that may lead to diarrhea, constipation, iron-deficiency anemia, osteoporosis, dermatitis herpetiformis and even neurological disorders [7][8][9]. The only treatment for CD is a life-long strict gluten-free diet (GFD), which normally leads to a complete remission of the symptoms and mucosal histology [1]. However, a GFD is difficult to maintain since this is a very common food additive. 
National public organizations and international institutions, as Current Codex Alimentarius and Food and Drug Administration (FDA), propose immunological methods based on antibodies against specific gluten peptides as feasible and reliable methods to ensure the absence of gluten from barley, wheat and rye in food and beverages [10,11]. The use of antibodies specific to epitopes directly associated to the immunogenicity of the gluten peptides may reduce those cases of underestimation or overestimation of relevant gluten peptide content. Methods based on the antibodies for the immunogenic 33-mer peptide -G12 and A1-have been accumulating evidences for the detection of the dominant gluten immunogenic peptides for celiac patients in the food [2,[12][13][14]. Hydrolyzed gluten food or beverages could be the kind of samples where the measured gluten content could mostly differ from the celiac immunogenicity depending on the target sequences to be detected. In a recent report, we examined the levels of gluten peptides equivalent to one of the most immunoactive protease-resistant gliadin 33-mer in 100 Belgian beers, using immunochromatographic (IC) lateral flow test with G12 and A1 antibodies [14]. The G12/A1 reactivity of beer HPLC fractions correlated to the presence of previously described T-cell reactive epitopes. In order to characterize low abundant reactive beer peptides to G12/A1 immunological methods, we have examined a beer legally classified as 'gluten-free' because the net content of gluten was below 20 parts per million (ppm) but with detectable levels indicating trace quantities. We have identified and determined the relevance of the immunoactive peptides in beer detectable by G12/A1 IC-strips and G12 competitive ELISA. We have sequenced peptides in HPLC fractions enriched in reactivity to G12/A1 IC-strips and some of them synthesized. The most reactive peptides to G12/A1 also showed the highest reactivity to peripheral blood mononuclear cells (PBMCs) proliferation and interferon c (INF-c) production from celiac patients. Beer protein fractions Sample barley beer (type Strong Ale) was extracted using UGES according to the instructions of the manufacturer (Biomedal SL, Sevilla, Spain). Beer was separated into different fractions by reversed-phase HPLC (RP-HPLC) on a semi-preparative C 18 column and, next, each fraction was separated by RP-HPLC on an analytical C 18 column [14]. MoAbs G12/A1 immunochromatographic test The assay was carried out with the GlutenTox sticks kit according to the instructions of the manufacturer (Biomedal SL, Sevilla, Spain). Beer fractions were diluted (1:10 to 1:300) in the buffer solution provided and the gluten content was tested. The test sticks were dipped into the solution (300 ml) for 10 min before being removed and allowed to air dry. Enzyme-linked immunosorbent assay Maxisorp microtitre plates (Nunc, Roskilde, Denmark) were coated with gliadin solution (Sigma, St Louis, MO, USA) and incubated overnight at 4uC. The plates were washed with phosphate-buffered saline (PBS) containing 0.05% Tween 20 and blocked with 5% non-fat dry milk in PBS for 1 h at room temperature (RT). 33-mer peptide was used as standard. Serial dilutions of peptides were made in PBS-bovine serum albumin (BSA) 3%, to each of which was added G12-HRP or A1-HRP antibody solution [14]. The samples were pre-incubated at RT, and then added to the wells. After 30 min of incubation at RT, the plates were washed and substrate solution (TMB, Sigma) was added. 
The reaction was stopped with 1 M sulphuric acid, and the absorbance at 450 nm was measured (microplate reader UVM340; Asys Hitech GmbH, Eugendorf, Austria). Two separate assays were performed, each with three repetitions. Mass spectrometry Beer fractions were analysed by nano LC-Electrospray Ion Trap Mass Spectrometry (LC-ESI-IT-MS) (Ultimate 3000 nano HPLC, Dionex, Sunnyvale, California, USA) and HCT Ultra ion-trap Mass Spectrometry (Bruker Daltonics, Bremen, Germany). For each fraction, 2 mg of total protein was reconstituted in nano HPLC loading buffer (98% H 2 O milli-Q +2% acetonitrile +0.05% TFA). The flow rate was 30 ml/min and the injection volume was 5 ml. Absorbance was monitored with the UV-visible detector at 214 nm and 280 nm. Eluting buffers were buffer A (H 2 0 milli-Q + 0.1% formic acid) and buffer B (80% acetonitrile +20% H 2 0 milli-Q +0.1% formic acid). Proteins were eluted by applying the following gradient conditions: isocratic elution with 4% B for 5 min; 4% to 40% B for 60 min; 40% to 95% B for 1 min; and isocratic elution with 95% B for 7 min. Mass spectrometric data were acquired in the automated data-dependent mode. Mass-and charge-dependent collision energies were used for peptide fragmentation. The 4 most-abundant ions were isolated and fragmented using collision-induced dissociation (CID) (4s/MS/ MS spectrum). The spectra obtained were processed using DataAnalysis 3.4 software (Bruker Daltonics, Bremen, Germany) for analysis of raw data. Peptide masses obtained were exported to BioTools 3.1 software (Bruker Daltonics, Bremen, Germany), and the identification of proteins was carried out by searching for Viridiplantae taxonomically restricted in the database of the National Center for Biotechnology Information (NCBI), using Mascot v.2.3.02 (www.matrixscience.com, Matrix Science, London, UK). Histological and serological analysis of subjects Fourteen patients with biopsy-proven active CD were included in this study. The diagnosis of CD was based on the positive serology and compatible lesion in the duodenal biopsy according to the criteria of Marsh [15] and confirmation of a clinical response to gluten elimination from the diet. Subjects were prospectively screened for CD using anti-endomysial antibodies (AAEMs), tissue transglutaminase antibodies (AATGs) and CDspecific human leukocyte antigen typing (HLA-DQ) ( Table 1). Venous blood was taken at the time of index biopsy. The international standards for ethics (Helsinki declaration for studies in humans) were followed and the study was approved by the ethics committee of the 'Virgen de las Nieves' Hospital, Granada (Spain). The written informed consent was obtained from the next of kin or guardians on behalf of the minors enrolled in this study. Cell proliferation analysis The alcohol-soluble protein fractions were extracted from wheat (Triticum aestivum) and rice (Oryza sativa) whole flour, positive and negative control, respectively. These fractions were subjected to pepsin, trypsin and chymotrypsin sequential digestion, as previously described by Real et al [3]. PBMCs which include lymphocytes (T cells, B cells and NK cells), monocytes and dendritic cells, from patients with active CD (n = 14) on gluten-containing diet were isolated from 6 ml of heparinized blood by Histopaque gradient centrifugation, and cultured at a density of 1x10 6 cells/ml in RPMI-1640 culture medium. After 48 h, PBMCs were incubated with the different samples and the proliferation was determined using a non isotopic Table 1. 
Clinical data of patients with celiac disease are summarized in Table 1. The proliferation assay was based on the incorporation of 5-bromo-2-deoxyuridine (BrdU) into the newly synthesized DNA according to the manufacturer's instructions (Millipore Chemicon, Temecula, California, USA) [16,17]. Briefly, once the medium containing BrdU is removed, the cells are fixed and the DNA is denatured. Then an anti-BrdU mouse monoclonal antibody is added, followed by an HRP-conjugated secondary antibody. Cells without BrdU reagent added (background) and tissue culture supernatant only (blank) were used as controls. The stimulation index (SI) value was calculated by dividing the mean absorbance at 450 nm after stimulation (divided by 10) by the mean absorbance of PBMCs exposed to the culture medium alone (background), also divided by 10. All cultures were performed in duplicate. IFN-γ production Supernatants from PBMC cultures were collected after 48 h and stored at −80°C for IFN-γ determination using a commercial ELISA kit in accordance with the manufacturer's instructions (Thermo Scientific, Madrid, Spain). Standards were run on each plate. Assay sensitivity was less than 2 pg/ml. All cultures were performed in duplicate. Statistical analysis Anti-gliadin 33-mer ELISA. Peptide curves were obtained by plotting percentage of maximum absorbance against the logarithm of antigen concentration. The software package Sigma Plot 9.0 (Systat Software, Inc., Point Richmond, CA, USA) was used to calculate the IC50 and the cross-reactivity (CR) for each peptide. The IC50 is defined as the concentration that produces a reduction of 50% in the peak signal in the ELISA. The CR was determined as (IC50 of the peptide that presents the greatest affinity for the antibody / IC50 of each peptide assayed) × 100. Cell proliferation and IFN-γ assays. Statistical analysis was performed with the STATGRAPHICS program. Data are expressed as mean ± SD. When the interaction was statistically significant, the differences between groups were examined by one-factor analysis of variance (ANOVA). A Bonferroni-corrected t-test was used to compare the individual means. A statistical probability of p < 0.05 was considered significant. Results and Discussion Immunoactive potential determination of beer fractions Barley and wheat are the main cereals used in the production of malt beer, the first step in brewing. More than 40 different proteases, in addition to amylases, are activated by malting. The metabolic barley proteins that are released into the water solution during mashing, along with a much lower amount of hordeins, are extensively hydrolyzed by the endogenous proteolytic enzymes into short peptide fragments [18,19]. Owing to their high content of proline and glutamine, a heterogeneous mixture of peptides could be resistant to proteolysis and could be toxic for CD patients. Previous proteomic investigations indicate that, from a qualitative standpoint, the beer proteome is largely consistent regardless of the brand, while detectable differences appear confined to the relative quantitative balance among protein components [20]. The approach used to characterize a beer with low gluten peptide content can therefore be extended to any beer with a higher concentration. A Belgian commercial beer (Strong Ale), classified as potentially 'gluten-free' (<20 ppm gluten) according to the Codex Alimentarius standard, was selected as a representative sample for the extensive immunological analysis [14].
To that end, this beer was fractionated by RP-HPLC on a semi-preparative C18 column and, next, each fraction was separated by RP-HPLC on an analytical C18 column and assessed by G12/A1 IC-strips. This beer presented 6 reactive fractions out of 18 total fractions, all of them eluting at retention times >10.5 min [14]. Therefore, three groups of fractions could be distinguished: a group with reactivity >500-fold above the detection limit (E16 and E17), a group with intermediate reactivity (E13-E15 and E18), and another group comprising fractions that were not recognized by the moAbs (E1-E12). Taking into account that the gluten content was about 10 ppm by competitive G12 ELISA, the RP-HPLC technique allowed the reactive peptides in the E16 and E17 fractions to be concentrated about 50-fold. To test the correlation between G12/A1 reactivity and the in vitro immunogenicity of the different fractions, three of them were selected based on their different levels of reactivity to the G12/A1 moAbs and were assessed by PBMC proliferation and IFN-γ response using peripheral blood isolated from celiac patients. The clinical and immunological profiles of the patients with CD are presented in Table 1. Fraction E6 was chosen as representative of the non-reactive group, E15 represented the group with intermediate reactivity, and E17 that with the greatest reactivity. We found significant differences in PBMC proliferation with respect to gliadin (positive control) in cultures incubated with E6 (SI = 7.3 ± 0.9), which showed an activation of PBMCs only slightly higher than rice prolamins (negative control, SI = 4.2 ± 0.6) (Figure 1A). E17 was the most reactive to the G12/A1 IC-strips and showed the highest increase in PBMC proliferation (SI = 18.6 ± 2.1), with no significant differences with respect to cultures incubated with the positive control (SI = 20.1 ± 1.8). We found intermediate proliferation in cultures incubated with E15 (SI = 14.5 ± 1.2) (Figure 1A). Release of IFN-γ into the culture medium after exposure of celiac PBMCs to the different representative fractions was also assessed (Figure 1B). The highest values of IFN-γ release were found in the supernatant of cultures incubated with E17 (18.5 ± 2.1 pg/ml), close to the gliadin values (22.5 ± 2.6 pg/ml). We found significant differences with respect to the positive control in cultures exposed to E6, which induced a lower mean value of IFN-γ (7.5 ± 1.02 pg/ml). We selected one reactive fraction (E17) and one non-reactive fraction (E6) in order to carry out protein identification using Mascot after mass spectrometry. The comprehensive list of peptide fragments identified in the E6 and E17 fractions is shown in Figures 1C and 1D, respectively. These data confirm that the sample beer contained a large number of partially degraded fragments from the gluten proteome of barley and wheat. A total of 7 different gluten proteins were identified in E6 and 14 in E17. The peptides identified differed in both their amino acid composition and length. Interestingly, in the E17 fraction, the hordein- and gliadin-derived peptides identified showed several motifs associated with the induction of CD. Among them, we could find the 'PQQPF' sequence, described as one of the main gliadin toxic motifs [21]. Likewise, T-cell epitopes such as 'FPQQPQQPF', 'QQPQQPFPQ', 'QLPFPQQPQ' and 'QQPFPQQPQ' were identified in different peptides present in E17 but not in E6 [22][23][24][25].
Beers with gluten peptide content close to the quantitation limit of the current antibody-based analytical methods (classified according to international labelling rules as 'gluten-free foods', <20 ppm, by the Codex Alimentarius and/or FDA) should be analyzed in combination with in vitro immunological methods such as the T-cell proliferative response. Characterization of relative 33-mer epitopes in MS beer peptides Different lines of research have shown that the reactivity of the moAbs G12 and A1 is correlated with the real potential immunotoxicity. T-cell-reactivity analysis and enzymatic detoxification of the proteins showed that the signal of these antibodies was correlated with the potential toxicity of the sample for celiac patients [2,13,14,26,27]. Many of the peptides identified in E17 contained runs of Gln and Pro that may elicit an immunological response in celiac patients. To confirm the capacity of the suspected peptides of the E17 fraction to be detected by G12/A1 and their immunogenicity for celiac patients, we chose different peptides with similarities to the epitopes for G12 (QPQ(L/Q)P(Y/F/Q)) and A1 (Q(L/Q)P(F/Y)P(Q/L)(P/Q)). A total of five peptides of the sequenced fragments, with lengths of 22 or 24 amino acids and present in E17, were synthesized (Figure 2A). The affinity of the moAbs for the different peptides was determined by competitive ELISA using a standard curve of the gliadin 33-mer peptide, the main immunodominant toxic peptide in celiac patients and one of the digestion-resistant gluten peptides [6]. The IC50 and CR were determined for each peptide. As shown in Figure 2A, the PP 24.1 peptide presented the greatest reactivity for the G12/A1 antibodies. Peptides PP 24.2, PP 24.3 and PP 22.1 were also recognized with high sensitivity by G12 and A1. The peptide QP 22.2 showed a drastic decrease in affinity for the G12 and A1 moAbs (CR < 6% for both antibodies). Currently, the known antibody recognition sequences described in toxic cereal sequences include nine different heptapeptides for the A1 moAb and five hexapeptides for the G12 moAb [12]. Two epitopes of those 14 total sequences were found in the identified beer peptides. Specifically, we detected the highly reactive epitope sequences QPQLPF (G12) and QQPFPQP (A1) (Figure 2A). However, the QP 22.2 peptide did not show any of these previously described G12/A1 epitopes (Figure 2A), which may explain its poor reactivity for the A1 and G12 antibodies. We identified potential variant epitopes recognized by these antibodies contained in the MS-sequenced peptides. Thus, we found two variants derived from the G12 epitope QPQLPF and three derived from the A1 epitope QQPFPQP, with one or two modifications in the amino acid sequences (Table 2). To determine the relative affinity of the anti-gliadin 33-mer antibodies for these potential epitopes present in the beer peptides described, we constructed hexa- and heptapeptide variants of the G12 and A1 epitopes, respectively. The affinity of the anti-gliadin 33-mer moAbs for the different variants was determined by competitive ELISA with immobilized gliadin in microtiter wells challenged with the synthesized peptides (Table 2). (Figure 1 caption, continued: significant differences with respect to gliadin at *p < 0.05 and **p < 0.005; gliadin and rice prolamins were used as the positive and negative controls, respectively; C and D, peptide fragment identification for E6 and E17, respectively, by Mascot after mass spectrometry. doi:10.1371/journal.pone.0100917.g001)
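The degenerate recognition patterns quoted above for G12 (QPQ(L/Q)P(Y/F/Q)) and A1 (Q(L/Q)P(F/Y)P(Q/L)(P/Q)) can be written as regular expressions and used to flag candidate epitope-bearing peptides among MS-identified sequences. The sketch below is a minimal illustration of that idea; the example peptide strings are hypothetical placeholders, not the actual fragments identified in the beer fractions.

```python
import re

# Degenerate recognition motifs as quoted in the text:
#   G12: QPQ(L/Q)P(Y/F/Q)        -> hexapeptide pattern
#   A1:  Q(L/Q)P(F/Y)P(Q/L)(P/Q) -> heptapeptide pattern
MOTIFS = {
    "G12": re.compile(r"QPQ[LQ]P[YFQ]"),
    "A1": re.compile(r"Q[LQ]P[FY]P[QL][PQ]"),
}

def scan_peptide(sequence: str) -> list:
    """Return every motif hit as (antibody, matched substring, start position)."""
    hits = []
    for antibody, pattern in MOTIFS.items():
        for match in pattern.finditer(sequence):
            hits.append((antibody, match.group(), match.start()))
    return hits

# Hypothetical example peptides (placeholders only).
for pep in ["LQPQQPFPQQPQQPFPQQ", "PQQPIPQQ"]:
    print(pep, "->", scan_peptide(pep) or "no G12/A1 motif")
```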
For the A1 moAb, we found one variant in the highly reactive MS peptides (PP 24.1, PP 24.2, PP 24.3 and PP 22.1), in which a proline was replaced by glutamine in the last position (QLPFPQQ), showing a CR of 5.59%. We found other variants of the A1 recognition sequence, QQPFPQQ and QQPFPLQ, contained in the QP 22.2 peptide, that presented 30-fold lower affinity for this antibody than the epitopes located in the high-affinity peptides. The variants studied for the G12 moAb showed a dramatic reduction in affinity. Therefore, three new reactive sequences were identified for A1: QLPFPQQ, QQPFPLQ and QQPFPQQ. A new sequence, QPQQPF, was only slightly recognized by G12. In any case, there was a correlation between the epitope sequences of the peptides and the reactivity of the anti-gliadin 33-mer moAbs for the different peptides from the MS sequencing. Immune stimulatory properties of the beer peptides To determine whether the variations in the reactivity of the anti-gliadin 33-mer antibodies towards the different peptides correlated with the immunogenicity of the peptides, we determined PBMC stimulatory activity. Two peptides were selected, the most reactive peptide (PP 24.1) and the peptide with the least affinity (QP 22.2), according to the results obtained with the antibodies. The stimulatory activity of the peptides was determined by PBMC proliferation and IFN-γ production. The ability of these peptides to induce a noxious immune response was studied in comparison with wheat gliadin and rice prolamins as the positive and negative controls, respectively. The results of cell proliferation from celiac PBMCs clearly showed that the peptide QP 22.2 induced a weak proliferative response (SI = 6.8 ± 0.9), only slightly higher than the negative control (SI = 4.7 ± 0.4). Thus, we found significant differences with regard to the cultures incubated with gliadin (SI = 21.6 ± 2.4) (Figure 3A). Similar results were observed for IFN-γ production (Figure 3B). These results were consistent with those obtained earlier by competitive ELISA using the anti-gliadin 33-mer antibodies. In fact, the peptide PP 24.1 was the most potentially immunoactive according to both the antibodies and cell proliferation. Thus, a direct correlation between moAb reactivity and the immunogenicity of the peptides was corroborated. We observed that the peptides recognized with the highest affinity by the G12/A1 moAbs contained previously described T-cell epitopes in their sequences [25]. However, the peptide recognized with the lowest affinity by the G12 and A1 moAbs (QP 22.2) did not contain any T-cell epitopes described to date. A previous study found a frequent response to nondeamidated gluten peptides and suggested a model in which deamidation is not a prerequisite for the initiation of the response [28]. In this work, we have shown by in vitro studies a direct correlation between the immunogenicity of the different nondeamidated beer peptides and the considerable variability in the toxicity of the peptides present in beer. We have shown that in vitro analysis may enable the identification, selection, or production of different beer fractions with low levels of noxious gluten proteins. In any case, the in vitro reactivity of PBMCs from celiac patients showed a good correlation with the reactivity of the immunological methods based on the anti-gliadin 33-mer G12/A1 antibodies.
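The IC50 and CR comparisons described above can, in principle, be reproduced from competitive-ELISA dilution curves with a four-parameter logistic fit. The sketch below uses scipy rather than the Sigma Plot package cited in the Methods, and all concentrations and absorbance values are illustrative placeholders, not study data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ic50, hill):
    """Four-parameter logistic on log10(concentration); signal falls as competitor rises."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (hill * (log_conc - log_ic50)))

def fit_ic50(conc, signal):
    """Fit a 4PL curve and return the IC50 in the same units as `conc`."""
    log_conc = np.log10(conc)
    p0 = [signal.min(), signal.max(), np.median(log_conc), 1.0]
    popt, _ = curve_fit(four_pl, log_conc, signal, p0=p0, maxfev=10000)
    return 10.0 ** popt[2]

def cross_reactivity(ic50_reference, ic50_peptide):
    """CR (%) = IC50 of the highest-affinity peptide / IC50 of the assayed peptide x 100."""
    return 100.0 * ic50_reference / ic50_peptide

# Illustrative dilution series (ng/ml) and % of maximum absorbance -- made-up values.
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
signal_ref = np.array([98, 95, 88, 70, 48, 28, 14, 8], dtype=float)   # e.g. 33-mer standard
signal_pep = np.array([99, 98, 96, 90, 78, 60, 40, 25], dtype=float)  # weaker competitor

ic50_ref = fit_ic50(conc, signal_ref)
ic50_pep = fit_ic50(conc, signal_pep)
print(f"IC50 reference ~ {ic50_ref:.1f} ng/ml, IC50 peptide ~ {ic50_pep:.1f} ng/ml")
print(f"Cross-reactivity ~ {cross_reactivity(ic50_ref, ic50_pep):.1f} %")
```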
Although the diversity of peptide populations in hydrolyzed food containing immunogenic cereals could be unlimited, the reactivity of current methods with G12/A1 antibodies seems to react quantitatively with those remaining peptides with potential immunogenicity. Future clinical studies with celiac patients would be necessary to know which level of hydrolyzed gluten content in beer provides reasonable safety. The underestimation of toxic gluten by those antibodies not discriminating the immunoactivity of the peptide might suppose an accumulative damage for celiac safety.
2016-05-12T22:15:10.714Z
2014-06-25T00:00:00.000
{ "year": 2014, "sha1": "6035c90a82f0e68beb8bb44fc7010c92effc3fbe", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0100917&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb5a28dbdbf0dfd0e0bccb0fb6f4be47507c10f5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
263619122
pes2o/s2orc
v3-fos-license
The Causality Between CO2 Emissions and Electricity Generations: Evidence from Environmental Quality This study aims to analyze the causality relationship between CO2 emissions and electricity generations in the 10 most populous countries. This study uses secondary data from the British Petroleum (BP) annual report from 2000-2021. The data analysis method uses Pairwise Dumitrescu-Hurlin Panel causality analysis, carried out by stationarity and cointegration tests. The results of the analysis state that there is a bidirectional relationship that influences each other between CO2 emissions and electricity generations. The implication of this research is the need for countries to prepare for renewable sources of electrical energy to be able to reduce CO2 emissions, especially those sourced from fossil energy. Introduction Global warming and climate change are two environmental problems commonly addressed in scientific research [1]- [5].The Intergovernmental Panel on Climate Change (IPCC), the Sustainable Development Goals, the Paris Agreement, and the G20 are just a few of the international conferences dealing with macroeconomic impacts that primarily focus on these two issues.Annually increasing CO2 emissions are one of the main causes of global warming and climate change [6].The use of energy from fossil fuels, which emit large amounts of CO2, is one of the reasons for the increase in global CO2 emissions [7]- [9].Previous studies have argued that economic activity, international trade, and fossil fuel use are the main causes of global warming [10]- [12].However, the general public continues to use fossil fuels, and it takes a while for fossil fuels to be transformed into fuels that are comparatively environmentally favorable.Because they are still relatively inexpensive and can lower production costs to increase the 1248 (2023) 012016 IOP Publishing doi:10.1088/1755-1315/1248/1/012016 2 economy, people continue to use fossil fuels extensively.Policymakers are focusing on this issue because environmental pollution lowers health standards and endangers the environment.This issue warrants attention since economic development not based on the environment and sustainability is unintentionally causing the increasing global warming and extreme climate change.The green economy framework offers a way out of this impasse by promoting responsible economic expansion while upholding environmental sustainability and quality. FIGURE 1. 
Electricity Generation (terawatt/hour) and CO2 Emissions (million tons of carbon dioxide) 2000-2021 from British Petroleum [6] People's quality of life is anticipated to improve as a result of economic development [13]- [17].On the other hand, economic growth causes unavoidable environmental externalities like a decline in environmental quality.According to the IPCC, since 1995, human activities and the consumption of energy that increases greenhouse gas (GHG) concentrations have contributed to an increase in global temperature.The World Bank [18] supports the idea that CO2 emissions are what produce GHG and climate change.The community's standard of living is anticipated to rise as a result of economic development.Several studies focus on the factors that cause CO2 emissions [16], [19]- [21].Environmental degradation, such as a major rise in greenhouse gases (GHG), particularly CO2, is directly correlated with an increase in economic activity.The connection between environmental quality and the production of electricity from fossil fuels [22]- [25].The idea is that CO2 emissions and electricity production are related in some way.In order to reduce rising CO2 emissions, it is necessary for all countries to solve the issue of power generation, which is generally more environmentally friendly. Numerous research on the variables that influence emissions have been conducted.But only countries and regions with comparable economic, social, and cultural traits are included in its purview.Literature on demography is necessary, particularly in light of the connection between population growth and rising emissions.This essay focuses on the 10 countries with the largest populations in terms of CO2 emissions.Demand for energy rises with increasing population density.As a result, this study looks at the variables affecting CO2 emission levels between 2000 and 2021.The causal connection between CO2 emissions and power generation will be confirmed by this investigation.The following describes the structure of this essay: Section 1 provides introduction, while Section 2 outlines the research method.The results and discussion are presented in Section 3, and Section 4 presents the conclusion. 
Methods The study included the 10 most populous countries: China, India, the United States, Indonesia, Brazil, Pakistan, Egypt, Bangladesh, Russia, and Mexico, and measured their carbon emissions and energy production from 2000 to 2021. The panel therefore combines time-series and cross-sectional data for this group of the world's most populous countries. The statistics come from the British Petroleum (BP) Annual Report for 2000-2021. The sources and definitions of the variables are summarized in Table 1. This research is based on Hurlin and Venet [26], which analyzes causality using panel data. In a two-variable framework, in Granger's sense, one variable can be said to cause the second if the forecast of the second variable improves when lags of the first variable are taken into account [27], [28]. Because the research data are panel data, the pairwise Dumitrescu-Hurlin panel causality test was used [29]- [31]. The variables above are examined using the two regression models below: $EMIS_{t} = \alpha_{0} + \sum_{i=1}^{k}\alpha_{i}\,EMIS_{t-i} + \sum_{j=1}^{k}\beta_{j}\,EGEN_{t-j} + u_{1t}$ and $EGEN_{t} = \lambda_{0} + \sum_{i=1}^{k}\lambda_{i}\,EGEN_{t-i} + \sum_{j=1}^{k}\delta_{j}\,EMIS_{t-j} + u_{2t}$, where EMIS is CO2 emissions; EGEN is electricity generation; α, β, λ, and δ are coefficients; t is time; i and j are the lags 1, 2, 3, …, k; and u is an error term. The regression models test the hypothesis of a relationship between CO2 emissions and electricity generation under the assumption that the errors u_1t and u_2t are not correlated. Causality analysis requires several conditions to be met, including that the CO2 emissions and electricity generation series pass the stationarity and cointegration tests. Results and Discussion Table 2 shows that the hypothesis of one unit root is accepted for all variables examined at the 5% significance level. The panel unit root test in Table 2 is applied at the first difference. A panel unit root test is performed before the cointegration test to examine the order of integration of each variable. Table 3 reports the Pedroni residual cointegration test, showing that 7 out of 7 statistical tests support cointegration between the variables in the model. Pedroni's cointegration test [31] guards against the spurious regression that occurs in the presence of nonstationary variables. Therefore, in this study, we use the Pedroni residual cointegration test to establish the long-term relationship between the variables. Table 4 reports the Kao cointegration test [32] and confirms that the Kao test rejects H0, indicating a strong long-term relationship between the variables in the panel data. Finally, this study tests causality between the variables using the Dumitrescu-Hurlin pairwise causality test. Based on the results in Table 5, we can see that the 10 most populous countries in the world show a two-way relationship between CO2 emissions and power generation. It has been recognized by many previous researchers that there is a correlation between CO2 emissions and energy production [9], [33]- [36]. Changes in GDP per capita, population, and end-use fuel mix all contribute to increased CO2 emissions, while adjustments in energy intensity and energy efficiency can lead to reductions [37]. As stated by Wang et al., the powertrain construction was also shown to affect CO2 emissions [38]. According to Mendonza [39], using renewable energy is a strategy to reduce the carbon footprint.
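A minimal sketch of the country-by-country Granger regressions underlying a Dumitrescu-Hurlin-type test is given below. It averages the individual Wald statistics into a W-bar value, omits the standardization to the Z-bar statistic usually reported with this test, and assumes a hypothetical long-format panel with columns country, year, EMIS, and EGEN (placeholder names, not the actual BP data set).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def dh_wbar(panel: pd.DataFrame, cause: str, effect: str, lags: int = 1) -> float:
    """Average the country-by-country Granger Wald statistics -- the W-bar idea
    behind the Dumitrescu-Hurlin test.  `panel` is long format with columns
    ['country', 'year', cause, effect]."""
    wald_stats = []
    for _, g in panel.sort_values("year").groupby("country"):
        # First-difference both series, since unit-root tests of this kind point to I(1) data.
        data = g[[effect, cause]].diff().dropna()
        # Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
        result = grangercausalitytests(data, maxlag=lags, verbose=False)
        wald_stats.append(result[lags][0]["ssr_chi2test"][0])
    return float(np.mean(wald_stats))

# Hypothetical usage (file name and column names are placeholders):
# panel = pd.read_csv("bp_panel_2000_2021.csv")   # columns: country, year, EMIS, EGEN
# print("EGEN -> EMIS, W-bar:", dh_wbar(panel, cause="EGEN", effect="EMIS", lags=2))
# print("EMIS -> EGEN, W-bar:", dh_wbar(panel, cause="EMIS", effect="EGEN", lags=2))
```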
Conclusion Panel data from the 10 most populous countries in the world from 2000 to 2021 are used in this analysis. CO2 emissions and electricity production from the 10 countries are the variables used. The findings indicate a long-term, positive, and significant correlation between CO2 emissions and power generation. This outcome indicates that increased CO2 emissions will be produced as a result of high electricity use. These findings support the idea that CO2 emissions and electricity production are causally related. The conclusion of this study is that, in order to reduce CO2 emissions that harm the environment, countries must prepare to switch from fossil fuels to renewable energy. Therefore, policy makers and governments need to apply dedicated rules so that industry and society use electricity wisely. The worlds of education and research should also be encouraged to conduct in-depth research on converting fossil-based fuels to fuels that are environmentally friendly and sustainable.
2023-10-04T20:02:49.773Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "6ba72f0106eabdabf36b97314701298100698f41", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1248/1/012016/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6ba72f0106eabdabf36b97314701298100698f41", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Physics" ] }
49471573
pes2o/s2orc
v3-fos-license
Effects of arterial blood on the venous blood vessel wall and differences in percentages of lymphocytes and neutrophils between arterial and venous blood Abstract Vascular sclerosis mostly occurs in arteries and is mainly related to anatomic structure and hemodynamics of artery. This study aimed to investigate effects of arterial blood on vein wall and explore differences of composition between arterial and venous blood. Ultrasound was used to examine the distal venous structure of arteriovenous fistula in uremia patients. Immunohistochemistry was used to study the pathology of the distal vein. Twelve patients were divided into control group and trial group. Patients received an arteriovenous fistula within 1 month in control group. Patients had undergone this surgery ≥2 years before in the trial group. Blood samples were collected from the aortic, arterial, and venous vessels of 51 patients who had taken coronary angiography and analyzed with blood routine rest, biochemical, and immunological measures to compare the differences of blood composition between artery and vein. This study was registered with the China Clinical Trial Center website under registration number ChiCTR-OOC-16008085. In the trial group, the vascular wall of distal veins of fistula were thickened and hardened. No significant differences of blood composition were found between the aortic and radial arterial blood. However, the differences in the percentages of lymphocytes and neutrophils between arterial and venous blood were significant (Pa = .0095, Pb = .01). Under smooth hemodynamic conditions, arterial blood caused hardening of the venous wall. Arterial and venous blood differed in the percentage of lymphocyte and neutrophils. This may contribute to the vascular sclerosis that is observed in arteries more often than veins. Introduction Atherosclerosis refers to vascular wall thickening and hardening, loss of elasticity, and narrowing of lumen. [1,2] Angiosclerosis can decrease the blood supply to organs, including heart, brain, and kidney, and subsequently cause organ ischemia and dysfunction. Atherosclerosis occurs mostly in the arteries, but rarely in the veins. [3] This is attributed to differences in the anatomical structure of arteries and veins, as proposed by the most relevant studies. [4][5][6] However, vascular sclerosis rarely occurs in the pulmonary artery, which contains venous blood only. [7,8] This has led to a hypothesis that vascular sclerosis was associated with the particular characteristics of arterial blood. Indeed, a number of studies have suggested that the differences in physiological function and hemodynamic environment between arteries and veins contribute to the development of vascular sclerosis. [9,10] Recently, a number of studies showed that vein grafts used in coronary artery bypass surgery also underwent hardening, similar to original coronary arteries. [11,12] The probability of lesion and occlusion arising in the postoperative venous bridge was reportedly ∼15% to 30% at 1 year [13,14] and ∼50% of vein grafts failed within 10 to 15 years after surgery, due to a number of issues, including intimal hyperplasia. [15] Intimal hyperplasia occurred mainly as a response to higher arterial pressures after the vein graft bypass surgery, [16] and was linked to vascular wall thickening and narrowing. [17] Hence, changes in the vascular hemodynamics in the bridge vessel can promote vascular sclerosis. 
In addition, it is well known that the lipid composition of the blood contributes to the pathogenesis of atherosclerosis. [18] However, whether the differences in blood composition between arterial and venous blood are factors that influence angiosclerosis is not known. In the present study, we explored whether arterial blood components were associated with hardening of the venous wall in addition to hemodynamics, and differences in the composition of arterial and venous blood. Study design This study was approved by the Institutional Ethics Committee of the Third Xiangya Hospital of Central South University. All of the patients provided signed informed consent before their enrollment in the study. The study was conducted in accordance with the ethical principles of the Declaration of Helsinki and was registered as Chinese Clinical Trial Registry No. ChiCTR-OOC-16008085. Subjects The subjects for our study are included 2 phases; in the first phase, 12 uremia patients were selected who had undergone arteriovenous fistulation to explore whether arterial blood components were associated with hardening of the venous wall in addition to hemodynamics; in the second phase, 58 patients were recruited to detect the differences in the composition of arterial and venous blood. The first phase of the retrospective study included 12 uremia patients who had undergone arteriovenous fistulation sometime between November 27, 2015, and January 20, 2017. These patients were divided into those who had had surgery within 1 month (control group) and those who had undergone surgery ≥2 years before (trial group). Arteriovenous vascular ultrasonography was conducted on their arteriovenous fistulas. If the patients had undergone fistulation or refistulation because of vascular occlusions, vein vessels on the surgical areas were taken for pathological examination. The second phase of the study included patients aged 18 to 75 years in whom both arterial and venous blood were readily collectible. The participants consisted of patients with cardiovascular diseases who required coronary angiography. Patients with any of the following were excluded from the present analysis: taking statins within the recent 3 months; acute infection; malignant tumor; intractable hypertension or arrhythmia; thyroid disease; or use of systemic steroids or cyclosporine therapy. This study began on November 27, 2015, and ended September 30, 2016. Among the initial 58 patients enrolled, 4 failed to complete the blood collection process and were excluded from the study, and 3 were excluded due to the hemolysis test results. Thus, 51 patients participated in the second phase of the study. Blood samples were collected from the following locations: the radial artery's outer periphery during coronary angiography with implantation of radial artery sheath; the aortic sinus when the angiography catheter was inserted into the aortic sinus; and a left arm vein. Data were regularly reviewed by the independent China Clinical Trials Registry. Raw data were audited and published by the ResMan Clinical Trial Public Management Platform for Research. As the study collected blood and pathological specimens from the sampled patients, all patients involved in the study have signed the informed consent. Research method The PHILPS epiq7c color Doppler ultrasound system was used for vascular ultrasound examinations, and hematoxylin and eosin (H&E) staining was used for immunohistochemical examination. 
During the coronary angiography, 6 mL of blood were collected from each of the following: the radial artery, aortic sinus, and peripheral veins. The blood specimens were sent for laboratory tests, including routine blood, liver and kidney function, blood lipid, blood glucose, electrolyte, and complete immunity tests.

Statistical analysis The second phase of the study analyzed the data from the blood specimens. The data from these 3 groups of specimens (aortic sinus, radial artery, and peripheral vein) were compared using analysis of variance, with the Bonferroni post-hoc test for 2-group comparisons. All test results are presented as mean ± standard deviation, and a P value < .05 was considered statistically significant.

Hardening and thickening of the venous intima in patients of the trial group We selected uremia patients who had undergone arteriovenous fistula surgery within 1 month (control group) or ≥2 years previously (trial group) for vascular ultrasonographic examination. In the normal arteries of patients, blood flow was fast, and the blood flow spectrum was a serrated or wavy waveform. In veins, the blood flow was smooth and slow, and the blood flow spectrum waveform was continuous. In the patients in the trial group, the blood flow around the fistula opening was fast, and its blood flow spectrum was also a serrated waveform. At a site far away from the fistula opening, the blood flow spectrum was similar to the continuous waveform of venous blood flow. This suggests that, hemodynamically, the blood flow had reverted to a venous flow pattern. Currently, in artificial fistulation, a narrow vascular lumen is considered if the peak systolic velocity at the fistula opening is ≥2.5 times that of the arterial blood inflow 2 cm from the fistula, or if the diameter stenosis rate of the fistula is ≥50%. [19] In the present study, the intimal thickness of the veins 5 cm away from the fistula in the control group, a short time after surgery, was normal (Fig. 1A, B). However, in the trial group, the vascular ultrasound showed that the blood flow spectrum for veins ≥5 cm away from the fistula opening registered as a continuous waveform, and the vascular intima was hardened as well as thickened (Fig. 1C, D).

Long-term fistulation promotes venous wall hardening and thickening Next, we selected veins behind the fistula for immunohistochemical examination. In the control group, the venous wall contained a few layers of endothelial cells (Fig. 2A). However, in patients of the trial group, the endothelial cells of the venous wall were thick, and mucoid degeneration of the venous wall, fiber hyperplasia, and fibroblast proliferation in the venous wall were obvious. In addition, proliferation of small vessels occurred outside the vascular wall, and the vascular intima was thickened and hardened (Fig. 2B). This suggests that long-term fistulation promotes venous wall hardening and thickening.

Compositions of blood from the aortic sinus, radial artery, and peripheral vein We performed a complete analysis of the composition of the blood specimens collected from the aortic sinus, radial artery, and peripheral vein (Table 1). The complete immunity tests showed no statistically significant differences among the 3 sampling sites; blood glucose, however, differed significantly among the 3 groups. Further pairwise comparisons using the Bonferroni post-hoc test revealed that the glycemic value of blood from the peripheral vein group was significantly lower than that of the aortic sinus group (P = .0277).
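A minimal sketch of the comparison described in the Statistical analysis paragraph above is given below. The scipy-based implementation, the variable names, and the simulated values are illustrative assumptions, not the authors' code; since the three samples come from the same patients, a paired or repeated-measures design could also be considered.

```python
import numpy as np
from scipy import stats

def compare_three_sites(aortic, radial, venous, alpha=0.05):
    """One-way ANOVA across the three sampling sites, then Bonferroni-corrected pairwise t-tests."""
    f_stat, p_anova = stats.f_oneway(aortic, radial, venous)
    pairwise = {
        "aortic_vs_radial": stats.ttest_ind(aortic, radial).pvalue,
        "aortic_vs_venous": stats.ttest_ind(aortic, venous).pvalue,
        "radial_vs_venous": stats.ttest_ind(radial, venous).pvalue,
    }
    # Bonferroni correction: multiply each pairwise P value by the number of comparisons (3)
    corrected = {name: min(1.0, 3 * p) for name, p in pairwise.items()}
    return {"F": f_stat, "P_anova": p_anova, "pairwise_bonferroni": corrected,
            "overall_significant": p_anova < alpha}

# Example with simulated lymphocyte percentages for 51 patients (illustrative numbers only)
rng = np.random.default_rng(0)
aortic = rng.normal(30, 5, 51)
radial = rng.normal(30, 5, 51)
venous = rng.normal(27, 5, 51)
print(compare_three_sites(aortic, radial, venous))
```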
The glycemic value in the peripheral vein group was also lower than that of the radial artery group, although the difference was statistically insignificant (P = .054). In addition, the percentage of lymphocytes from these 3 groups showed statistically significant differences (F = 4.82, P = .0095). Further comparisons using the Bonferroni post-hoc test revealed that the percentage of lymphocytes in the blood of the peripheral vein group was significantly lower than that of the aortic sinus group (P = .0167) or radial artery group (P = .0336). In addition, there was a significant difference in the percentages of neutrophils among these 3 groups (F = 4.77, P = .01). Similar to the results for the percentage of lymphocytes, Bonferroni post-hoc analysis revealed that the percentage of neutrophils in the peripheral vein group was significantly higher than that of the aortic sinus group (P = .0165) or the radial artery group (P = .0382). Thus, we conclude that the major difference in the compositions of arterial and venous blood lies in the percentages of lymphocytes and neutrophils.

Table 1. Comparisons of blood compositions from the aortic sinus, radial artery, and peripheral vein by various indexes (n = 51). The blood composition data are presented as mean ± standard deviation for each test, together with P values among the 3 sampling sites. P values are the results of the analysis of variance among the 3 groups: P1 is derived from the comparison between the aortic sinus and the radial artery; P2 from the comparison between the aortic sinus and the peripheral vein; and P3 from the comparison between the radial artery and the peripheral vein.

Discussion There were 3 major findings obtained from the present study. First, in the trial group of patients who had undergone arteriovenous fistula surgery ≥2 years previously, vascular sclerosis occurred in the wall of the vein ≥5 cm away from the fistula opening. Second, in this group, the vein beneath the fistula had also obviously thickened and hardened. Finally, arterial blood contained a higher percentage of lymphocytes, and a lower percentage of neutrophils, than venous blood. This may potentially contribute to the development of angiosclerosis, in addition to hemodynamics. In the present study, we first conducted ultrasonography to examine the vessel walls in uremia patients who had undergone a venous fistula operation, either 1 month before (control group), or ≥2 years previously (trial group). We observed that in patients of the trial group, vascular sclerosis occurred in the walls of veins >5 cm downstream from the fistula opening. That is, obvious hardening and thickening had occurred after the veins had delivered a flow of arterial blood for a prolonged time, even though hemodynamically the blood flow had turned into a stable and smooth venous blood flow. These findings were further supported by the immunohistochemical examination, which revealed significant intimal thickening of the venous wall under the fistula. Although a number of previous studies indicated an important role for hemodynamic changes after fistulation in the development of vascular sclerosis, [20] our findings support the additional notion that arterial blood may also contribute to the pathogenesis of angiosclerosis. Previous studies relevant to ours mainly focused on the differences in oxygen saturation between arterial and venous blood.
[21,22] To the best of our knowledge, our study is the first to comprehensively explore the differences between human arterial and venous blood, based on routine blood test, biochemical, and immunological indexes. We did not see any significant differences in composition in the blood between the aortic sinus and radial artery. Although a significant difference was detected in the blood glucose level between arterial and venous blood, the blood glucose level remained within the normal range. Thus, it is unlikely that the differences in blood glucose levels between arterial and venous blood contributed to the development of angiosclerosis. Intriguingly, we found that the blood from the peripheral vein had a significantly lower percentage of lymphocytes, but a higher percentage of neutrophils, than that of the aortic sinus or radial artery. Previous studies suggested a link between blood components such as lipids, uric acid, and arterial sclerosis. [23,24] However, our study did not show any significant difference in these components between arterial and venous blood. The mechanisms accounting for this discrepancy between our findings and others are not clear; it is likely due to the participants selected and the sample size. In the present study, we found that the percentage of lymphocytes in the arterial blood was significantly higher, but the percentage of neutrophils was lower, than that in venous blood. Previous studies proposed a link between the ratio of neutrophils and lymphocytes (N:L) in circulating blood and a number of cardiac diseases, including critical limb ischemia, coronary ectasia, and inflammatory vascular diseases. [25,26] However, those studies mainly used peripheral blood to investigate a link between the N:L and the severity of vascular diseases, and failed to show any difference between arterial and venous blood. In addition, an increase in the percentage of neutrophils generally reflects the presence of acute lesions, [27,28] while atherosclerosis is regarded as a chronic disease. [29,30] On the basis of our findings, we speculate that high lymphocyte levels may promote filtration into venous walls and subsequently contribute to hardening and thickening of the venous wall. [31] However, the exact mechanisms by which an elevated percentage of lymphocytes in arterial blood (compared with that in venous blood) promote the pathogenesis of angiosclerosis remain to be elucidated. The recycling of lymphocytes may potentially account for differences in the percentages of lymphocytes and neutrophils between the peripheral vein and artery. [32,33] In this recycling process, lymphocytes and other immune cells penetrate the postcapillary micro veins and enter the lymphatic tissues and organs, and from there they enter the lymphatic circulation. Thus, the lymphatic circulation and blood circulation are closely connected. On the basis of the recycling path of lymphocytes, we hypothesize a mechanism for the differences in percentages of lymphocytes between arterial and venous blood: at the postcapillary micro veins, some lymphocytes from the arterial blood may be transmitted to the lymphatic circulation, causing a change in the percentage of lymphocytes in the veins. There are some limitations present in this study. While the sample size was small, the difference in the percentages of lymphocytes between arterial and venous blood was significant and consistent. In addition, the study only empirically indicated that lymphocytes might have an essential role in causing atherosclerosis. 
Further studies with a large cohort will be needed to provide further evidence that may directly link elevated lymphocyte levels in arterial blood to atherosclerosis. In conclusion, we have demonstrated that arterial blood has a substantially higher percentage of lymphocytes, but a lower percentage of neutrophils, compared with venous blood. This is potentially responsible for the more frequent occurrence of vascular sclerosis in arteries than veins. This hypothesis needs to be further examined in more mechanistic studies, both in vitro and in vivo.
Memory effects in friction: the role of sliding heterogeneities We report on memory effects involved in the transient frictional response of a contact interface between a silicone rubber and a spherical glass probe when it is perturbed by changes in the orientation of the driving motion or by velocity steps. From measurements of the displacement fields at the interface, we show that observed memory effects can be accounted for by the non-uniform distribution of the sliding velocity within the contact interface. As a consequence of these memory effects, the friction force may no longer be aligned with respect to the sliding trajectory. In addition, stick-slip motions with a purely geometrical origin are also evidenced. These observations are adequately accounted for by a friction model which takes into account heterogeneous displacements within the contact area. When a velocity dependence of the frictional stress is incorporated in this the model, transient regimes induced by velocity steps are also adequately described. The good agreement between the model and experiments outlines the role of space heterogeneities in memory effects involved in soft matter friction. Introduction Transient frictional regimes pertain to many practical situations encountered in everyday life. As typical examples, one can mention the transition of a contact from rest to steady-state sliding or the wide variety of contact instabilities, including stick-slip motions, which can be encountered up to the earthquake scale. Early experiments carried out on rocks by geophysicists [1,2,3] or on metals by Rabinowitcz [4] demonstrated that the time-dependent changes in the friction force during transient regimes involve complex history effects. A typical situation explored by these studies is the frictional response of a contact when the slip rate is changed suddenly from one value to another greater value: a positive jump in the frictional stress followed by a long-term decay to steady-state over a characteristic length-scale is then observed. In order to describe these observations, Rice and Ruina [5,6] have developed a seminal constitutive law where the friction force is dependent on slip rate and on phenomenological state variables accounting for the fading memory of the contact. In this model, the state variables basically reflect the internal degrees of freedom of the sliding system. At the microscopic scale, the underlying physical mechanisms behind the state-and-rate friction laws have mostly been addressed within the context of rough, multi-contact, interfaces [7,8,9,10,11,12]. In these approaches, the history dependence of friction is ascribed to contact area ageing and rejuvenation as a result of creep and slippage mechanisms at the scale of micro-asperity contacts. Noticeably, state-and-rate models mostly consider extended contact interfaces, where the non uniformity of the deformation of the contacting bodies is generally discarded. In finite size contacts, deformation gradients are however invariably induced at the contact scale. Such gradients are expected to be especially strong in the case of finite size contacts between soft substrates such as rubbers or gels where frictional shear stresses are typically of the order of magnitude of the shear modulus [13]. As a consequence, the non uniform deformation field of the contacting bodies should be specified to describe the state of the system. 
Any approach ignoring these degrees of freedom -as it is the case in most macroscopic friction models -may thus fail in the description of history effects involved in transient regimes. In this study, we tackle memory effects in friction from the perspective of the transient sliding heterogeneities which result from the deformation of a finite size contact area during unsteady state sliding regimes. For that purpose, a smooth, single-asperity, contact interface between a deformable rubber substrate and a rigid spherical probe is perturbed by the application of either non rectilinear sliding motions or a velocity step. In the case of non rectilinear motions, we show from measurements of the displacement fields within the contact that stress and strain heterogeneities keep a memory of the past trajectories. At the macroscopic scale, one of the consequences of this memory effect is the development of friction force components normal to the sliding trajectories. In such unsteady state situations, we show that the observed transient regimes correspond to the characteristic sliding distance which is needed to the contact to recover from a trajectory or sliding velocity perturbation. In the case of rigid surfaces with anisotropic frictional properties and/or curved sliding paths, the existence of directional effects in friction have previously been reported both theoretically [14,15,16] and experimentally [17,18,19,20]. We show here that they can also be a consequence of the loss of symmetry resulting from the transient contact deformations induced by changes in the orientation of the sliding motion. We also show that another feature of non rectilinear sliding paths is the possible occurrence of new kind of stick-slip motions with a purely geometric origin. Differently from classical stick-slip motions induced by the coupling between the constitutive friction law and the dynamics of the system [21,22], discontinuous sliding motions are here induced by the curvature of the trajectory. As a starting point, we first consider the simplified, theoretical, situation of a point contact driven by an isotropic spring at a constant velocity along a linear trajectory which undergoes a sudden change in direction. Broken line sliding experiments with finite size contacts between a silicone elastomer and a spherical glass probe are subsequently discussed in section 3 in the light of this point contact toy model. From the analysis of the orientation of the macroscopic friction force and of the sliding velocity within the contact, we formulate in section 4 a friction model based on the assumption that the interfacial shear stress is oriented along the local interfacial velocity. In section 5, this model is extended to the description of experimental results for various trajectories (broken lines, circles, sine waves). In a last section, this discussion is declined in the case of a velocity step. Full details regarding the friction devices and experimental conditions are provided in a Method section at the end of the manuscript. 2 A primer: the point contact as a toy model As a toy model, we first consider a point contact which is held in the laboratory frame by means of a system including a 2D isotropic stiffness k. The point contact is lying on a flat substrate which is driven at a constant velocity along a linear trajectory which undergoes a sudden change of direction. 
We assume that inertial forces can be neglected and that friction obeys a standard model: the magnitude of the friction force applied to the sphere, F T , remains constant in the sliding regime: |F T (t)| = T . Lower values of the friction force correspond to stick phases, where the relative velocity between the slider and the substrate vanishes. Denoting respectively R(t) and r(t) the positions of the holder and of the contact point in a frame attached to the moving substrate, the equilibrium condition for the sphere reads k (R(t) − r(t)) + F T (t) = 0. In the sliding regime, as the magnitude of the friction force is constant, we have |r(t) − R(t)| = λ, where we introduce a tribo-elastic length λ = T /k. Then, in this regime, the distance between the slider and the driving point is λ. It turns out that the slider follows a tractrix curve on the moving substrate, i.e. a curve with the property that the distance from any point on the curve to a given line, measured along the tangent of the curve, is constant (see Supplementary Information for more details). This is, for example, the curve followed by the back wheel of a bike for a prescribed trajectory of the front wheel. The tractrix has a very rich mathematical history. It appears in many textbooks on differential geometry of curves, and a concise history can be found in the introduction of reference [23], for example.

Figure 1: (a) When the reorientation angle θ < π/2, the slider (at position r) remains at a distance λ from the holder (at position R) and continuously slides along a smooth path which is a portion of a tractrix passing through point F (in red). (b) When θ > π/2, the slider stops at the point F at a distance λ from the reorientation point O and stays there until the driving point reaches the point Q. Its trajectory is then a portion of a tractrix which passes through F .

In the stick regime, the distance between the slider and the driving point is less than the characteristic length λ and the position of the slider remains fixed on the substrate plane, r(t) = r S . This stick phase lasts until the slider-driving point distance reaches λ again. Past this point, the slider follows a tractrix with r S as initial condition. To determine the condition for sliding, one may express that, for thermodynamic reasons, the work per unit of time paid by the driver against sliding friction should be positive, i.e. Ṙ(t) · (R(t) − r(t)) > 0. This sliding condition expresses that the angle between the velocity of the moving substrate and the vector joining the slider to the holder should be less than π/2. When the substrate follows a broken line, two situations may occur. When the reorientation angle is less than π/2, the slider remains in a sliding regime and it follows a portion of a tractrix curve (figure 1a). In the opposite case, the slider stops, as its distance from the holder becomes less than λ, and it slides again when this spacing reaches λ again (figure 1b). Then, it follows a portion of a tractrix (see Supplementary Information for more details). In figure 2a, a vector plot of the calculated friction force applied to the point contact is presented, before and after a sudden change of the direction of the imposed linear motion of the substrate for three different reorientation angles. A transient domain is observed where the friction force gradually changes from the initial orientation to the new one. In this region, the friction force is not tangent to the imposed trajectory.
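The point-contact picture lends itself to a few lines of code. The sketch below is illustrative only and is not the authors' implementation: it advances the driving point along a broken line and, whenever the slider-holder distance exceeds the tribo-elastic length λ, pulls the slider back onto the circle of radius λ around the holder. In the small-step limit this projection reproduces the tractrix dynamics and the stick phases described above; the velocity, angle and λ values are arbitrary.

```python
import numpy as np

def broken_line(t, V=0.1, theta=np.deg2rad(135), t_turn=30.0):
    """Driving point: straight along +y at speed V, then reoriented by theta at t_turn."""
    if t < t_turn:
        return np.array([0.0, V * t])
    d = V * (t - t_turn)
    new_dir = np.array([np.sin(theta), np.cos(theta)])   # initial direction (0, 1) rotated by theta
    return np.array([0.0, V * t_turn]) + d * new_dir

def simulate(lam=0.5, dt=0.01, t_max=80.0):
    """Leash (projection) integration of the point-contact model along a broken line."""
    r = broken_line(0.0) - np.array([0.0, lam])   # slider starts one leash length behind the holder
    traj, elong = [], []
    for t in np.arange(0.0, t_max, dt):
        R = broken_line(t)
        gap = R - r
        dist = np.linalg.norm(gap)
        if dist > lam:                  # sliding: project the slider back onto |R - r| = lam
            r = R - lam * gap / dist
        # while dist <= lam the slider sticks (r is unchanged), as in the stick phase above
        traj.append(r.copy())
        elong.append(R - r)             # spring elongation; the friction force opposes it while sliding
    return np.array(traj), np.array(elong)

traj, elong = simulate()
print(traj[-1], np.linalg.norm(elong[-1]))   # the slider realigns one leash length behind the new direction
```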
More generally, for any driving curve, the slider trajectory is then analogous to that of a dog pulled by its master at the end of a leash. The dog stops as soon as the leash is loosened. The qualitative characteristics of the dog trajectories depend strongly on the relative values of the leash length and of the geometrical characteristics of the driving curve. For highly curved driving trajectories, stick and slip phases may also alternate. In real friction situations with a finite contact size, even with a rigid driving device, an isotropic stiffness exists in the system which is related to the elasticity of the contacting materials. One may therefore expect the same qualitative behaviours as in the point contact case to be observed. However, the details should differ since the displacements and stress fields may not be uniform in the contact area. In the following, experiments are presented for a glass lens in contact with a rubber substrate driven along different paths.

Broken line linear sliding experiments We now consider finite size contacts between a smooth silicone substrate and a smooth glass lens under imposed normal load conditions. Here, an elastic length λ arises from the deformation of the PDMS substrate. It may be defined as λ = F S /k c , where F S = |F S | is the magnitude of the steady-state friction force under linear sliding and k c is the lateral contact stiffness. Here, k c was determined experimentally from the initial linear part of the force versus displacement curves, i.e. when the static contact of radius a 0 is dragged on the surface of the PDMS substrate in the absence of any significant slip at the interface [24]. Using the friction set-up detailed in figure 3 (for further details, see the Methods section), a sudden change θ in the orientation of the linear sliding trajectory of the PDMS substrate is applied after the contact has been prepared in a steady-state frictional state at a constant velocity V . The latter is achieved by moving the PDMS substrate over a distance of 2 mm, i.e. larger than the initial static contact radius (a 0 = 1.38 ± 0.01 mm).

Figure 3: Schematic description of the custom-built setup for friction measurements under non rectilinear sliding motions. A surface-marked PDMS substrate (a) is fixed on two crossed, motorized, linear translation stages (b) allowing the sliding direction to be varied. Contact with a plano-convex glass lens of radius R = 12.96 mm (c) is achieved under imposed normal load using a dead weight arm (d). A CMOS camera (e) allows contact visualization through the transparent PDMS substrate. The frictional force components in the contact plane are measured using a custom-made sensor consisting of a silicone disk (f) enclosed between a glass disk (g) and a glass plate patterned on their internal surfaces (patterned areas are indicated by the thick marks). After stiffness calibration, the force is determined from sub-pixel measurements of the relative displacement between the two patterns using a CMOS camera (h).

Figure 2b displays vector plots of the friction force F T applied to the glass lens for various positions (X(t), Y (t)) ⊺ of the moving PDMS substrate driven at a velocity V = 0.1 mm s −1 . In this figure, the blue lines correspond to the trajectories of the PDMS substrate for θ = π/4, π/2 and 3π/4, the change θ in their orientations being applied at the position (X = 0, Y = 0). Immediately after the change in the orientation of the driving motion, the friction force is no longer collinear to the sliding trajectory.
Then, a progressive realignment of the friction force with respect to the sliding direction is observed, in a way which is qualitatively similar to the calculated point contact situation (cf figure 2a). In figure 4, the angle of the friction force with respect to the X axis of the sliding motion is reported as a function of the sliding distance for values of the angle θ ranging from π/8 to π. Except for θ = π, it turns out that the reorientation of the friction force occurs over a length close to the size of the static contact radius (shown as a horizontal bold line in the figure). Additional experiments carried out at V = 10 µm s −1 (not shown) indicated that this length is not significantly affected by the sliding velocity. For θ = π, the motion is fully reversed with the friction force passing through zero; as a consequence, the direction of the friction force switches instantaneously from 0 to π.

Figure 4: From bottom to top: θ = π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8 and π. Open symbols correspond to the prescribed change θ in the direction of the sliding motion. The radius of the static contact is indicated by the length of the horizontal bold line. Dotted lines correspond to simulations carried out using equation (15) in situations without stick (θ ≤ π/2).

Figure 5 (top) shows the magnitude of the friction force F T during the transient regime for π/4 ≤ θ ≤ π. In order to account for slight variations in the magnitude of F T between different experiments, the friction force has been normalized with respect to its steady-state value F S . When θ is increased above π/2, a transient decrease in the magnitude of the friction force F T is evidenced, whose amplitude increases with θ. From the examination of the sliding velocity fields within the contact (figure 5, bottom), it turns out that this drop in F T is associated with a re-stick of the contact immediately after the orientation of the sliding trajectory has been changed. Indeed, the slope of the force versus displacement curve immediately after the change in the orientation of the trajectory would correspond to the lateral contact stiffness k c of the sheared contact. As the PDMS substrate is further displaced with respect to the lens, slip progressively re-invades the contact from its periphery, until a full sliding condition is achieved close to the point where the magnitude of the friction force recovers its steady-state value (in figure 5 (top), the occurrence of such a partial slip condition is indicated by bold lines). The development of slip within the contact results in a continuous decrease in the contact stiffness which would make a quantitative comparison with the point contact model difficult. However, similarly to the point contact situation, the finite size contact re-sticks for a critical value of the angle θ close to π/2. This analogy between point and finite size contacts regarding the slip-to-stick condition can be accounted for by the fact that, at the time the sudden change θ in the orientation of the trajectory is applied, the velocity field is very uniform, except for small Poisson's effects [25]. The contact is thus characterized by a nearly unique orientation of the sliding velocity, irrespective of its size: this is equivalent to a point contact at the position where it undergoes a change in the sliding direction.
At this stage, it turns out that the simple point contact model and the associated tractrix can provide a qualitative understanding of the change in the orientation and of the magnitude of the friction force following a discontinuity in the sliding direction. However, we show below that the details of the orientation of the friction force during the transient regime depend on the heterogeneous sliding conditions which are achieved at the contact interface as a result of the deformation of the soft substrate. For that purpose, we focus on the relationship between the orientation of the macroscopic friction force and the distribution of the sliding velocity within the contact for θ ≤ π/2, i.e. in the absence of stick. As shown in figure 6a, measurements of the displacements at the surface of the substrate reveal an heterogeneous distribution of the velocity field. It also appears that, on an average, the local sliding velocity is strongly misaligned with respect to the imposed motion. In figure 6b, the orientation of the macroscopic friction force F T is reported as a function of the average orientation of the normalized sliding velocity V/ |V| for θ ≤ π/2. It turns out that the orientation of the macroscopic friction force matches perfectly the average orientation of the sliding velocity field within the contact. This observation can be justified by considering that (i) locally, the frictional shear stress τ is tangent to the sliding direction; (ii) τ is independent on contact pressure. This latter condition is supported by previous results using similar smooth single asperity glass/PDMS [25,26]. In what follows, we derive a model based on these two assumptions in order to account for the progressive re-orientation of the velocity field and for the macroscopic friction force during the transient regime. Unsteady-state friction model In this section, a model is derived to account for the experimental results, by assuming only that, within the contact area, the interfacial shear stress is oriented along the local interfacial velocity, the velocity field being itself shaped by the friction history. In accordance with earlier experimental results, the amplitude of the friction stress for this smooth glass-rubber system is supposed to be independent of the normal stress and (weakly) dependent of the interfacial velocity [25,26]. The model is implemented within the framework of the linear elasticity approximation, neglecting inertia effects. To simplify, the Poisson's ratio of the substrate is taken as ν = 1/2 as it is nearly the case for rubber materials. While experimental observations show that the contact is not perfectly circular during sliding (cf figure 6a), it is assumed for the sake of simplicity that the contact is a disk. A spherical lens, fixed in the laboratory frame, is maintained in contact with a rubber substrate which is displaced in its plane. Its trajectory R(t) is prescribed and the problem is to describe the displacements and the friction stress u = (u x , u y ) ⊺ and τ = (τ xz , τ yz ) ⊺ in the contact area to deduce the global friction force F T which applies to the lens. The velocity dependence of the frictional stress reads where v is the interfacial velocity and τ 0 (|v|) is a prescribed friction law which, as a first approximation, can be deduced as the ratio of the friction force to the contact area in steady state rectilinear friction experiments. 
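Written out, the local friction rule just stated takes the following form (our notation; the exact equation of the paper is not reproduced in this text):

$$\boldsymbol{\tau}(\mathbf v)\;=\;\tau_0\!\left(|\mathbf v|\right)\,\frac{\mathbf v}{|\mathbf v|},$$

i.e. the magnitude of the interfacial shear stress is set by the prescribed steady-state law τ0 and its direction by the local sliding velocity v.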
In order to evaluate the interfacial velocity, one may remark that a point ρ which is fixed in the lens frame is, at a time t, in contact with a point of the substrate, which would be at R 0 (t) at rest in the substrate frame and which underwent a displacement u(t, R 0 (t)) under the influence of the stress field: The displacement velocity is, up to the first order, The relative velocity of this point with respect to the lens is then In the case of incompressible materials such as rubber, Green's analysis establishes that the static surface displacements induced by the vertical and lateral components of a point loading on an elastic half-space are fully decoupled [27]: where the symbol * stands for a 2D convolution operation and With a matricial notation, one may express u = G * τ (10) and thus The local friction hypothesis equation (3) together with the above expression give a self-consistent problem, which contains a time differential equation for the traction field τ . The non-linearity of the friction law equation (3) makes it difficult solve the problem in a closed form. In the following, the equation is numerically solved by evaluating the above equation in the Fourier space:v For steady state situations, the time partial derivative term in eq. (11) vanishes and we are left with a simple selfconsistent equation. In the case of a rectilinear stationary regime along the y-axis, R(t) = (0, V t) ⊺ . Eq. (11) can thus be written as where j is a unit vector along the y-axis. An iteration method, where the convolutions are computed in the Fourier space, is used to numerically evaluate the self-consistent solution of the problem using an empirically determined logarithmic friction law in equation (3). The obtained velocity and stress fields are mostly homogeneous with a small distortion due to Poisson's effect. The results are very similar to the experimentally obtained fields [25] though no detailed comparison is presented here since the distortions are weak. In the case of broken lines experiments, the system is first prepared by a displacement along the y-axis, long enough to reach a steady state. Then, the substrate is suddenly driven along a direction which makes an angle θ with the initial displacement axis.The corresponding equation then reads where the calculated stationary state described above is used as an initial condition. This system is also solved iteratively with a time step allowing for the convergence of the solution. for the sake of simplicity, the calculations were carried out under the assumption that the contact area remains circular. The model presented here can be extended to more complex situations quite straightforwardly. One may describe the stress behaviour for different local friction laws: a linear dependence of the traction to the normal pressure (Coulomb's law), for example, or, for a rough glass lens on a rubber substrate, a power law [28]. The amplitude of the interfacial stress is then determined using Hertzian stress. For smooth glass/PDMS contacts, we have also previously shown that the frictional shear stress τ is proportional to the local stretch ratio ζ within the contact, i.e. τ = ζτ 0 , where τ 0 is the stress in the absence of stretch [13]. Such a feature could also be implemented in the numerical resolution of the problem. For non-rubber substrate, the Poisson's ratio being different from 1/2, normal and lateral components of stress and displacements are coupled. 
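As an illustration of the kind of numerical scheme described above, the sketch below implements an under-relaxed fixed-point iteration with FFT-based convolutions for the steady rectilinear case. It is deliberately simplified and is not the authors' code: the scalar kernel stands in for the full tensorial Green's function of the incompressible half-space, the logarithmic form of τ0 is an assumed placeholder for the measured friction law, and the sign conventions in the velocity relation are our reading of the governing equation, which is not reproduced in this text.

```python
import numpy as np

N, L = 128, 4.0e-3                 # grid size and box length (m) -- illustrative values
a, G0, V = 1.4e-3, 0.5e6, 1.0e-4   # contact radius (m), shear modulus (Pa), driving velocity (m/s)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="xy")
inside = X**2 + Y**2 <= a**2       # circular contact area, as assumed in the model

qx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
QX, QY = np.meshgrid(qx, qx, indexing="xy")
Q = np.sqrt(QX**2 + QY**2)
Q[0, 0] = np.inf                   # discard the rigid-body (q = 0) mode
G_hat = 1.0 / (G0 * Q)             # ASSUMED scalar surface kernel; the paper uses the full Green tensor

def tau0(v, tau_ref=0.16e6, v0=1e-6, k=0.25):
    """Assumed logarithmic steady-state friction law (placeholder for the measured one)."""
    return tau_ref * (1.0 + k * np.log(1.0 + v / v0))

tau = np.zeros((2, N, N))
tau[1][inside] = tau0(V)           # initial guess: uniform stress along the sliding direction y
for _ in range(200):
    # surface displacement u = G * tau, computed component-wise in Fourier space
    u = np.stack([np.fft.ifft2(G_hat * np.fft.fft2(c)).real for c in tau])
    du_dy = np.stack([np.fft.ifft2(1j * QY * np.fft.fft2(c)).real for c in u])
    # steady rectilinear driving along y: v = V j - V du/dy (sign convention assumed here)
    v = np.stack([-V * du_dy[0], V * (1.0 - du_dy[1])])
    vn = np.maximum(np.sqrt(v[0]**2 + v[1]**2), 1e-12)
    tau_new = np.where(inside, tau0(vn) * v / vn, 0.0)   # stress aligned with the local velocity
    tau = 0.7 * tau + 0.3 * tau_new                      # under-relaxed fixed point
```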
Though this complicates the self-consistent equations, it does not pose any particular difficulty. The occurrence of large deformations as well as non circular contact areas could also be taken into account using numerical schemes such as Finite Element (FE) simulations. It can be remarked that, for situations involving a change in the orientation of the sliding trajectory, the weak velocity dependence does not affect very much the numerical results as the amplitude of the interfacial velocity is rather homogeneous. An explicit dependence was however included in the related computations, since it does not complicates the calculation. This model is also used later in the text to account for circular trajectories or for velocity steps in linear sliding. 5 Discussion of the friction model for non rectilinear trajectories 5.0.1 Broken line trajectories As a validation of the friction model, we first consider the orientation of the friction force in the case of broken line sliding experiments. Numerical calculations using equation (15) have been performed in situations without stick, i.e. for θ ≤ π/2. From linear sliding experiments carried out at driving velocities V ranging from 1 µm s −1 to 1 mm s −1 , an empirical logarithmic friction law was derived in the form with v 0 = 1 µm s −1 , τ 0 = 0.163 MPa and k = 0.25. As shown by the dotted lines in figure 4, theoretical results using this friction law are in very good agreement with experimental results. While in the point contact situation, the characteristic length involved in the reorientation of F T depends only on two geometrical parameters, namely the angle θ and the characteristic length λ, an additional length, the contact radius a, arises from elasticity in the case of finite size contacts. The relevance of our friction model basically depends on the ratio λ/a which, for a pressure-independent frictional stress τ , can be shown to be close to τ /E. For highly rigid materials, τ /E ≪ 1, the reorientation of the friction force thus occurs over a very short distance. Conversely, when τ /E ≫ 1, the reorientation of F T obeys mainly the point contact model irrespective of the contact size. For soft materials such as elastomers, it is usually observed that τ /E ≈ 1 [25,29] and the orientation of the friction force then involves the sliding heterogeneities which are accounted for in the model. Circular trajectories The model was further applied to another situation where circular trajectories are applied to the PDMS substrate. Here, a steady-state situation is achieved: the substrate displacement field is stationary in a rotating frame where the origin of the substrate plane is fixed. By virtue of the curvature of the trajectory, there is a loss of symmetry of contact deformation. As a consequence, the friction force is no longer collinear to the tangent to the circular sliding path. As schematized in figure 7a, the PDMS substrate initially at rest is linearly displaced by a distance L and then describes a circular trajectory around its initial position. Experiments have been carried out for radii of the trajectory L ranging from 0.1 to 2 mm (the static contact radius is a 0 = 1.37 ± 0.01 mm). Here, we only consider the steady-state situations which are achieved after the substrate has described more than a whole circle. 
When L < 0.5 mm, a partial slip condition is evidenced from the velocity fields: as shown in figure 7b for L = 0.4 mm, a large part of the contact remains stuck while some slip occurs in a crescent-like area at the contact periphery. On the other hand, a full slip condition is achieved under steady-state sliding when L > 0.5 mm. In figure 7c, the angle α of the macroscopic friction force with respect to the tangent to the imposed circular motion is reported as a function of the normalized radius of the trajectory (τ L/Ea 0 ), where τ is the average frictional stress and E is the Young's modulus (τ /E = 0.19 for the considered velocity, V = 40 µm s −1 ). In figure 7c, the vertical dotted line delimits the transition from partial slip to full slip condition (for τ L/Ea 0 ≤ 0.07), the horizontal dotted line corresponds to π/2, and the solid blue line corresponds to theoretical calculations using equation (19). When L → 0, the size of the slip zone vanishes and the measured force then reflects the elastic response of the sheared contact. According to the loading path shown in figure 7a, the lateral force is aligned with respect to the radius of the trajectory, i.e. α = π/2. Conversely, when L → ∞, the curvature of the trajectories of points on the surface of the substrate becomes negligible within the contact. As a consequence, the angle of the friction force with respect to the tangent to the trajectory tends to vanish. This situation was addressed theoretically by our friction model. For a steady-state circular substrate trajectory R(t) = (L cos ωt, L sin ωt) ⊺ , the mechanical fields are stationary in a frame which is centred on the lens but which is in rotation with it (see figure 8), i.e. u(t, ρ) = R ωt u (0, R −ωt ρ), where R α is a rotation operator of angle α. The time partial derivative of the displacement, and hence the interfacial velocity field of eq. (11) evaluated at t = 0, can then be expressed in this rotating frame. In figure 7c, it turns out that the friction model adequately captures the change in the friction force angle α with the radius of the circular trajectory as far as a full sliding condition is considered. In its present state, our model is not able to predict the critical radius for the transition from full to partial slip. However, the impossibility of achieving convergence in the calculations close to this critical value may be viewed as a consequence of the lack of a solution ensuring a full sliding condition. As a conclusion, the problem of circular sliding is thus driven by the ratio of the radius of the trajectory L to the contact radius a. Any analogy with the point contact situation, i.e. a = 0, would therefore be inoperative. A striking feature of circular trajectories is the change in the orientation of the friction force from a radial to an orthoradial orientation when L/a increases from low values in the elastic stick regime to large values in the full sliding regime.

Extension to sine wave motions We now extend our analysis of curvilinear sliding to sine wave motions. In such situations, the sliding interface is continuously perturbed by the reorientation of the sliding trajectory. Similarly to the broken line experiments, some stick-slip motions are induced depending on the curvature of the trajectory. In figure 9, the domain for the occurrence of such phenomena is mapped in a phase diagram as a function of the normalized amplitude A/λ and wavelength Λ/λ for static contact radii ranging from 1.3 to 2.0 mm.
From experimental data, it turns out that the boundary between the stick-slip (SS) and continuous sliding (CS) regimes is independent on the contact size. In other words, all the effects of the contact radius are embedded within the elastic length λ ≈ τ a 0 /E. The independence of the boundary between CS and SS regimes on contact size is further evidenced by point contact calculations (see Supplementary Information for details): as shown by the continuous line in figure 9, these calculations provide a very good description of the CS/SS boundary for finite size contacts. This agreement can be rationalized from a consideration of the sliding velocity field at the transition from slip to stick conditions. As shown in figure 10, the sliding velocity field is very homogeneous just before the contact re-stick. As a consequence, contact size is no longer a relevant length scale. Moreover, the slip to stick transition systematically occurs when the angle between the friction force and the tangent to the imposed motion is close to π/2 (results not shown), as predicted by the point contact model. However, the comparison with the point contact situation fails when considering the orientation of the friction force. As shown in figure 11 for experiments carried out in the continuous sliding regime, the friction force may never be aligned with respect to the tangent to the sliding trajectory of the PDMS substrate (indicated by the blue line in the figure). This is indeed the case for A = 0.4 mm. For A = 2 mm, the friction force only realign in a portion of the sinus cycle where the trajectory is nearly linear. As for the broken line experiments, it was observed that the orientation of the friction force corresponds to the average orientation of the sliding velocity within the contact (results not shown). Here again, the orientation of the friction force with respect to the sliding trajectory is dictated by the loss of homogeneity of contact deformation which results from the curvature of the trajectory. Velocity steps We now address the generic situation of a velocity step from V 0 to V which is often considered in the discussions of state-and-rate friction models. In figure 12, the normalized friction force F T /F 0 T (where F 0 T is the steady state friction force just before the velocity step) is reported as a function of the normalized distance V t/a 0 . We consider here a one decade velocity step from 0.001 mm s −1 to 0.01 mm s −1 and, conversely, from 0.01 mm s −1 to 0.001 mm s −1 . Here, no peak force is evidenced when the velocity is increased. The transient regime occurs over a typical distance much less than the contact size and its magnitude depends on whether the velocity is increased or decreased. As detailed in the Supplementary Information, the transient regime can first be addressed in the light of the point contact situation. Just after the jump, calculations show that which can be numerically solved to determine the sliding velocity field following an instantaneous change in the driving velocity from V 0 to V . In these simulations, the value k = 0.37 of the parameter of the logarithmic friction law (equation 16) was set from the experimental values of the frictional stress under steady-state sliding at V = 0.001 and V = 0.01 mm s −1 , respectively. As shown in figure 12, the simulations are in very good agreement with the experimental data. This agreement is also preserved if one consider the sliding velocity field. 
This is evidenced in figure 13, where experimental and calculated velocity profiles taken along contact cross-sections perpendicular to the direction of the sliding motion are reported for various non dimensional distances V t/a 0 . These profiles show that the perturbation induced by the velocity step is progressively accommodated from the periphery to the centre of the contact until steady-state is achieved, at least for the positive velocity jump. This transient reorganization of the velocity field at the interface and the associated changes in the friction force are thus adequately described by our simple contact model, which takes into account only the velocity-dependence of the frictional stress and the elastic response of the rubber substrates. Going back to the framework of the state-and-rate friction model proposed by Rice and Ruina [6,5], this means that a physical description of the state variable accounting for memory effects could tentatively be developed here from the modelling of the sliding heterogeneities induced by contact deformation. Some limitations in this description are, however, found when the velocity step is carried out in a range of higher velocities. As shown in the inset of figure 12, when the velocity is increased from 0.1 to 1 mm s −1 , a peak force is induced during the transient regime which is not captured by our model. Here, some crack-like processes such as those involved in the stiction of adhesive contacts [30,31] probably occur. The description of such phenomena would deserve further investigation, which is beyond the scope of this work.

Figure 12: Normalized friction force F T /F 0 T as a function of the normalized distance V t/a 0 during a velocity step from 0.001 to 0.01 mm s −1 (blue) and from 0.01 to 0.001 mm s −1 (red). F 0 T is the steady-state friction force before the application of the velocity step at V t/a 0 = 0. Large dotted lines correspond to the simulations using equation 21. Small dotted lines correspond to equation (20) where k = 6.9 10 3 N m −1 was determined from the measured lateral contact stiffness. Inset: same for a velocity step from 0.1 to 1.0 mm s −1 , showing the occurrence of a force peak.

Conclusion In this study, we have addressed the issue of the transient frictional response of a glass/rubber contact interface when it is perturbed by either non rectilinear sliding motions or sliding velocity steps. The heterogeneous deformation of the interface in the finite size contact was found to control the frictional stress and the resulting macroscopic friction force. These effects were found to account for a wide variety of spectacular macroscopic behaviours which elude classical friction models. As an example, the friction force may no longer be tangent to the sliding trajectory. Some stick-slip motions induced by the curvature of the trajectories were also identified. The observed behaviours are especially relevant to soft matter systems where the ratio of the frictional stress to the Young's modulus is frequently close to unity. In order to account for these phenomena, we have developed a simple friction model which assumes that the interfacial shear stress is oriented along the local interfacial velocity. This model was found to describe accurately the observed behaviours with non rectilinear sliding trajectories. When a velocity dependence of the frictional stress is added to the model, it also allows the transient regimes resulting from velocity steps under linear sliding conditions to be accounted for.
The observed memory effects are thus adequately explained by this model. This agreement is preserved as far as a full sliding condition is maintained at the contact interface; in its current state the model does not allow to describe partial slip situations. The description of sliding heterogeneities at the contact scale thus provides a physical substance to the memory effects embedded in the state-and-rate friction model. This is thought to pave the way to a more physical description of several engineering problems, for which dynamic friction phenomena are involved. Bolted assemblies are for instance sometimes designed in order to ensure and localize energy dissipation in a mechanical structure such as friction dampers used as seismic resistant connections. Such bolted assemblies are also investigated for the passive control of For all these applications, the fine scale description of the sliding heterogeneities is also thought to establish grounds for a macro-scale description based on yield surfaces, such as for plasticity. The simulation of complex systems involving dynamic friction could then benefit from the numerous robust numerical schemes developed for the simulation of plasticity. Methods Friction experiments are carried out using transparent poly(dimethyl siloxane) (PDMS) flat substrates in contact with a smooth BK7 plano-convex glass lens (radius R =12.9 mm). Parallelepiped PDMS specimens 15x40x40 mm 3 are obtained by cross-linking at 70°C for 48 hours a mixture of commercially available silicone prepolymer and hardener (Sylgard 184, Dow Chemicals, USA) in a 10:1 weight ratio. As detailed in reference [25], a square network of small cylindrical holes (diameter 20 µm, depth 5 µm and spacing 80 µm) is stamped on the PDMS surface by means of standard soft lithography techniques in order to measure surface displacement fields in the contact zone. Indeed, under transmitted light observation conditions, this pattern appears as a network of dark spots which are easily detected using image processing. Friction experiments are performed using a custom-built device where a constant normal load (from 0.9 to 3.3 N) is applied to the glass lens by means of a dead weight arm. During experiments, the position of the glass lens is fixed while the PDMS substrate is moved by means of two crossed motorized translation stages (M.404.1PD and M.404.6PD, PI, Germany). The synchronous displacement of these two stages allows to generate various sliding trajectories (circles, broken lines or sine wave motions) at a constant imposed velocity V ranging from 0.04 to 0.4 mm s −1 . The components of the friction force F T within the contact plane are continuously monitored using a dedicated, homemade,sensor located just beneath the glass lens. As fully described in [32], this sensor consists in a thin (≈ 1 mm) PDMS layer enclosed between to glass disks 20 mm in diameter. The inner faces of the two disks are patterned in order to allow for optical detection. After calibration of the isotropic shear stiffness of the sensor (5.05 10 5 N m −1 ), the friction force components are determined from the optical measurement of the relative displacements between the two glass disks using a CCD camera (MV1-D1312, PhotonFocus, Switzerland) and a zoom lens operated in reflection mode. Pictures of the contact are continuously recorded through the thickness of the transparent PDMS substrate using a CCD camera (2048 x 2048 2 , 8 bits, SVS Exo,Vistek, Germany) and a long-working distance objective (APO Z16, Leica, France). 
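To make the displacement-field measurement described above more concrete, the sketch below shows one simple way to extract dot positions and displacements from a pair of images of the marked PDMS surface. It is only illustrative: the threshold choice, the nearest-neighbour matching, and the scipy-based implementation are our assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy import ndimage

def dot_centroids(img, thresh=None):
    """Sub-pixel centroids of the dark dots in a grey-level image (centre of mass of each spot)."""
    if thresh is None:
        thresh = img.mean() - 2.0 * img.std()   # assumed threshold; dots appear darker than the background
    mask = img < thresh
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

def displacement_field(img_ref, img_def, max_disp=10.0):
    """Match each reference dot to its nearest neighbour in the deformed image."""
    ref, moved = dot_centroids(img_ref), dot_centroids(img_def)
    field = []
    for c in ref:
        d = np.linalg.norm(moved - c, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_disp:                      # reject dots lost between frames
            field.append((c, moved[j] - c))      # (position, displacement) in pixels
    return field
```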
Displacement fields are measured with sub-pixel accuracy from the detection of the dots pattern on the PDMS substrate using conventional image processing techniques. They were systematically corrected from the measured displacements of the lens which result from the compliances of the dead weight arm supporting the lens and of the load sensor. Velocity steps experiments were carried out using a separate, dedicated, linear sliding device equipped with a high bandwidth (5 kHz) lateral load sensor (Sensotec 5N, Spain) and a data acquisition system operated up to a 1 kHz sampling rate. Experiments were carried out at imposed velocity under a constant 1 N normal force applied using a linear coil actuator (PIMag V-275, PI, Germany) in closed loop control. The sliding velocity was varied from 0.001 to 1.0 mm s −1 by means of a motorized linear translation stage (M403.4PD, PI, Germany). The ratio of the lateral device stiffness to the lateral contact stiffness is greater than 50. Contact pictures were recorded through the PDMS substrate using the same optics as that described above and a camera (1024 2 ,8 bits, MV-D1024, PhotonFocus, Switzerland) which was operated up to a frame rate of 90 kHz. Supplementary Information, Fazio et al Point contact in the sliding regime: generalised tractrix As schematically depicted in figure 14, a point contact (red bullet) is lying on a moving substrate and held to a fixed holder by means of a flexible fibre with an 2D isotropic compliance k in the plane of the substrate. In the (x, y) plane, the location of the point contact and of the vertical projection of the holder are denoted r(t) = (x(t), y(t)) ⊺ and R(t) = (X(t), Y (t)) ⊺ , respectively. The distance in between these two points is the tribo-elastic length λ and the line passing through both points is tangent to the slider trajectory. These conditions define this trajectory as a generalised tractrix corresponding to the directrix R(t) and to the parameter λ (see for example reference [23]) 1 . It obeys the following equations: An initial condition must be given, where the slider-holder distance is λ. As discussed in the main text, this sliding regime exists ifṘ(t)(R(t) − r(t)) =Ṙ(t)ṙ(t) > 0. When this condition fails at t = t s , the slider stops with respect to the substrate until the sliding condition |r(t s ) − R(t)| = λ becomes fulfilled again. The trajectory is then a tractrix again with a new initial condition. Linear displacement For a linear displacement of the substrate along the x-axis, then, X(t) = V t, Y (t) = 0, it can be checked that the general solution of this system verifying x(0) = λ cos ϕ, y(0) = λ sin ϕ with π/2 ≤ ϕ ≤ π can be written as To describe an experiment where the driving direction is suddenly reoriented by an angle θ at t = 0 after a sliding phase along the x-axis, the previous trajectory must be rotated by an angle θ around the origin. Two situations may occur: • When θ < π/2, there is not stick-phase. For t > 0 the trajectory of the slider follows a tractrix curve passing through F . The initial condition at u = 0 is ϕ = π − θ (see Fig. 1a in the main text). • When θ > π/2, the slider stops at t = 0 while the driver still moves until the distance between both points reaches λ again, at an instant t 0 = −2(λ/V ) cos(θ). This condition corresponds to ϕ = θ (see Fig. 1b in the main text). 
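The system of equations referred to above is not reproduced in this text. One consistent way to write it, using only the two stated properties (the slider velocity is directed along R − r and the distance |R − r| stays equal to λ), is

$$\dot{\mathbf r}(t)\;=\;\frac{\dot{\mathbf R}(t)\cdot\bigl(\mathbf R(t)-\mathbf r(t)\bigr)}{\lambda^{2}}\,\bigl(\mathbf R(t)-\mathbf r(t)\bigr),\qquad \bigl|\mathbf R(0)-\mathbf r(0)\bigr|=\lambda,$$

which keeps |R(t) − r(t)| = λ, makes ṙ parallel to R − r, and hands over to the stick branch (ṙ = 0) when Ṙ · (R − r) ≤ 0; this is our reconstruction from the stated properties, not necessarily the exact equations of the Supplementary Information.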
Sine driving curve For a general driving curve, depending on the geometrical characteristics of the curve, the slider may follow pieces of generalised tractrices during the slip phases or remain motionless on the substrate during the stick phases. For a sine driving curve (X(t) = vt, Y (t) = A sin(2πvt/Λ)), the steady state regime may correspond to continuous sliding or periodic stick-slip. The trajectory is numerically determined, by solving eq. (SI. 22) in the sliding phases. For a given period, by calculating the solution for increasing values of the amplitude, the boundary of the steady sliding regime is found when in the established regime the quantity dR/dt (R(t) − r(t)) becomes negative at some point. If this quantity remains positive, no stick phases occurs and the numerically obtained solution is valid. If it changes its sign at t = t s , the slider stops (ṙ(t) = 0) until |r(t s ) − R(t)| reaches the value λ again, giving a new initial condition for the eq. (SI. 22). The boundary domain obtained in this way is reported in figure 8 in the main text for comparison to the finite contact size case. Velocity step We consider here the response of the point contact to a sudden jump in the sliding velocity, from V to V 0 at time t = 0. The following velocity-dependent friction law is assumed where A and B are two constants and V 0 is the initial velocity. This velocity step from V 0 à V thus corresponds to a change in the elastic length from λ 0 = A/k to λ = λ 0 + B/k ln V /V 0 . The motion obeys the following equation figure 15 for velocity steps from 10 −3 to 10 3 by one decade increments. For each increase in velocity, an unique slope is achieved at short times whatever the magnitude of the velocity step. The same is observed for a decrease in velocity but with a steeper slope. Accordingly, the calculation shows that just after the jump
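A minimal numerical sketch of the point-contact response to a velocity step is given below. The parameter values are illustrative, and the quasi-static balance k·delta = A + B·ln(v/V0) between the spring elongation delta = X − x and the rate-dependent friction force is our reading of the equation of motion referred to above.

```python
import numpy as np

def velocity_step(V0=1e-3, V=1e-2, A=1.0, B=0.1, k=7e3, dt=1e-4, t_max=0.1):
    """Spring elongation delta(t) after a step of the driving velocity from V0 to V at t = 0."""
    delta = A / k                                     # steady-state elongation lambda_0 at the initial velocity
    t_vals = np.arange(0.0, t_max, dt)
    out = np.empty_like(t_vals)
    for i, _ in enumerate(t_vals):
        v_slider = V0 * np.exp((k * delta - A) / B)   # invert the balance k*delta = A + B*ln(v/V0)
        delta += (V - v_slider) * dt                  # d(delta)/dt = V - v_slider
        out[i] = delta
    return t_vals, out

t, delta = velocity_step()
# delta relaxes from A/k towards (A + B*ln(V/V0))/k over a finite sliding distance
```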
The relationship between Google search interest for pulmonary symptoms and COVID-19 cases using dynamic conditional correlation analysis

This study aims to evaluate the monitoring and predictive value of web-based symptom (fever, cough, dyspnea) searches for COVID-19 spread. Daily search interests from Turkey, Italy, Spain, France, and the United Kingdom were obtained from Google Trends (GT) between January 1, 2020, and August 31, 2020. In addition to conventional correlational models, we studied the time-varying correlation between GT search and new case reports; we used dynamic conditional correlation (DCC) and sliding windows correlation models. We found time-varying correlations between pulmonary symptoms on GT and new cases to be significant. The DCC model proved more powerful than the sliding windows correlation model and captured high time-varying correlations (r ≥ 0.90) during the first wave of the pandemic. We used a root mean square error (RMSE) approach to obtain symptom-specific shift days and showed that pulmonary symptom searches on GT should be shifted separately. Web-based search interest for pulmonary symptoms of COVID-19 is a reliable predictor of later reported cases for the first wave of the COVID-19 pandemic. Illness-specific symptom search interest on GT can be used to alert the healthcare system to prepare and allocate resources ahead of time.

There is also some disagreement in the field about the validity of using Google Trends as a tool for digital epidemiology [14][15][16]. GT data can be influenced by many factors: historical events, public interest, or media coverage. However, when we study the relationship of this fragile data with actual data such as daily confirmed COVID-19 cases, the resulting correlation can be more reliable. Thus, monitoring this relationship can be a viable tool for understanding the movement of the pandemic. Lippi et al. investigated the capacity of the Google search volume of symptoms such as fever, cough, and dyspnea to predict the trajectory of the early 2020 COVID-19 outbreak in Italy using Spearman's correlation method. They concluded that continuous monitoring of GT is a valuable instrument for the early detection of COVID-19 outbreaks 12. Most studies used conventional correlation methods to determine the relationship between symptom search and cases 12,[17][18][19]. Other studies employed moving average (MA) methods to smooth daily fluctuations of symptoms and later new case emergence, and they selected three to seven days as their moving average 20,21. Some authors also preferred shifting the symptom search results to match the GT search and new cases [21][22][23]. One common denominator in all these studies was the use of non-dynamic statistical procedures. Another approach is to use wave analysis to detect the co-movement between symptoms and cases 24; however, this approach has the limitation of not seeing correlation over time. Asseo et al. relied on sliding windows correlations, a straightforward time-varying approach, to assess the relationship between taste and smell loss searches on GT and emerging case numbers. The sliding windows correlation method allows correlations to be monitored for each time period separately but still uses Pearson correlations 25.
Asseo et al.'s approach carries the limitation of conventional correlation, which lacks the ability to capture time-varying co-movement. The DCC model, on the other hand, considers both time-varying correlation and time-varying variances, and this method is more powerful than conventional correlation methods, including sliding windows with Pearson or Spearman correlation analysis 26. The DCC model, developed initially for financial time series, has been used by several researchers in finance and neuroscience. In finance, several studies used the method to investigate Google search interest and financial market behaviors [27][28][29]. In neuroscience, Lindquist et al. used the DCC model to study the time-varying correlation among several brain signals in functional magnetic resonance imaging (fMRI). The authors concluded that the DCC model better captured time-varying correlations as it minimizes random noise in the estimations 26. We believe the DCC model can also be used in the health sciences to capture the time-varying relationship between symptom search and new case emergence. We aim to present DCC as a model that better fits the time-lagged nature of our data set and to compare its viability against the sliding window correlation method in studying the relationship between searches for fever, cough, and dyspnea on GT and new cases in Turkey, Italy, Spain, France, and the UK.

Methods

Data. Google search interest trends are calculated by dividing the number of queries of interest by the total number of queries for all search terms over the same time and region. Each query share is normalized on a scale of 0 to 100, with 100 representing the share's maximum value for the period and region selected. The scaled query share values are plotted daily, generating a time series. Search terms included pulmonary symptoms, e.g., fever, cough, and dyspnea, as previously reported to be associated with COVID-19 infection 15. Searches for these terms covered Turkey, Italy, Spain, France, and the United Kingdom (UK), the European countries most affected by the COVID-19 pandemic. We focused on these countries because of differences in geographic location, culture, and health systems. Furthermore, the first wave of the pandemic presented at different times across these countries. At the same time, similar precautionary measures, such as the shutdown of all schools and universities, the closure of museums, cultural centers, cinemas, theatres, and pubs, and the suspension of international flights, were undertaken in all countries at approximately the same time 30,31. Google searches for pulmonary symptoms were obtained with R x64 4.0.2 (R: A Language and Environment for Statistical Computing) using the "gtrendsR" package for the dates between January 1 and August 31, 2020. Search terms were determined in Turkish and were later translated to the relevant languages (Italian, French, Spanish, and English) via Google Translate, and then checked for accuracy by native speakers. We used "fever", "cough" and "dyspnea" or "shortness of breath" as search terms for pulmonary symptoms ("ateş", "öksürük", "nefes darlığı" for Turkish; "febbre", "tosse" and "dyspnée" for Italian; "fièvre", "toux" and "essoufflement" for French; "fiebre", "tosse" and "dyspnea" for Spanish). Each term was searched with "all categories" selected for each particular country. The search was conducted on September 1, 2020. We obtained new case data for each country from the WHO COVID-19 database 3.
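A minimal sketch of this retrieval step is shown below. It assumes the gtrendsR package interface (a gtrends() call with keyword, geo and time arguments returning an interest_over_time data frame); the object names are ours and the call is illustrative rather than a reproduction of the authors' script.

```r
# Sketch of the Google Trends retrieval described above (assumes the gtrendsR
# package; country code and search terms follow the paper's description).
library(gtrendsR)

terms_tr <- c("ateş", "öksürük", "nefes darlığı")   # fever, cough, dyspnea in Turkish
period   <- "2020-01-01 2020-08-31"

trends_tr <- lapply(terms_tr, function(kw) {
  gtrends(keyword = kw, geo = "TR", time = period)$interest_over_time
})
names(trends_tr) <- c("fever", "cough", "dyspnea")

# Daily 0-100 search interest for "fever" in Turkey:
head(trends_tr$fever[, c("date", "hits")])
```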
Statistical analysis. An initial check of the raw data revealed very high fluctuations and time lags between symptoms and new cases (see Fig. 1). Previous studies used 3- to 7-day moving averages 20,21,32 to transform the data. We analyzed our data using various moving averages ranging from 3 to 7 days to deal with the high fluctuations and observed that five days was most appropriate to smooth the data. Next, we shifted symptom search results forward to capture the time lag between symptom searches and new case reports. We realized that each symptom in each country needed a unique time period, and we used the RMSE approach to determine the best-fit period for each symptom in each country: symptoms were shifted forward until the minimum RMSE was observed. We used the sliding windows correlation offered by Asseo et al., who selected a time frame of 31 days and rolled the correlation forward one day at a time 25. We deployed the same method and calculated sliding window correlations for the raw, moving-averaged, and shifted data. We then carried out the DCC model to understand the dynamic correlation between Google search interest for the three identified pulmonary symptoms and new cases. The DCC method was originally proposed by Engle.

Results

We initially checked the normality of the data using the Shapiro-Wilk test and observed that not all the series were normally distributed. Therefore, we used Spearman correlations instead of Pearson correlations for the rest of our analyses. The Spearman correlations for the raw data and for the five-day moving-averaged and shifted data are presented in Table 1. When we examined the raw data, we found correlation coefficients to be weak (less than 50%) and/or non-significant for most symptom searches and cases. We first transformed the observations to a five-day moving average, then shifted symptoms separately by their RMSE-derived lags. We found that Spearman correlation coefficients between symptoms and new cases increased to moderate levels and became significant at p < 0.01 (see Table 1 for symptom-specific p-values). Figure 2 shows the RMSE values for fever, cough, and dyspnea in the five countries; the arrows mark the optimum shift days where the RMSE values are minimum. Table 2 lists the symptom search shifts for each symptom in each country. We observed that the optimum time lag for each symptom ranged from 8 to 24 days. These findings show that search terms on GT may need to be shifted separately to better fit the nature of the phenomenon at hand. Table 3 shows that the constant correlation hypothesis should be rejected for all of the series at p < 0.01; we therefore suggest using a time-varying correlation to monitor the co-movement of symptom search and new case emergence. Table 4 reports the DCC coefficients for the relationship between pulmonary symptom searches and new case emergence in Turkey, Italy, Spain, France, and the UK. We found significant, moderate to high DCC correlations. The degree of correlation of pulmonary symptom search differed across symptoms and countries. The findings demonstrate that the null hypothesis of constant correlation should be rejected (p < 0.01). Looking at the DCC and sliding window correlation results with raw and MA-shifted data for the fever, cough, and dyspnea symptoms, we found that: first, the DCC model proved a better fit than sliding windows correlation models during the first wave of the pandemic; second, high-fit periods for DCC coefficients (r ≳ 0.90) differed across countries.
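The preprocessing and analysis pipeline described in the Methods above (five-day moving average, symptom-specific forward shift chosen by minimising RMSE against the new-case series, sliding-window Spearman correlations, and a DCC fit) can be sketched in a few lines of R. This is an illustrative sketch with made-up toy series (search, cases), not the authors' code, and the rmgarch specification at the end is only one reasonable default for a DCC-GARCH(1,1) model.

```r
# Toy daily series standing in for GT interest and new cases (placeholders).
set.seed(1)
n.days <- 244
cases  <- rpois(n.days, lambda = 50)
search <- c(rnorm(15, 50, 5), head(cases, -15)) + rnorm(n.days, 0, 5)

ma5 <- function(z) stats::filter(z, rep(1/5, 5), sides = 2)   # 5-day moving average

best_shift <- function(search, cases, max.shift = 30) {
  rmse <- sapply(1:max.shift, function(k) {
    s <- c(rep(NA, k), head(search, -k))                       # shift search forward by k days
    sqrt(mean((s - cases)^2, na.rm = TRUE))
  })
  which.min(rmse)                                              # symptom-specific optimum lag
}

s.ma <- as.numeric(ma5(search)); c.ma <- as.numeric(ma5(cases))
k    <- best_shift(s.ma, c.ma)
s.sh <- c(rep(NA, k), head(s.ma, -k))

# Static Spearman and 31-day sliding-window correlations:
cor(s.sh, c.ma, method = "spearman", use = "complete.obs")
roll <- sapply(1:(length(c.ma) - 30), function(i)
  cor(s.sh[i:(i + 30)], c.ma[i:(i + 30)], method = "spearman", use = "complete.obs"))

# One possible DCC-GARCH(1,1) fit via rmgarch (assumed default settings):
library(rmgarch)
u   <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                  mean.model = list(armaOrder = c(1, 0)))
dc  <- dccspec(uspec = multispec(replicate(2, u)), dccOrder = c(1, 1),
               distribution = "mvnorm")
y   <- na.omit(cbind(symptom = s.sh, cases = c.ma))
fit <- dccfit(dc, data = y)
rcor(fit)[1, 2, ]                                              # time-varying correlation path
```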
For fever, the high-fit period is April 10-May 14 for Turkey, March 31-June 5 for Italy, April 2-June 4 for Spain, April 14-May 7 for France, and April 18-June 21 for the UK (see Fig. 3 for details). For cough, the high-fit period (r ≳ 0.90) is April 10-May 14 for Turkey, March 31-June 5 for Italy, April 2-June 5 for Spain, April 14-May 7 for France, and April 18-June 18 for the UK; the cough symptom search fit is highest in the UK (see Fig. 4 for details). For dyspnea, the DCC coefficient fluctuates after the pandemic's first wave. The high-fit period (r ≳ 0.90) is April 10-June 4 for Turkey, March 31-June 5 for Italy, April 2-June 5 for Spain, April 14-May 7 for France, and April 18-June 16 for the UK (see Fig. 5 for details).

Discussion

This study shows that, for three pulmonary symptoms (fever, cough, and dyspnea), Google search interest is correlated with new COVID-19 cases under a DCC model. We also demonstrated that the DCC model's performance is better than the sliding windows correlation for data from the first wave of the COVID-19 pandemic. Our findings suggest that monitoring Google interest using GT would provide valuable information for producing preventive and intervention-related programming. Previously, Asseo et al. examined the relationship between Google searches for the smell and taste loss symptoms of COVID-19 and new COVID-19 cases. They employed a sliding window correlation for time frames of one month between March 4 and August 25, 2020. The authors could not find stable correlations between taste and smell loss searches and new cases for Italy and the US during the pandemic's first wave. However, they observed that this link fluctuates over time and concluded that the correlation between searches for novel symptoms of an infectious disease and the number of new cases fluctuates and decreases over time 25. We found similar fluctuations; however, we observed much less fluctuation in our data set during the pandemic's first wave. Fluctuations did increase, and correlations decreased, as the western hemisphere moved into the summer period. Lippi et al. investigated the relationship between the volume of Google searches for the most frequent symptoms (fever, cough, and dyspnea) of SARS-CoV-2 infection and new cases using the Spearman correlation test. They did not find significant correlations between cough or fever and new cases, but did detect a significant correlation for dyspnea. However, the correlation between newly diagnosed COVID-19 cases and the "cough" and "fever" search terms became statistically significant with a 3-week delay. That study used a standard three-week time period and failed to see correlations for several symptoms 15. We, by contrast, used symptom- and country-specific time periods ranging from 8 to 24 days. This dynamic approach helped reveal the correlation in a more fine-tuned manner. Most studies used conventional correlation methods and could not observe the time-varying co-movement between GT symptom search and new case emergence 17,20. It is important to detect correlations in different periods, such as the wave periods of a pandemic. The time-varying correlation approach allows this co-movement to be monitored in different periods and provides multiple correlation indexes. The DCC model is one such approach and performs better than other time-varying correlation approaches such as the sliding windows correlation 35. Thus, we used DCC to find time-varying correlations between pulmonary symptoms and new COVID-19 cases.
Previous studies selected a fixed shift day for all countries 14,17,19, but the research presented here shows that a heuristic approach (RMSE) results in different shift days for each country's GT symptom. Our results illustrate that Italy, France, and Spain have a shorter time delay, ranging from 8 to 21 days, while the UK and Turkey have a range of 18 to 24 days from symptom search to new case report. These differences may be due to variations in people's web-search reactions to pulmonary symptoms or in the procurement and processing of PCR test results in each country. Attention paid to higher-risk symptoms such as dyspnea may indicate the need for hospitalization, which is of distinct importance. DCC analysis shows that the fever, cough, and dyspnea symptoms correlate well with new cases during the first wave of the pandemic. However, by May 2020, many fluctuations in the correlations begin to appear. We suggest that these fluctuations can result from various causes independent of the modelling used: symptoms may become too well known, testing of non-symptomatic cases may have become more common, or the definition of new case reporting may have changed. New symptoms of interest related to COVID-19 may also have emerged, such as loss of taste, backache, etc. Constant monitoring of public interest in GT is very important for formulating relevant search models. Yet we suggest that the modeling approach will remain relevant. One limitation of this study is the terms selected to carry out the searches: local or colloquial uses of terms were not included, which may have affected the results by limiting the scope. The second limitation is in the selection of the data sources. Though Google is the most used web-based search engine, the use of different search tools such as Yahoo, MSN, or Yandex may have led to more accurate assessments of public interest. New case reports were based on the WHO database, and the case reporting protocols may have changed during the pandemic. The 2020 COVID-19 pandemic is the largest global public health challenge of this century. Our findings reveal that pulmonary symptom queries are crucial early signs for emerging epidemics. For example, dyspnea search interest may signal potential hospitalization and the need for intensive care. Policymakers are advised to pay attention to and utilize these search interests to plan preventive and/or intervention strategies. Monitoring search terms may also help in understanding the populace's lay beliefs and worries, revealing the need for further guidance. Our results may be of particular importance as we approach the vaccination period with an already existing anti-vaccination movement in place.
spBayes for large univariate and multivariate point-referenced spatio-temporal data models

In this paper we detail the reformulation and rewrite of core functions in the spBayes R package. These efforts have focused on improving computational efficiency, flexibility, and usability for point-referenced data models. Attention is given to algorithm and computing developments that result in improved sampler convergence rate and efficiency by reducing parameter space; decreased sampler run-time by avoiding expensive matrix computations; and increased scalability to large datasets by implementing a class of predictive process models that attempt to overcome computational hurdles by representing spatial processes in terms of lower-dimensional realizations. Beyond these general computational improvements for existing model functions, we detail new functions for modeling data indexed in both space and time. These new functions implement a class of dynamic spatio-temporal models for settings where space is viewed as continuous and time is taken as discrete.

Introduction

The scientific community is moving into an era where open-access data-rich environments provide extraordinary opportunities to understand the spatial and temporal complexity of processes at broad scales. Unprecedented access to spatial data is a result of investments to collect data for regulatory, monitoring, and resource management objectives, and of technological advances in spatially-enabled sensor networks along with geospatial information storage, analysis, and distribution systems. These data sources are increasingly diverse and specialized, e.g., computer model outputs, monitoring station instruments, remotely located sensors, and georeferenced field measurements. Across scientific fields, researchers face the challenge of coupling these data with imperfect models to better understand variability in their system of interest. The inference garnered through these analyses often supports decisions with important economic, environmental, and public health implications; therefore, it is critical to correctly estimate inferential uncertainty. However, developing modeling frameworks capable of accounting for various sources of uncertainty is not a trivial task; massive datasets from multiple sources with complex spatial dependence structures only serve to aggravate the challenges.

Proliferation of spatial data has spurred considerable development in statistical modeling; see, for example, the books by Cressie (1993), Chilés and Delfiner (2012), Møller and Waagepetersen (2003), Schabenberger and Gotway (2004), Wackernagel (2003), Diggle and Ribeiro (2007) and Cressie and Wikle (2011) for a variety of methods and applications. The statistical literature acknowledges that spatial and temporal associations are captured most effectively using models that build dependencies in different stages or hierarchies. Hierarchical models are especially advantageous with datasets having several lurking sources of uncertainty and dependence, where they can estimate much richer models with less stringent assumptions than traditional modeling paradigms. These models follow the Bayesian framework of statistical inference (see, e.g., Carlin and Louis 2011; Gelman, Carlin, Stern, and Rubin 2004), where analysis uses sampling from the posterior distributions of model parameters.
Computational advances with regard to Markov chain Monte Carlo (MCMC) methods have contributed enormously to the popularity of hierarchical models in a wide array of disciplines (e.g., Gilks, Richardson, and Spiegelhalter 1996; Robert and Casella 2004), and spatial modeling is no exception (see, e.g., Banerjee, Carlin, and Gelfand 2004). In the realm of spatial statistics, hierarchical models have been widely applied to analyze both areally referenced as well as point-referenced or geostatistical data. For the former, a class of models known as Conditionally Autoregressive (CAR) models has become very popular as they are easily implemented using MCMC methods such as the Gibbs sampler. In fact, these models are somewhat naturally suited for the Gibbs sampler, which draws samples from conditional distributions that are fully specified by the CAR models. Their popularity has increased in no small measure due to their automated implementation in the OpenBUGS software package, which offers a flexible and user-friendly interface to construct multilevel models that are implemented using a Gibbs sampler. This is performed by identifying a multilevel model with a directed acyclic graph (DAG) whose nodes form the different components of the model and allow the language to identify the full conditional distributions that need to be updated. OpenBUGS is an offshoot of the BUGS (Bayesian inference Using Gibbs Sampling) project and the successor of the WinBUGS software.

From an automated implementation perspective, the challenges are somewhat greater for point-referenced models. First, expensive matrix computations are required that can become prohibitive with large datasets. Second, routines to fit unmarginalized models are less suited for direct updating using a Gibbs sampler in the BUGS paradigm and result in slower convergence of the chains. Third, investigators often encounter multivariate spatial datasets with several spatially dependent outcomes, whose analysis requires multivariate spatial models that involve matrix computations that are poorly implemented in BUGS. These issues have, however, started to wane with the delivery of relatively simpler R (R Core Team 2013) packages via the Comprehensive R Archive Network (CRAN) (http://cran.r-project.org) that help automate Bayesian methods for point-referenced data and diagnose convergence. The Analysis of Spatial Data (Bivand 2013) and Handling and Analyzing Spatio-Temporal Data (Pebesma 2013) CRAN Task Views provide a convenient way to identify packages that offer functions for modeling such data. These packages are generally listed under the Geostatistics section of the Task View. Here, the packages that fit Bayesian models include geoR (Ribeiro Jr. and Diggle 2012), geoRglm (Christensen and Ribeiro Jr. 2011), spTimer (Christensen and Ribeiro Jr. 2011), spBayes (Finley and Banerjee 2013), spate (Sigrist, Kuensch, and Stahel 2013), and ramps (Smith, Yan, and Cowles 2011). In terms of functionality, spBayes offers users a suite of Bayesian hierarchical models for Gaussian and non-Gaussian univariate and multivariate spatial data as well as dynamic Bayesian spatio-temporal models.

Our initial development of spBayes (Finley, Banerjee, and Carlin 2007) provided functions for modeling Gaussian and non-Gaussian univariate and multivariate point-referenced data.
These hierarchical Bayesian spatial process models, implemented through MCMC methods, offered increased flexibility to fit models that would be infeasible with classical methods within inappropriate asymptotic paradigms. However, with this increased flexibility come substantial computational demands. Estimating these models involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations, rendering such models infeasible for large spatial datasets. Through spBayes version 0.2-4, released on CRAN on 4/24/12, very little attention was given to addressing these computational challenges. As a result, fitting models with more than a few hundred observations was very time consuming, on the order of hours to fit models with ∼1,000 locations. spBayes version 0.3-7 (CRAN 6/1/13) comprises a substantial reformulation and rewrite of core functions for model fitting, with a focus on improving computational efficiency, flexibility, and usability. Among other improvements, this and subsequent versions offer: i) improved sampler convergence rate and efficiency by reducing parameter space; ii) decreased sampler run-time by avoiding expensive matrix computations; and iii) increased scalability to large datasets by implementing a class of predictive process models that attempt to overcome computational hurdles by representing spatial processes in terms of lower-dimensional realizations. Beyond these general computational improvements for existing models, new functions were added to model data indexed in both space and time. These functions implement a class of dynamic spatio-temporal models for settings where space is viewed as continuous and time is taken as discrete. The subsequent sections highlight the fundamentals of the models now implemented in spBayes.

Bayesian Gaussian spatial regression models

We work with a generic hierarchical linear mixed model whose joint distribution is

p(θ) × N(β | µ_β, Σ_β) × N(α | 0, K(θ)) × N(y | Xβ + Z(θ)α, D(θ)),   (1)

where y is an n × 1 vector of possibly irregularly located observations, X is a known n × p matrix of regressors (p < n), K(θ) and D(θ) are families of r × r and n × n covariance matrices, respectively, and Z(θ) is n × r with r ≤ n, all indexed by a set of unknown process parameters θ. The r × 1 random vector α ∼ N(0, K(θ)) and the p × 1 slope vector β ∼ N(µ_β, Σ_β), where µ_β and Σ_β are known. The hierarchy is completed by assuming θ ∼ p(θ), a proper prior distribution. The Gaussian spatial models in spBayes emerge as special cases of (1), as we will see later. Bayesian inference is carried out by sampling from the posterior distribution of {β, α, θ}, which is proportional to (1).

Below, we provide some details behind Bayesian inference for (1). This involves sampling the parameters θ, β and α from their marginal posterior distributions and carrying out subsequent predictions. Direct computations usually entail inverting and multiplying dense matrices and also computing determinants. In software development, care is needed to avoid redundant operations and ensure numerical stability. Therefore, in the subsequent sections we describe how we use Cholesky factorizations, solve triangular systems, and minimize expensive matrix operations (e.g., dense matrix multiplications) to perform all the computations.
Sampling the process parameters

Sampling from (1) employs MCMC methods, in particular Gibbs sampling and random-walk Metropolis steps (e.g., Robert and Casella 2004). For faster convergence, we integrate out β and α from the model and first sample θ from its marginal posterior, whose Gaussian kernel has covariance matrix Σ_{y|θ} = XΣ_βX′ + Z(θ)K(θ)Z(θ)′ + D(θ). This matrix needs to be constructed for every update of θ. Usually D(θ) is diagonal and XΣ_βX′ is fixed, so the computation involves the matrix Z(θ)K(θ)Z(θ)′. Assuming that Z(θ) and K(θ) are computationally inexpensive to construct for each θ, Z(θ)K(θ)Z(θ)′ requires rn² flops (floating point operations). We adopt a random-walk Metropolis step with a multivariate normal proposal (of the same dimension as the number of parameters in θ) after transforming the parameters to have support over the entire real line. This involves evaluating the log target density (2), which comprises a log-determinant and a quadratic form Q(θ). We first compute L = chol(Σ_{y|θ}), where chol(Σ_{y|θ}) returns the lower-triangular Cholesky factor L of Σ_{y|θ}; this involves O(n³/3) flops. Next, we obtain u = trsolve(L, y − Xµ_β), which solves the triangular system Lu = y − Xµ_β; this involves O(n²) flops, and Q(θ) = u′u requires another 2n flops. The log-determinant in (2) is evaluated as 2 Σ_{i=1}^{n} log l_ii, where the l_ii are the diagonal entries of L. Since L has already been obtained, the log-determinant requires another n steps. Therefore, the Cholesky factorization dominates the work and computing (2) is achieved in O(n³) flops.

The slope vector β is updated from its full conditional distribution with α integrated out, and the computations proceed similarly to the above. We first evaluate L = chol(Σ_{y|β,θ}) and then obtain [v : U] = trsolve(L, [y : X]), so that Lv = y and LU = X. Next, we evaluate the required quantities from the w_ii's and l_ii's, the diagonal elements of W and L respectively. The number of flops is again of cubic order in n. Importantly, our strategy avoids computing inverses: we use Cholesky factorizations and solve only triangular systems. If n is not large, say ∼10², this strategy is feasible. The use of efficient numerical linear algebra routines fetches a substantial reduction in computing time (see Section 3). Our implementation employs matrix-vector multiplication and avoids dense matrix-matrix multiplications wherever possible. Multiplications involving diagonal matrices are programmed using closed-form expressions, and inverses are obtained by solving triangular linear systems after obtaining a Cholesky decomposition. However, when n ∼ 10³ or higher, the computation becomes too onerous for practical use and alternative updating strategies are required; we address this in Section 2.3.

Mapping point or interval estimates of the spatial random effects is often helpful in identifying missing regressors and/or building a better understanding of model adequacy. Given the post burn-in samples of θ, the coefficients β and spatial effects α are recovered by composition sampling from Gaussian full conditional distributions; the vector b entering these distributions is computed analogously as for β, and the draws are repeated for each retained sample k = 1, 2, . . ., M. For computing B, one could proceed as for β, but that would involve chol(K(θ)), which may become numerically unstable for certain covariance functions (e.g., the Gaussian or the Matérn with large ν). For robust software performance we define G(θ)⁻¹ = Z(θ)′Σ_{y|α,θ}⁻¹Z(θ) and utilize the identity of Henderson and Searle (1981). We remark that estimating the spatial effects involves Cholesky factorizations of n × n positive definite linear systems. The above steps ensure numerical stability, but they can become computationally prohibitive when n becomes large. While some savings accrue from executing the above steps only for the post burn-in samples, for n on the order of thousands we recommend the low-rank spatial models offered by spBayes (see Sections 2.3 and 4.2).
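The Cholesky-plus-triangular-solve strategy just described is easy to mimic in R itself. The sketch below is ours, not package code: base R's chol() returns the upper factor, so we transpose to obtain L, and then evaluate the Gaussian log-likelihood term without forming any explicit inverse. The toy covariance and parameter values are arbitrary.

```r
# Evaluate -0.5*log|Sigma| - 0.5*(y - X mu_beta)' Sigma^{-1} (y - X mu_beta)
# using one Cholesky factorisation and one triangular solve (no explicit inverse).
log_lik_term <- function(y, X, mu.beta, Sigma) {
  L <- t(chol(Sigma))                        # lower-triangular factor, O(n^3/3) flops
  u <- forwardsolve(L, y - X %*% mu.beta)    # solves L u = y - X mu.beta, O(n^2) flops
  logdet <- 2 * sum(log(diag(L)))            # log|Sigma| from the diagonal of L
  -0.5 * logdet - 0.5 * sum(u^2)             # quadratic form Q(theta) = u'u
}

# Toy check with an exponential covariance plus nugget:
set.seed(1)
n <- 50; X <- cbind(1, rnorm(n)); mu.beta <- c(1, 5)
Dmat  <- as.matrix(dist(matrix(runif(2 * n), n, 2)))
Sigma <- 2 * exp(-6 * Dmat) + diag(1, n)
y <- X %*% mu.beta + t(chol(Sigma)) %*% rnorm(n)
log_lik_term(y, X, mu.beta, Sigma)
```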
The special case of low-rank models

The major computational load in estimating (1) arises from unavoidable Cholesky decompositions of dense n × n positive definite matrices. The required number of flops is of cubic order and must be executed in each iteration of the MCMC. For example, when a specific form of (1) is used to analyze a dataset comprising n = 2,000 locations and p = 2 predictors, each iteration requires ∼0.3 seconds of CPU time (see Section 4.2.1). Marginalization, as described in Section 2.1, typically requires fewer iterations to converge. But even if 10,000 iterations are required to deliver full inferential output, the associated CPU time is ∼50 minutes. Clearly, large spatial datasets demand specialized models.

One strategy is to specify Z(θ) with r << n. Such models are known as low-rank models. Specific choices for Z(θ) will be discussed later; spBayes models Z(θ) using the predictive process (see Section 4.2). To elucidate how savings accrue in low-rank models, consider the marginal Gaussian likelihood obtained by integrating out α from (1), N(y | Xβ, Σ_{y|β,θ}) with Σ_{y|β,θ} = Z(θ)K(θ)Z(θ)′ + D(θ). We could have integrated out β too, as in Section 2.1, but there is apparently no practical advantage to that. For the low-rank model, each iteration of the Gibbs sampler updates β and θ from their full conditional distributions.

The β is drawn from N(Bb, B), where b and B are as in (4). The strategy in Section 2.2 would be expensive for large n because computing B, though itself p × p, involves a Cholesky factorization of the n × n matrix Σ_{y|β,θ} for every new update of θ. Instead, we utilize the Sherman-Woodbury-Morrison formula, which expresses Σ_{y|β,θ}⁻¹ = (D(θ) + Z(θ)K(θ)Z(θ)′)⁻¹ in terms of the inverse of the diagonal matrix D(θ) and the inverse of an r × r matrix. We perform these operations for each iteration in the Gibbs sampler, using the current update of θ, and sample β as in (5). We update the process parameters θ using a random-walk Metropolis step with a target log-density whose log-determinant and quadratic form are evaluated from the d_ii(θ) and t_ii, the diagonal entries of D(θ) and T respectively. Once the Gibbs sampler has converged and we have obtained posterior samples for β and θ, posterior samples for α can be obtained following closely the description in Section 2.2. In fact, since the posterior samples of β are already available, we can draw α from its full-conditional distribution, given both β and θ. This amounts to replacing µ_β with β and Σ_{y|α,θ} with D(θ) in (6). The algorithm now proceeds exactly as in Section 2.2, and we achieve computational savings as D(θ) is usually cheaper to handle than Σ_{y|α,θ}.

Spatial predictions

To predict a random t × 1 vector y₀ associated with a t × p matrix of predictors X₀, we assume that (y′, y₀′)′ is jointly Gaussian, where C₁₂(θ) is the n × t cross-covariance matrix between y and y₀, and C₂₂(θ) is the variance-covariance matrix of y₀. How these are constructed is crucial for ensuring a legal probability distribution or, equivalently, a positive-definite variance-covariance matrix for (y′, y₀′)′ in (10). A legitimate joint distribution supplies a conditional distribution p(y₀ | y, β, θ), which is normal with mean µ_p and variance Σ_p. Bayesian prediction proceeds by sampling from the posterior predictive distribution p(y₀ | y): for each posterior sample of {β, θ}, we draw a corresponding y₀ ∼ N(µ_p, Σ_p). This produces samples from the posterior predictive distribution. Observe that the posterior predictive computations involve only the retained MCMC samples after convergence. Furthermore, most of the ingredients needed to compute µ_p and Σ_p have already been obtained while updating the model parameters.
Low-rank models, where r << n, are again cheaper here. The operations are dominated by the computation of C₁₂(θ)′C₁₁(θ)⁻¹C₁₂(θ), which can be evaluated as U′U − V′V, where U = D(θ)^(−1/2)C₁₂(θ), V = HU and H is as in (7). This avoids direct evaluation of C₁₁(θ)⁻¹ and avoids redundant matrix operations. Updating the y₀^(k)'s requires a Cholesky factorization of Σ_p, which is t × t and can be expensive if t is large. In most practical settings, it is sufficient to take t = 1 and perform independent individual predictions. However, if the joint predictive distribution is sought, say when full inference is desired for a function of y₀, then the predictive step is significantly cheaper if we use the posterior samples of α as well. Posterior predictive sampling then amounts to drawing y₀ from a Gaussian distribution whose covariance involves D(θ), which is cheap because D(θ) is usually diagonal. Low-rank models are especially useful here, as posterior sampling for α is much cheaper with r << n.

Computing environment

The MCMC algorithms described in the preceding sections are implemented in spBayes functions. These functions are written in C++ and leverage R's Foreign Language Interface to call Fortran BLAS (Basic Linear Algebra Subprograms, see Blackford, Demmel, Dongarra, Duff, Hammarling, Henry, Heroux, Kaufman, Lumsdaine, Petitet, Pozo, Remington, and Whaley 2001) and LAPACK (Linear Algebra Package, see Anderson, Bai, Bischof, Blackford, Demmel, Dongarra, Du Croz, Greenbaum, Hammarling, McKenney, and Sorensen 1999) libraries for efficient matrix computations. Table 1 offers a list of key BLAS and LAPACK functions used to implement the MCMC samplers. Referring to Table 1 and following from Section 2.1, chol corresponds to dpotrf, and trsolve can be either dtrsv or dtrsm, depending on the form of the equation's right-hand side. As noted previously, we try to use dense matrix-matrix multiplication, i.e., calls to dgemm, sparingly due to its computational overhead. Often, careful formulation of the problem can result in fewer calls to dgemm and other expensive BLAS level 3 and LAPACK functions.

Models offered by spBayes

All the models offered by spBayes emerge as special instances of (1). The matrix D(θ) is always taken to be diagonal or block-diagonal (for multivariate models). The spatial random effects α are assumed to arise from a partial realization of a spatial process, and the spatial covariance matrix K(θ) is constructed from the covariance function specifying that spatial process. To be precise, if {w(s) : s ∈ D} is a Gaussian spatial process with positive definite covariance function C(s, t; θ) (see, e.g., Bochner 1955) and if {s₁, s₂, . . ., s_r} is a set of any r locations in D, then α = (w(s₁), w(s₂), . . ., w(s_r))′ and K(θ) is its r × r covariance matrix.

Full rank univariate Gaussian spatial regression

For Gaussian outcomes, geostatistical models customarily regress a spatially referenced dependent variable, say y(s), on a p × 1 vector of spatially referenced predictors x(s) (with an intercept) as

y(s) = x(s)′β + w(s) + ε(s),   (12)

where s ∈ D ⊆ ℜ² is a location. The residual comprises a spatial process, w(s), and an independent white-noise process, ε(s), that captures measurement error or micro-scale variation. For any collection of n locations, say S = {s₁, . . ., s_n}, we assume the independent and identically distributed ε(s_i)'s follow a Normal distribution N(0, τ²), where τ² is called the nugget. The w(s_i)'s provide local adjustment (with structured dependence) to the mean, capturing the effect of unmeasured or unobserved regressors with spatial pattern.
Customarily, one assumes stationarity, which means that C(s, t) = C(s − t) is a function of the separation of the sites only. Isotropy goes further and specifies C(s, t) = C(‖s − t‖), where ‖s − t‖ is the Euclidean distance between the sites s and t. We further specify C(s, t) = σ²ρ(s, t; φ) in terms of spatial process parameters, where ρ(·; φ) is a correlation function and φ includes parameters quantifying the rate of correlation decay and the smoothness of the surface w(s); Var(w(s)) = σ² represents a spatial variance component. Apart from the exponential, ρ(s, t; φ) = exp(−φ‖s − t‖), and the powered exponential family, ρ(s, t; φ) = exp(−φ‖s − t‖^α), spBayes also offers users the Matérn correlation function

ρ(s, t; φ) = (1 / (2^(ν−1) Γ(ν))) (φ‖s − t‖)^ν K_ν(φ‖s − t‖).   (13)

Here φ = {φ, ν}, with φ controlling the decay in spatial correlation and ν controlling process smoothness. Specifically, if ν lies between the positive integers m and (m + 1), then the spatial process w(s) is mean-square differentiable m times, but not m + 1 times. Also, Γ is the usual Gamma function while K_ν is a modified Bessel function of the second kind with order ν. The hierarchical model built from (12) emerges as a special case of (1), where y is n × 1 with entries y(s_i), X is n × p with the x(s_i)′ as its rows, α is n × 1 with entries w(s_i), Z(θ) = I_n, K(θ) is n × n with entries C(s_i, s_j; θ), and D(θ) = τ²I_n. We denote by θ the set of process parameters in K(θ) and D(θ). Therefore, with the Matérn covariance function in (13), we define θ = {σ², φ, ν, τ²}.

Example

The marginalized specification of (12) is implemented in the spLM function. The primary output of this function is posterior samples of θ. As detailed in the preceding sections, sampling is conducted using a Metropolis algorithm. Hence, users must specify Metropolis proposal variances, i.e., tuning values, and monitor acceptance rates for these parameters. Alternately, an adaptive MCMC Metropolis-within-Gibbs algorithm, proposed by Roberts and Rosenthal (2009), is available for a more automated function call. A key advantage of the first-stage Gaussian model is that samples from the posterior distribution of β and w can be recovered in a posterior predictive fashion, given samples of θ. In practice we often choose to use only a subset of the post burn-in θ samples to collect corresponding samples of β and w. This composition sampling, detailed in Section 2.2, is conducted by passing a spLM object to the spRecover function.

An analysis of a synthetic dataset serves to illustrate the use of the spLM and spRecover functions. The data are formed by drawing 200 observations from (12) within a unit square domain. The model mean includes an intercept and a covariate with associated coefficients β₀ = 1 and β₁ = 5, respectively. Model residuals are generated using an exponential spatial correlation function, with τ² = 1, σ² = 2 and φ = 6. This choice of φ corresponds to an effective spatial range of 0.5 distance units; for our purposes, the effective spatial range is the distance at which the correlation equals 0.05. Figure 1 provides a surface plot of the observed spatial random effects along with the locations of the 200 observations.
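The synthetic data just described can be generated directly in base R. The sketch below follows the stated settings (200 locations on the unit square, intercept and one covariate with β = (1, 5)′, exponential covariance with σ² = 2, φ = 6, and nugget τ² = 1); it illustrates the data-generating model (12) and is not the code used in the paper.

```r
# Simulate from model (12): y(s) = x(s)'beta + w(s) + eps(s), exponential covariance.
set.seed(1)
n      <- 200
coords <- cbind(runif(n), runif(n))                 # locations on the unit square
X      <- cbind(1, rnorm(n))                        # intercept plus one covariate
beta   <- c(1, 5)
sigma.sq <- 2; tau.sq <- 1; phi <- 6                # effective range -log(0.05)/phi = 0.5

Dmat <- as.matrix(dist(coords))
C    <- sigma.sq * exp(-phi * Dmat)                 # exponential spatial covariance
w    <- drop(t(chol(C)) %*% rnorm(n))               # spatial random effects w ~ N(0, C)
y    <- drop(X %*% beta + w + rnorm(n, sd = sqrt(tau.sq)))
```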
All spLM function arguments, and those of the other functions highlighted in this paper, are defined in the package manual available on CRAN. Here we illustrate only some of the possible argument specifications. In addition to a symbolic model statement, the spLM function requires the user to specify: i) the number of MCMC samples to collect; ii) a prior distribution, with associated hyperpriors, for each parameter; iii) starting values for each parameter; and iv) tuning values for each parameter, unless the adaptive MCMC option is chosen via the amcmc argument.

For this analysis, we assume an inverse-Gamma (IG) distribution for the variance parameters, τ² and σ². These distributions are assigned shape and scale hyperpriors equal to 2 and 1, respectively. With a shape of 2, the mean of the IG is equal to the scale and the variance is infinite. In practice, the choice of the scale value can be guided by exploratory data analysis using a variogram or similar tools that provide estimates of the spatial and nonspatial variances. The spatial decay parameter φ is assigned a uniform (U) prior with support that covers the extent of the domain. Here, we assume the effective spatial range lies between 0.1 and 1 distance units; working from our definition of the effective spatial range, this corresponds to the prior U(−log(0.05)/1, −log(0.05)/0.1) for φ. In the code below, we define these priors along with the other necessary arguments that are passed to spLM. The resulting posterior samples of θ are summarized using the coda package's summary function, and each parameter's posterior median and 95% credible interval (CI) is printed. (The printed run report notes 2 covariates, including the intercept, and the exponential spatial correlation model.) The previous implementation updated β from its full conditional distribution in each MCMC iteration and sampled θ using a Metropolis algorithm that did not take advantage of triangular solvers and the other efficient computational approaches detailed in the preceding sections. For comparison, the current version of spLM generates the same number of samples in 0.069 minutes.

Low-rank predictive process models

spBayes offers low-rank models that allow the user to choose and fix r << n within a hierarchical linear mixed model framework such as (1). Given the same modeling scenario as in Section 4.1, the user chooses r knot locations, say S* = {s*₁, s*₂, . . ., s*_r}, and defines the process

w̃(s) = c(s; θ)′C*(θ)⁻¹w*,   (14)

where w* = (w(s*₁), . . ., w(s*_r))′, c(s; θ) is the r × 1 vector of covariances between w(s) and the w(s*_j)'s, and C*(θ) is the r × r covariance matrix of w*. Banerjee, Gelfand, Finley, and Sang (2008) call w̃(s) the predictive process. Replacing w(s) with w̃(s) in (12) yields the predictive process counterpart of the univariate Gaussian spatial regression model.
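For concreteness, the call below sketches the full-rank spLM fit described in the example above (Section 4.1.1) and notes how the knots argument turns it into its predictive process counterpart. Argument names follow the spBayes documentation; the starting and tuning values are arbitrary placeholders chosen here, and y, X and coords are the objects simulated in the previous sketch.

```r
library(spBayes)

n.samples <- 5000
fit <- spLM(y ~ X - 1, coords = coords, n.samples = n.samples,
            cov.model = "exponential",
            starting = list("sigma.sq" = 1, "tau.sq" = 1, "phi" = 6),
            tuning   = list("sigma.sq" = 0.1, "tau.sq" = 0.1, "phi" = 0.5),
            priors   = list("sigma.sq.IG" = c(2, 1),          # IG(shape, scale)
                            "tau.sq.IG"   = c(2, 1),
                            "phi.Unif"    = c(-log(0.05) / 1, -log(0.05) / 0.1)))

# Predictive process counterpart: add a knot grid, e.g. knots = c(5, 5, 0) for a
# 5 x 5 grid over the observed locations; modified.pp = TRUE gives the modified version.

# Recover beta and w by composition sampling on post burn-in samples of theta,
# then summarise the process parameters with coda.
burn.in <- 0.5 * n.samples
fit <- spRecover(fit, start = burn.in + 1)
summary(fit$p.theta.recover.samples)$quantiles
```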
The predictive process produces a low-rank model and can be cast into (1). For example, if we take α to be the r × 1 random vector with the w(s*_i)'s as its entries, then the predictive process counterpart of (12) is obtained from (1) with D(θ) = τ²I, K(θ) = C*(θ) and Z(θ) = C(θ)C*(θ)⁻¹, where C(θ) is n × r with entries given by the covariances between the w(s_i)'s and the w(s*_j)'s, and C*(θ) is the r × r covariance matrix of the w(s*_i)'s. When employing the computational strategy for generic low-rank models described in Section 2.3, an alternative, but equivalent, parametrization is obtained by letting K(θ) = C*(θ)⁻¹ and Z(θ) = C(θ). This has the added benefit of avoiding the computation of C*(θ)⁻¹, which, though not expensive for low-rank models, can become numerically unstable depending upon the choice of the covariance function. Now α ∼ N(0, C*(θ)⁻¹) is no longer a vector of process realizations over the knots, but it still is an r × 1 random vector with a legitimate probability law. If the spatial effects over the knots are desired, they can easily be obtained from the posterior samples of α and θ as C*(θ)α.

We also offer an improvement over the predictive process, which attempts to capture the residual from the low-rank approximation by adjusting for the residual variance (see, e.g., Finley, Sang, Banerjee, and Gelfand 2009). The difference between the spatial covariance matrices for the full rank model (12) and the low-rank model is C_w(θ) − Z(θ)K(θ)Z(θ)′, where C_w(θ) is the n × n covariance matrix of the spatial random effects for (12). The modified predictive process model approximates this "residual" covariance matrix by absorbing its diagonal elements into D(θ). Therefore, D(θ) = diag{C_w(θ) − Z(θ)K(θ)Z(θ)′} + τ²I_n, where diag(A) denotes the diagonal matrix formed from the diagonal entries of A. The remaining specifications for Z(θ), K(θ) and α in (1) remain the same as for the predictive process. We often write the modified predictive process as w̃_ε(s) = w̃(s) + ε̃(s), where w̃(s) is the predictive process and ε̃(s) is an independent process with zero mean and variance given by var{w(s)} − var{w̃(s)}. In terms of the covariance function of w(s), the variance of ε̃(s) is C(s, s; θ) − c(s; θ)′C*(θ)⁻¹c(s; θ), where c(s; θ) is the r × 1 vector with the covariances between w(s) and the w(s*_j)'s as its entries. Also, w*, w and w̃ denote the collections of the w(s*_i)'s over the r knots, the w(s_i)'s over the n locations, and the w̃(s_i)'s over the n locations, respectively.

A key issue in low-rank models is the choice of knots. Given a computationally feasible r, one could fix the knot locations using a grid over the extent of the domain, a space-covering design (e.g., Royle and Nychka 1998), or a more sophisticated approach aimed at minimizing a predictive variance criterion (see, e.g., Finley et al. 2009; Guhaniyogi, Finley, Banerjee, and Gelfand 2011). In practice, if the observed locations are evenly distributed across the domain, we have found relatively small differences in inference based on knot locations chosen using a grid, space-covering design, or other criterion. Rather, it is the number of knot locations that has the greater impact on parameter estimates and subsequent prediction. Therefore, we often investigate the sensitivity of inference to different knot intensities, within a computationally feasible range.
Example

Moving from (12) to its predictive process counterpart is as simple as passing an r × 2 matrix of knot locations, via the knots argument, to the spLM function. The choice between the non-modified and modified predictive process models, i.e., w̃(s) and w̃_ε(s), is specified using the modified.pp logical argument. Passing a spLM object, specified for a predictive process model, to spRecover will yield posterior samples from w̃ or w̃_ε and from w*.

We construct a second synthetic dataset using the same model and parameter values from Section 4.1.1, but now generate 2,000 observations. Parameters are then estimated using the following candidate models: i) non-modified predictive process with a 25-knot grid; ii) modified predictive process with a 25-knot grid; iii) non-modified predictive process with a 100-knot grid; and iv) modified predictive process with a 100-knot grid.

The spLM call for the 25-knot non-modified predictive process model is given below. The starting, priors, and tuning arguments are taken from Section 4.1.1. As noted above, the knots argument invokes the predictive process model. The value of this argument, c(5, 5, 0), specifies a 5 by 5 knot grid to be placed over the extent of the observed locations; the third value in this vector controls the extent of the grid, e.g., one may want the knot grid to extend beyond the convex hull of the observed locations. The placement of these knots is illustrated in Figure 2(b). Users can also pass in their own knot locations via the knots argument. (The printed run report notes 2 covariates, including the intercept, the exponential spatial correlation model, and the non-modified predictive process with 25 knots.)

For comparison with Table 2, the full rank model required 5.18 minutes to generate the 5,000 posterior samples. Parameter estimates from the full rank model were also comparable to those of model iv. These attractive qualities of the predictive process models do not extend to all settings. For example, if the range of spatial dependence is short relative to the spacing of the knots, then covariance parameter estimation will suffer. We are obviously forgoing some information about the underlying spatial process when using an array of knots that is coarse compared to the number of observations. This is most easily seen by comparing the estimated spatial random effects surfaces to the true surface used to generate the data, as shown in Figure 2. This smoothing of the random effects surface can translate into diminished predictive ability and, in some cases, poorer model parameter inference, compared to a full rank model. Following from Section 2.4, given coordinates and predictors for new locations and a spLM object, the spPredict function returns posterior predictive samples of y₀. The spPredict function provides a generic interface for prediction using most model functions in spBayes. The code below illustrates prediction using model iv for 1,000 holdout locations; here, X.ho is the 1,000 × 2 (i.e., t × p) predictor matrix associated with the 1,000 holdout coordinates stored in coords.ho.

Multivariate Gaussian spatial regression models

Multivariate spatial regression models consider m point-referenced outcomes that are regressed, at each location, on a known set of predictors,

y_j(s) = x_j(s)′β_j + w_j(s) + ε_j(s),  for j = 1, 2, . . ., m,
where x_j(s) is a p_j × 1 vector of predictors associated with outcome j, β_j is the p_j × 1 slope, and w_j(s) and ε_j(s) are the spatial and random error processes associated with outcome y_j(s). Customarily, we assume the unstructured residuals ε(s) = (ε₁(s), ε₂(s), . . ., ε_m(s))′ follow a zero-centered multivariate normal distribution with an m × m dispersion matrix Ψ. Spatial variation is modeled using an m × 1 Gaussian process w(s) = (w₁(s), . . ., w_m(s))′, specified by a zero mean and a cross-covariance matrix C_w(s, t) whose entries are the covariances between w_i(s) and w_j(t). spBayes uses the linear model of coregionalization (LMC) to specify the cross-covariance. This assumes that C_w(s, t) = AM(s, t)A′, where A is m × m lower-triangular and M(s, t) is m × m diagonal with each diagonal entry a spatial correlation function endowed with its own set of process parameters. Suppose we have observed the m outcomes at each of b locations. Let y be n × 1, where n = mb, obtained by stacking up the y(s_i)'s over the b locations. Let X be the n × p matrix of predictors associated with y, where p = Σ_{j=1}^{m} p_j, and β is p × 1 with the β_j's stacked correspondingly. Then, the hierarchical multivariate spatial regression models arise from (1) with the following specifications: D(θ) = I_b ⊗ Ψ, α is n × 1, formed by stacking the w_i's, and K(θ) is n × n, partitioned into m × m blocks given by AM(s_i, s_j)A′. The positive-definiteness of K(θ) is ensured by the linear model of coregionalization (Gelfand, Schmidt, Banerjee, and Sirmans 2004). spBayes also offers low-rank multivariate models involving the predictive process and the modified predictive process that can be estimated using strategies analogous to Section 2.3. Both the full rank multivariate Gaussian model and its predictive process counterpart are implemented in the spMvLM function. Notation and additional background for fitting these models are given by Banerjee et al. (2008) and Finley et al. (2009), as well as example code in the spMvLM documentation.

Non-Gaussian models

Two typical non-Gaussian first-stage settings are implemented in spBayes: i) binary responses at locations, modeled using logit or probit regression; and ii) count data at locations, modeled using Poisson regression. Diggle, Moyeed, and Tawn (1998) unify the use of generalized linear models in spatial data contexts. See also Lin, Wahba, Xiang, Gao, and Klein (2000), Kammann and Wand (2003) and Banerjee et al. (2004). Here we replace the Gaussian likelihood in (1) with the assumption that E[y(s)] is linear on a transformed scale, i.e., η(s) ≡ g(E(y(s))) = x(s)′β + w(s), where g(·) is a suitable link function. We refer to these as spatial generalized linear models (GLMs).
With the Gaussian first stage, we can marginalize over the spatial effects and implement our MCMC over a reduced parameter space. With a binary or Poisson first stage, such marginalization is precluded and we have to update the spatial effects when running our Gibbs sampler. We offer both the traditional random-walk Metropolis and the adaptive random-walk Metropolis (Roberts and Rosenthal 2009) to update the spatial effects. spBayes also provides low-rank predictive process versions for spatial GLMs. The analogue of (1) replaces the Gaussian first stage with f(·), a Bernoulli or Poisson density, where η(s) represents the mean of y(s) on the transformed scale. This model and its predictive process counterpart are implemented in the spGLM function. These models are extended to accommodate multivariate settings, outlined in Section 5, using the spMvGLM function.

Dynamic spatio-temporal models

There are many different flavors of spatio-temporal data and an extensive statistical literature that addresses the most common settings. The approach adopted here applies to the setting where space is viewed as continuous but time is assumed to be discrete. Put another way, we view the data as a time series of spatial process realizations and work in the setting of dynamic models. Building upon previous work on dynamic models by West and Harrison (1997), several authors, including Stroud, Müller, and Sansó (2001) and Gelfand, Banerjee, and Gamerman (2005), proposed dynamic frameworks to model residual spatial and temporal dependence. These proposed frameworks are flexible and easily extended to accommodate nonstationary and multivariate outcomes. Dynamic linear models, or state-space models, have gained tremendous popularity in recent years in fields as disparate as engineering, economics, genetics, and ecology. They offer a versatile framework for fitting several time-varying models (West and Harrison 1997). Gelfand et al. (2005) adapted the dynamic modeling framework to spatio-temporal models with spatially varying coefficients. Alternative adaptations of dynamic linear models to space-time data can be found in Stroud et al. (2001).

Model specification

spBayes offers a relatively simple version of the dynamic models in Gelfand et al. (2005). Suppose y_t(s) denotes the observation at location s and time t. We model y_t(s) through a measurement equation that provides a regression specification with a space-time varying intercept and serially and spatially uncorrelated zero-centered Gaussian disturbances as the measurement error ε_t(s). Next, a transition equation introduces a p × 1 coefficient vector, say β_t, which is a purely temporal component (i.e., time-varying regression parameters), and a spatio-temporal component u_t(s). Both of these are generated through transition equations, capturing their Markovian dependence in time. While the transition equation of the purely temporal component is akin to usual state-space modeling, the spatio-temporal component is generated using Gaussian spatial processes. The overall model is written as

y_t(s) = x_t(s)′β_t + u_t(s) + ε_t(s),  ε_t(s) ind.∼ N(0, τ²_t);
β_t = β_{t−1} + η_t,  η_t i.i.d.∼ N(0, Σ_η);   (17)
u_t(s) = u_{t−1}(s) + w_t(s),  w_t(s) ind.∼ GP(0, C_t(·, θ_t)),
where the abbreviations ind. and i.i.d. stand for independent and independent and identically distributed, respectively. Here x_t(s) is a p × 1 vector of predictors and β_t is a p × 1 vector of coefficients. In addition to an intercept, x_t(s) can include location-specific variables useful for explaining the variability in y_t(s). The GP(0, C_t(·, θ_t)) denotes a spatial Gaussian process with covariance function C_t(·; θ_t). We customarily specify C_t(s₁, s₂; θ_t) = σ²_t ρ(s₁, s₂; φ_t), where θ_t = {σ²_t, φ_t}, ρ(·; φ) is a correlation function with φ controlling the correlation decay, and σ²_t represents the spatial variance component. We further assume β₀ ∼ N(m₀, Σ₀) and u₀(s) ≡ 0, which completes the prior specification, leading to a well-identified Bayesian hierarchical model with reasonable dependence structures. In practice, estimation of the model parameters is usually very robust to these hyper-prior specifications. Also note that (17) reduces to a simple spatial regression model for t = 1.

We consider settings where the inferential interest lies in spatial prediction or interpolation over a region for a set of discrete time points. We also assume that the same locations are monitored at each time point, resulting in a space-time matrix whose rows index the locations and whose columns index the time points, i.e., the (i, j)-th element is y_j(s_i). Our algorithm accommodates the situation where some cells of the space-time data matrix have missing observations, as is common in the monitoring of environmental variables. Conducting full Bayesian inference for (17) is computationally onerous, and spBayes also offers a modified predictive process counterpart of (17). This is achieved by replacing u_t(s) in (17) with ũ_t(s) = Σ_{k=1}^{t} [w̃_k(s) + ε̃_k(s)], where w̃_k(s) is the predictive process as defined in (14) and the "adjustment" ε̃_t(s) compensates for the oversmoothing by the conditional expectation component and the consequent underestimation of spatial variability (see Finley, Banerjee, and Gelfand 2012 for details).

The spDynLM function takes a list of symbolic model statements, one for the regression within each time step. This can easily be assembled using the lapply function, as shown in the code below. Here too, we define the station coordinates as well as starting, tuning, and prior distributions for the model parameters. Exploratory data analysis using time-step-specific variograms can be helpful for defining starting values and prior support for the parameters in θ_t and τ²_t. To avoid cluttering the code, we specify the same prior for the φ_t's, σ²_t's, and τ²_t's. As in the other spBayes model functions, one can choose among several popular spatial correlation functions, including the exponential, spherical, Gaussian and Matérn; the exponential correlation function is specified in the spDynLM call below. Unlike the other model functions described in the preceding sections, the spDynLM function will accept NA y_t(s) values, and the sampler will provide posterior predictive samples for these missing values. If the get.fitted argument is TRUE then these posterior predictive samples are saved along with posterior fitted values for locations where the outcomes are observed.
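In place of the original listing, the sketch below indicates how such a call might be assembled. It uses toy data, and the argument names and list shapes follow our reading of the spDynLM documentation; all numerical values are placeholders and should be checked against the package manual rather than taken as the authors' settings.

```r
library(spBayes)

# Toy data: 25 stations, 5 time steps, one predictor per step (all made up).
set.seed(1)
n <- 25; N.t <- 5
coords <- cbind(runif(n), runif(n))
dat <- data.frame(matrix(rnorm(n * 2 * N.t), n, 2 * N.t))
names(dat) <- c(paste0("y.", 1:N.t), paste0("x.", 1:N.t))

# One symbolic model statement per time step, assembled with lapply.
mods <- lapply(1:N.t, function(t) as.formula(paste0("y.", t, " ~ x.", t)))

p <- 2                                      # intercept plus one predictor
fit <- spDynLM(mods, data = dat, coords = coords, get.fitted = TRUE,
               cov.model = "exponential", n.samples = 2000,
               starting = list("beta" = rep(0, N.t * p), "phi" = rep(3 / 0.5, N.t),
                               "sigma.sq" = rep(1, N.t), "tau.sq" = rep(1, N.t),
                               "sigma.eta" = diag(0.01, p)),
               tuning = list("phi" = rep(0.2, N.t)),
               priors = list("beta.0.Norm" = list(rep(0, p), diag(1000, p)),
                             "phi.Unif" = list(rep(3 / 1, N.t), rep(3 / 0.1, N.t)),
                             "sigma.sq.IG" = list(rep(2, N.t), rep(1, N.t)),
                             "tau.sq.IG" = list(rep(2, N.t), rep(1, N.t)),
                             "sigma.eta.IW" = list(2, diag(0.001, p))))
```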
Model choice

The spDiag function provides several approaches to assessing model performance and subsequent comparison for spLM, spMvLM, spGLM, and spMvGLM objects. These include the popular Deviance Information Criterion (Spiegelhalter, Best, Carlin, and Linde 2002), as well as a measure of posterior predictive loss detailed in Gelfand and Ghosh (1998) and a scoring rule defined in Gneiting and Raftery (2007).

9. Summary and future direction

spBayes version 0.3-7 (CRAN 6/1/13), and subsequent versions, offers a complete reformulation and rewrite of core functions for efficient estimation of univariate and multivariate models for point-referenced data using MCMC. The substantial increase in computational efficiency and flexibility in model specification, compared with earlier spBayes package versions, is the result of careful MCMC sampler formulation that focused on reducing the parameter space and avoiding expensive matrix operations. In addition, all core functions provide predictive process models able to accommodate the large data sets that are increasingly encountered in many fields.

We are currently developing an efficient modeling framework and sampling algorithm to accommodate multivariate spatially misaligned data, i.e., settings where not all of the outcomes are observed at all locations, that will be added to the spMvLM and spMvGLM functions. Prediction of these missing outcomes should borrow strength from the covariance among outcomes both within and across locations. In addition, we hope to add functions for non-stationary multivariate models such as those described in Gelfand et al. (2004) and more recent predictive process versions we developed in Guhaniyogi, Finley, Banerjee, and Kobe (2013). We will also continue developing spDynLM and helper functions. Ultimately, we would like to provide more flexible specifications of spatio-temporal dynamic models and allow them to accommodate non-Gaussian and multivariate outcomes.

Finley et al. (2007) outline the first version of spBayes as an R package for estimating Bayesian spatial regression models for point-referenced outcomes arising from Gaussian, binomial or Poisson distributions. For the Gaussian case, the recent version of spBayes offers several Bayesian spatial models emerging from the hierarchical linear mixed model framework.

Figure 1: Interpolated surface of the observed (a) and estimated (b) spatial random effects.

Figure 2: Interpolated surfaces of the (a) observed spatial random effects; (b), (c), (d), (e) are the estimated spatial random effects from models i, ii, iii, and iv, respectively. Filled circle symbols in (b), (c), (d), (e) show the location of predictive process knots. (f) plots holdout observed versus candidate model iv predicted medians and 95% CI intervals with a 1:1 line.

[Console output from the spDynLM call: model fit with 28 observations in 62 time steps; number of missing observations 117; number of covariates 4 (including intercept if specified); exponential spatial correlation model; number of MCMC samples 5000; priors and hyperpriors: beta normal ...]

Figure 4: Posterior distribution medians and 95% credible intervals for the model intercept and predictors.

Figure 6: Posterior predicted distribution medians and 95% credible intervals, solid and dashed lines respectively, for three stations. Open circle symbols indicate observations used for model parameter estimation and filled circle symbols indicate observations withheld for validation.
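Returning to the model-assessment tools mentioned under "Model choice" above, a minimal usage sketch is given below. It assumes a previously fitted spLM object (here called m.1, a hypothetical name) and simply illustrates the calling pattern, with parameter recovery run first so that posterior samples of the regression coefficients and spatial effects are available.

```r
## Hypothetical spLM fit 'm.1' run for n.samples MCMC iterations.
## Recover beta and spatial random effect samples, discarding burn-in.
m.1 <- spRecover(m.1, start = floor(0.5 * n.samples), verbose = FALSE)

## Model fit diagnostics: DIC, posterior predictive loss (Gelfand and Ghosh),
## and the scoring rule of Gneiting and Raftery, as described in the text.
spDiag(m.1)
```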
Table 1: Common BLAS and LAPACK functions used in spBayes function calls.

  dpotrf: LAPACK routine to compute the Cholesky factorization of a real symmetric positive definite matrix.
  dtrsv: Level 2 BLAS routine to solve the system of equations Ax = b, where x and b are vectors and A is a triangular matrix.
  dtrsm: Level 3 BLAS routine to solve the matrix equations AX = B, where X and B are matrices and A is a triangular matrix.
  dgemv: Level 2 BLAS matrix-vector multiplication.
  dgemm: Level 3 BLAS matrix-matrix multiplication.

A heavy reliance on BLAS and LAPACK functions for matrix operations allows us to leverage multi-processor/core machines via threaded implementations of BLAS and LAPACK, e.g., Intel's Math Kernel Library (MKL; http://software.intel.com/en-us/intel-mkl). With the exception of dtrsv, all functions in Table 1 are threaded in Intel's MKL. Use of MKL, or similar threaded libraries, can dramatically reduce sampler run-times. For example, the illustrative analyses offered in subsequent sections were conducted using R, and hence spBayes, compiled with MKL on an Intel Ivy Bridge i7 quad-core processor with hyperthreading. The use of these parallel matrix operations results in a near-linear speedup in the MCMC sampler's run-time with the number of CPUs; at least 4 CPUs were in use in each function call. spBayes also depends on several R packages, including: coda (Plummer, Best, Cowles, Vines, Sarkar, and Almond 2012) for casting the MCMC chain results as coda objects for easier posterior analysis; abind (Plate and Heiberger 2013) and magic (Hankin 2013) for forming multivariate matrices; and Formula (Zeileis 2013) for interpreting symbolic model formulas.

[Console output from the sampler: Sampled: 5000 of 5000, 100.00%; report interval Metropolis acceptance rate: 32.24%; overall Metropolis acceptance rate: 33.72%]

Table 2: Parameter estimates and run-time (wall time) in minutes for candidate predictive process models. Parameter posterior summaries are the 50 (2.5, 97.5) percentiles.
2013-10-30T15:16:32.000Z
2013-10-30T00:00:00.000
{ "year": 2013, "sha1": "b3b8303359846d6325564757022b69090d79b254", "oa_license": "CCBY", "oa_url": "https://www.jstatsoft.org/index.php/jss/article/view/v063i13/v63i13.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "32e62f09b19a4fd8f137f37728e838b578dcc348", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
773689
pes2o/s2orc
v3-fos-license
Spectroscopy of Massive Stars

Although rare, massive stars, being the main sources of ionizing radiation, chemical enrichment and mechanical energy in the Galaxy, are the most important objects of the stellar population. This review presents the many different aspects of the main tool used to study these stars, i.e. spectroscopy. The first part consists of an introduction to these objects and their physical properties (mass, wind, evolution, relation with their environment). Next, the spectral behaviour of single massive stars is investigated, in the visible as well as in the X-ray domain. Finally, the last part of this paper deals with massive binaries, especially those exhibiting a colliding wind phenomenon.

1. Astrophysics of Massive Stars: Introduction

1.1. Main characteristics.

A star is considered "massive" if it can ignite carbon burning in its core during the late stages of its evolution. Such stars are the progenitors of black holes and neutron stars and have masses larger than 10 M⊙. Since they go further into nucleosynthesis stages than stars like the Sun, these stars are the most important sources of chemical enrichment in galaxies. Such massive stars, of spectral type O, are blue and bright objects. Their luminosities amount to 10^5 − 10^6 L⊙: such objects are thus visible from far away in the Universe. In addition, their effective temperatures are larger than 30 kK, meaning that the majority of their radiation is emitted in the ultraviolet (UV). Massive stars are therefore the main sources of ionizing radiation in galaxies, and this explains why these stars are surrounded by bright nebulae that are H ii regions of ionized gas.

The distribution of mass amongst stars follows a law of the form dN = K M^{−α} dM, with N the number of stars in the mass interval [M, M + dM], K a constant and M the initial mass of the stars. The parameter α is generally considered to be 2.35 (the Salpeter value). Consequently, the number of stars decreases when the mass increases. Massive stars are therefore very rare among the stellar population: for each star with a mass between 60 and 120 M⊙ there form 250 stars with masses between 1 and 2 M⊙. This paucity also means that such stars are also generally distant: the nearest O-type star, ζ Oph, is situated between 417 and 509 light years, whereas the nearest Wolf-Rayet star (see below) belongs to the binary system γ² Vel and is somewhere between 740 and 975 light years from the Earth.

Figure 1. R136a, at the core of the 30 Doradus nebula in the Large Magellanic Cloud (LMC), was once thought to be a single, supermassive star. © HST

Figure 2. WR20a, the most massive system ever weighed, is an eclipsing binary (Top) and a spectroscopic binary (Bottom): the masses of the components can thus be evaluated precisely. (G. Rauw, private communication)

Whilst the lowest possible mass of a massive star is generally well known, the question of the maximum mass of these stars is not settled yet, mainly because of the difficulties in measuring the stellar masses. Estimating the mass of an object can be done by modelling its spectrum, but these models are not always reliable, especially when approaching the limits of the parameter space. The use of a mass-luminosity relation has also been widely popular amongst astronomers but it can result in unrealistically large masses when the object is not spatially resolved. For example, R136a, at the core of the 30 Doradus nebula in the LMC, was once thought to be a single, 1000-2500 M⊙ star. It was later found that R136a was actually composed of a dozen components (Weigelt et al. 1991, see also Fig. 1).
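As a quick check of the number ratio quoted above for the initial mass function, the short calculation below integrates dN = K M^{−α} dM over the two mass intervals; the Salpeter exponent α = 2.35 is the only input and the normalization constant K cancels out of the ratio.

```r
## Number of stars in [m1, m2] from dN = K * M^(-alpha) dM, up to the constant K.
imf.count <- function(m1, m2, alpha = 2.35) {
  (m1^(1 - alpha) - m2^(1 - alpha)) / (alpha - 1)
}

## Ratio of 1-2 solar-mass stars to 60-120 solar-mass stars.
imf.count(1, 2) / imf.count(60, 120)   # ~250, as quoted in the text
```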
In fact, the only reliable method for deriving masses is to observe eclipsing binaries. Using Kepler's laws, one can then constrain all the physical parameters of the system by observing the photometric eclipses and the spectroscopic signature of the orbital motion. However, we may note that only young binaries, where no interactions have taken place, can lead to reliable, typical masses. The most massive stars detected so far by this method belong to WR20a and have masses of 82 and 83 M⊙ (see Fig. 2). In addition, another method has been used by Figer (2005) to derive an upper limit on the stellar mass: when observing the Arches cluster, he should have detected 20 to 30 stars of mass larger than 150 M⊙, but this was not the case (see Fig. 3). He therefore concludes that there exists no star with a mass larger than 150 M⊙. We emphasize that this upper limit is much larger than the actual largest mass observed in WR20a.

Figure 3. Top: The Arches cluster, observed here by HST, contains more than 2000 stars. Bottom: In this cluster, the existence of many stars with more than 150 M⊙ had been predicted theoretically but none was found. © HST

Figure 4. Excerpt from the digital atlas of Walborn & Fitzpatrick (1990).

It can also be noted that the very first generation of massive stars (when there was no metal in the Universe, i.e. population III objects) could have reached much larger masses (hundreds to 1000 M⊙). They could have given birth to intermediate-mass black holes, which might have become seeds for the supermassive black holes at the centers of galaxies. These first stars should also have been very luminous and are therefore thought to be responsible for the re-ionisation of the Universe approximately one billion years after the Big Bang. Finally, they would have been the first celestial objects to build metals, sowing the whole Universe. However, all this is still very putative.

1.2. Classification of Massive Stars.

There exist two ways of classifying O-type stars: the spectral morphology or the determination of line ratios. The first criterion fits the general philosophy of the MKK classification scheme, where a sample of 'typical' standard spectra is constructed and serves as a comparison when determining spectral classes. For O-type stars, the main tool of this kind is the atlas made by Walborn & Fitzpatrick (1990), which provides low-resolution spectra in the 4000-4700 Å range (see Fig. 4). As O-type stars are very hot, the lines useful for classification are indeed helium lines: the stronger the He ii lines compared to the He i lines, the earlier (i.e. the hotter) the star. The transition from O to B-type stars occurs when the He ii λ4542 line is extremely weak, barely detectable. Note that a morphological classification is also used to classify evolved massive stars, i.e. Wolf-Rayet stars. The second criterion is more quantitative and uses the determination of equivalent widths (EWs). The EW of a specific line represents the width of a rectangular line of the same area as the actual observed line. Conti & Frost (1977) and Mathys (1988, 1989) have shown that the spectral type of an O star can be determined by comparing the EW of the He i λ4471 line to that of the He ii λ4542 line (see Table 1). Attempts have been made to create a similar quantitative scale in other parts of the electromagnetic spectrum but the Conti-Mathys scale is still the most popular one. The luminosity class can also be deduced from similar EW measurements (see the same Table).

1.3. Stellar Winds.
It is well known that the Sun possesses a solar wind, which is notably responsible for generating polar auroras on Earth. The origin of this wind is linked to the existence of a solar corona. Outside the photosphere (∼6000 K), the temperature of the gas increases to reach 10^6 K in the corona. At such temperatures, the gas pressure is very high and the gas then expands naturally into the lower-pressure surrounding regions. By this mechanism, the Sun loses ∼10^{−14} M⊙ yr^{−1}. In comparison, massive stars lose 10^{−7} − 10^{−4} M⊙ yr^{−1}, and they do not possess any corona. In fact, the presence of winds in luminous stars is linked to the large luminosity of these objects. In the atmosphere of such stars, there is a transfer of momentum from the photons of the stellar radiation field to the material surrounding the star. In other words, the winds are driven by the absorption of the stellar radiation and its subsequent scattering (see e.g. Lamers & Cassinelli 1999). If we consider an atom moving in a radial direction that absorbs a photon of frequency ν, its momentum becomes m v'_r = m v_r + hν/c. A moment later, the atom re-emits the photon at an angle α (see Fig. 5) and the momentum along the radial direction becomes m v''_r = m v'_r − (hν'/c) cos(α).

Figure 5. The stellar wind of massive stars is line-driven. Atoms and ions absorb and then re-emit the stellar radiation.

Figure 6. Formation of a P Cygni profile, resulting from the superposition of an emission profile and an absorption profile. The emission comes from light scattered into the line-of-sight from all regions of the wind (front, blueshifted wind and back, redshifted wind). The absorption comes from light scattered away from the line-of-sight by atoms between the star and the observer: in this part, the wind is coming towards us, and the absorption is thus blueshifted.

The atom will mainly absorb photons at the frequency of specific lines. The frequency, in the rest frame, of such a line will be denoted ν_0. If the atmosphere were static, only radiation near the photosphere would be absorbed. However, since there is a velocity gradient in the wind, each region actually absorbs at a different frequency because of the Doppler shift. If the atom is moving with a velocity v_r relative to the star, the source of photons, it will see stellar photons with a shifted frequency ν(1 − v_r/c), so the absorbed photon has a frequency ν = ν_0(1 + v_r/c) in the rest frame of the star. When the photon is subsequently re-emitted, an observer will see that an atom with a velocity of v'_r has emitted a photon of frequency ν' = ν_0(1 + v'_r/c). Therefore, the final velocity is given by v''_r = v'_r − (hν_0/mc)(1 + v'_r/c) cos(α). Replacing v'_r by the value derived above and assuming that v << c and hν_0 << mc^2, we then find: v''_r − v_r = (hν_0/mc)(1 − cos(α)). Therefore, we can see that forward scattering (α = 0) does not increase the momentum of the atom whereas backward scattering (α = 180°) increases it by 2hν_0/c. Since the scattering takes place randomly, we can calculate the average momentum gained by averaging over a solid angle of 4π: ⟨m(v''_r − v_r)⟩ = (1/4π) ∫ (hν_0/c)(1 − cos α) dΩ = hν_0/c. If the photons were coming from any direction, the net momentum gain would be zero. However, since they come only from the star, a radial acceleration appears, which results in a so-called beta velocity law: v(r) = v_∞ (1 − R_*/r)^β, where R_* is the stellar radius and β amounts to 0.8-1 for O-type stars and 1-2 for WR stars.
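As an illustration of the β velocity law just given, the short sketch below evaluates v(r) for a hypothetical O-type star; the adopted values of v_∞, R_* and β are merely representative of the ranges quoted in the text.

```r
## Beta velocity law: v(r) = v_inf * (1 - R_star/r)^beta
v.beta <- function(r, v.inf = 2000, R.star = 1, beta = 0.8) {
  v.inf * (1 - R.star / r)^beta        # r and R.star in the same units (stellar radii)
}

r <- c(1.01, 1.1, 1.5, 2, 5, 10, 50)   # radii in units of R_star
round(v.beta(r))                       # wind velocity in km/s; approaches v_inf far from the star
```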
The terminal velocities v_∞ can reach 1000-3000 km s^{−1} for massive stars, whereas the velocity of the solar wind is much lower, only 400-700 km s^{−1}. The acceleration is mainly provided by the absorption and re-emission of UV photons in the lines of the most abundant metallic ions (CNO+Fe). As a consequence, the acceleration is less effective and the mass-loss rate Ṁ is much smaller in a low-metallicity environment. The energy gained by the metallic ions is subsequently shared with all the other particles through interactions with them (Coulomb coupling), so that the whole wind expands. If all photons leaving the star were absorbed or scattered, the momentum of the wind would equal that of the radiation, i.e. Ṁ_max v_∞ = L/c; this is called the single scattering limit. However, note that a photon can be absorbed and re-emitted several times! The presence of the wind affects the emergent spectrum. The wind acceleration, especially in the UV, generates P Cygni line profiles (see Fig. 6). In addition, some emission lines (e.g. N iii λλ4634,4641 or He ii λ4686) will be formed in the wind, and not in the photosphere as most of the absorption lines. The spectra of Wolf-Rayet stars give an extreme example of this: they display exclusively broad emission lines formed in their thick stellar wind (see Fig. 7). The radiation-driven process is an unstable one, so the winds of massive stars are intrinsically unstable. Therefore, the wind is more patchy than a smooth ejection of matter. The mass-loss diagnostic lines, like Hα, thus arise in small, dense regions (called clumps): if the lines are analyzed with a spherically symmetric smooth wind model, the mass-loss rate will be overestimated. Generally, the filling factor of the wind, i.e. the ratio between the volume of the clumps and the total volume, is found to be 0.05-0.25. Since the winds are driven by radiation, the mechanical momentum of the wind should be related to the luminosity of the star. This relation is often expressed as the "wind-momentum luminosity relation", Ṁ v_∞ (R_*/R_⊙)^{1/2} = D_0 L^x, where D_0 is a function of the spectral type and the luminosity class of the star and x depends on the spectral type, luminosity class, and metallicity of the star (see Herrero 2005 and references therein).

Figure 8. In M17, a baby massive star accretes matter from a circumstellar disk (here seen in silhouette). © ESO

1.4. Evolution.

Although massive stars have been observed for centuries, astronomers still debate how they actually form. In fact, these stars can ignite hydrogen core burning before they have reached their final mass: the radiative pressure of the light then emitted should prevent any further direct accretion. Three main theories have been proposed to overcome this problem. Massive stars could form through the coalescence of lower-mass proto-stars, through accretion from a circumstellar disk (just as low-mass stars, but with a higher accretion rate), or through competitive accretion (the most massive clusters producing the most massive stars). Up to now, several lines of evidence seem to favor the accretion model (e.g. Chini et al. 2004, see Fig. 8). Note that the formation of massive binaries is not completely understood either: tidal capture and fragmentation of the unstable accretion disk have been proposed to explain their existence. Once they are formed, massive stars emit large quantities of radiation: they actually burn the hydrogen in their core at a much higher rate of nuclear reactions than lower-mass objects.
Therefore, although massive stars have more "fuel", they have a much shorter lifetime: a few million years, whereas the Sun will live ten billion years. Because of this short life, these stars do not generally have the time to wander far away from their birth place. Most of the massive stars are thus found in clusters, and the few isolated objects are generally considered as having formed in a cluster and been ejected later on, because of tidal interactions or following a supernova kick (if the star belonged to a binary system). If the formation of massive stars is not completely understood, this is also the case for the details of their subsequent evolution and death. The most popular evolution scenario is the so-called Conti scenario: massive stars follow the path O → X → H-poor WN → WC. WN and WC are two types of Wolf-Rayet (WR) stars, WN being nitrogen-enriched and WC carbon-enriched; X is a phase that depends on the mass of the star, i.e. a red supergiant (RSG) phase if the star has M < 40 M⊙ and a luminous blue variable (LBV) stage otherwise (for more details, see Maeder et al. 2005 and references therein). The wind already plays an important role during the main-sequence lifetime since it is responsible for steadily decreasing the mass of the star at a rate of about 10^{−7} − 10^{−6} M⊙ yr^{−1}. However, the mass loss further increases during the later stages of the evolution of massive stars: the mass-loss rate of typical Wolf-Rayet stars is 10^{−5} − 10^{−4} M⊙ yr^{−1}, and gigantic mass ejections (whose actual trigger is not known) can even take place during the RSG or LBV phases (see Fig. 9).

Figure 9. Left: η Carinae is an LBV that underwent two eruptions in the 19th century, losing 1-2 M⊙ in the process. Right: WR124, a Wolf-Rayet star, is surrounded by thick ejecta from previous evolutionary stages. © HST

The separation between the photosphere and the very thick wind is thus less and less clear as the star evolves, and material that was once in the convective core of the star, where the nuclear reactions happen, becomes exposed at the stellar surface. This explains the anomalous abundances observed in the spectrum of these stars: for example, WN stars show at the surface the results of hydrogen burning through the CNO process whereas WC stars expose layers that, since they were originally deeper inside the star, have rather been affected by helium burning. Finally, it is supposed that massive stars end their life in a supernova explosion. Massive progenitors have even been proposed for the powerful gamma-ray bursts.

1.5. Interactions with the surroundings.

Stars are never completely isolated, and the large energy output of the massive stars certainly has a huge impact on their environment. These stars interact with their surroundings through their intense ionizing radiation, their powerful stellar winds, and eventually their final supernova explosion. The winds participate in the dissemination of the chemical elements built in the star, but they also transfer large amounts of mechanical momentum to the InterStellar Medium (ISM). We may note in this context that the total amount of energy released through the wind during the entire lifetime of a massive star is comparable to the energy of one supernova explosion: the wind contribution can thus not be neglected. By sweeping up the surrounding gas, the winds are therefore able to shape the ISM into bubbles.
These bubbles have sizes of about ten light years when they are blown by single stars, but can reach thousands of light years if several of these stars act collectively (they are then called superbubbles and supergiant shells). This input of mechanical energy into the surroundings can be enough to induce the formation of new generations of stars. One of the first attempts to model these peculiar structures was presented by Weaver et al. (1977, and references therein). This simple model has been so often used that it is generally considered as the 'standard' one. In this scheme, a typical bubble consists of four regions, which are, starting from the star outwards, (1) a region where the wind freely expands, (2) a zone containing the shocked wind, (3) a region with the shocked ISM, and (4) the undisturbed ambient ISM (see Fig. 10).

Figure 10. Top: The structure of a typical wind-blown bubble in the energy-driven case (see e.g. Weaver et al. 1977). Bottom: NGC 6888, a bubble blown by a Wolf-Rayet star. © SDSU/MLO/Y. Chu et al.

The evolution of a typical bubble proceeds mainly through three stages (see e.g. Lamers & Cassinelli 1999). At first, the wind is not stopped by the ISM, and the bubble expands so fast that the radiative cooling does not have enough time to affect its evolution: this is called the 'adiabatic phase'. As the amount of swept-up matter increases, the shock velocity begins to decrease, and the cooling of the shell of shocked ISM finally becomes significant: the gas compresses into a very thin and dense shell. During that phase, the shocked wind, on the contrary, becomes hotter and hotter, and does not have enough time to cool. The pressure of this hot, shocked wind is so high that it now drives the expansion of the shell. This marks the beginning of the second phase, during which the bubble is generally called an 'energy-driven' one. Most observed bubbles are in this phase. Finally, the hot wind also cools, and in turn collapses into a dense shell. At this point, the wind impacts directly on the shell, and directly transfers momentum to it: the bubble is then called a 'momentum-conserving' one. During the second phase, the bubbles are detectable through (nearly) the whole electromagnetic spectrum. The stellar wind flows first at thousands of km s^{−1}, but it nearly stops at the reverse shock that expands with velocities of only a few tens of km s^{−1}. Its entire kinetic energy is then converted into thermal energy, heating up the shocked wind to millions of degrees. The interior of the bubble, filled with hot gas, will thus be detectable in X-rays. On the other hand, the thin shell of shocked ISM has cooled but is nevertheless typically at a temperature of ∼10^4 K, since it is still ionized by the stellar radiation: it will thus be detectable through the usual nebular emission lines (Hα, [O iii], ...). The interface layer between these two regions is detected through absorptions in the UV range, but a gap between the hot interior and the dense shell can also reveal the presence of this layer (Chu et al. 2003). The Weaver et al. (1977) model attempts to describe the evolution of such a bubble. It relies on a few hypotheses: the ISM is initially at rest and has a uniform density n_0; the source of the constant wind is point-like; and the pressure inside the bubble is uniform.
To find the evolution of an energy-driven bubble, one then simply needs to apply the conservation equations, which can be written as

M_shell = (4π/3) R^3 μ n_0 m_H,
d(M_shell V_exp)/dt = 4π R^2 P,
dE/dt = L_w − 4π R^2 P V_exp,   with E = 4π R^3 P / [3(γ − 1)],

where R, V_exp and M_shell are the radius, expansion velocity and mass of the thin shell of shocked ISM; n_0 is the ambient number density (in cm^{−3}); μ is the mean molecular weight in the H ii region; P is the pressure inside the bubble; E is the internal energy and γ is the adiabatic index of the hot gas (taken to be 5/3); and L_w = 0.5 Ṁ v_∞^2 is the mechanical luminosity of the stellar wind. The solution to these three equations is the self-similar expansion R(t) ≈ 0.76 (L_w/ρ_0)^{1/5} t^{3/5}, with ρ_0 = μ n_0 m_H and V_exp = dR/dt = (3/5) R/t. The temperature and density inside the bubble can further be derived if the heat conduction from the shocked wind into the shocked ISM is considered (Weaver et al. 1977). Once the temperature distribution is known, the X-ray luminosity of the hot shocked wind can also be estimated (Chu et al. 1995). However, when confronted with observations, this model generally experiences several difficulties. The wind luminosity has to be decreased by an order of magnitude to account for the observed shell radius, shell expansion and ISM density. In addition, the predicted X-ray luminosity is generally not compatible with the observations. More specifically, Oey (1996) has performed numerical simulations of superbubbles, taking into account the evolution of their stellar content, including the supernova explosions, and has compared these models to the observational properties of several superbubbles. She found that these structures could be divided into two classes, one with very discrepant expansion velocities, and one whose velocity is more compatible with the model. However, even these latter structures require a drastic reduction of the input mechanical luminosity of the stars to reach a perfect agreement with the simulations. Dunne et al. (2001) also showed that superbubbles are generally brighter in the X-ray domain than expected from their stellar content. Mass loading and/or off-center supernovae are often thought to be responsible for these discrepancies. The observed properties of single-star bubbles, like those detected around WR stars, have also been investigated and compared to the theoretical expectations. García-Segura (1994) has extended the work of Weaver et al. (1977) to this specific case. He has taken into account the mass-loss evolution of the massive star prior to the WR stage, but his simulations agree more qualitatively than quantitatively with the observations. Again, the wind luminosity has to be decreased in order to match the observed radius, velocity and X-ray luminosity of the bubbles. Superbubbles and WR bubbles are rather complex objects, in which a lot of poorly known factors (e.g. the exact ISM density distribution, the exact mass-loss history of the star) could influence the shape and evolution of the bubble. To get a realistic comparison between the so-called 'standard' theory and the observations, one should rather consider the simplest objects: bubbles blown by a single, main-sequence massive star interacting directly with the ISM. Such bubbles are called 'interstellar bubbles', but only a few of them are known. Nazé et al. (2001b, 2002) have discovered several such structures in N11B, N180B, and N44, and their properties agree better with theoretical expectations than in the case of WR bubbles and superbubbles, but the agreement is still far from perfect.

2. Spectroscopy of Single Massive Stars

Although many wavelength ranges (e.g.
infrared and ultraviolet) provide important spectroscopic information, we will focus in this section on the results obtained in the visible and X-ray domains.

2.1. Visible Domain.

Apart from classifying the star, visible spectra are also used as input for modelling and variability studies. Modelling spectra constitutes a crucial step in astrophysics since it is the only way to derive intrinsic stellar properties like temperature, mass-loss rate, gravity, chemical composition, rotation, etc. Even distances could be estimated thanks to modelling: once the wind-momentum luminosity relation (see above) is calibrated, the determination of the radius, mass-loss rate and terminal velocity through the modelling of the observed stellar spectrum leads to an estimate of the intrinsic luminosity of the star, hence its distance. Although modelling stellar spectra is not a trivial task, it has nevertheless greatly improved in recent years. The first models were indeed rather simple: they assumed plane-parallel and static atmospheres composed only of hydrogen and helium. This led to the first determinations of the temperature scales of massive stars. Very soon, a problem arose: the mass predicted by these models was systematically smaller than that predicted by evolutionary models on the basis of the position of the star in the HR diagram (Herrero 2005 and references therein). This so-called mass discrepancy was mostly solved by new, improved models that include:

• non-LTE (Local Thermodynamic Equilibrium) effects. Since massive stars possess a very intense radiation field, the radiative phenomena largely dominate over the collisional effects.

• a spherical, dynamic atmosphere. The plane-parallel approximation is valid only if the height of the atmosphere is small compared to the stellar radius. This is not the case for massive stars, since the optical depth of the wind can be significant out to several tens of stellar radii. In addition, the windy atmosphere of massive stars is expanding, and a photon emitted at one point can be absorbed much farther away by another line thanks to the Doppler effect.

• line-blanketing. Metals, although rare, play an important role in the ionisation of the wind since they are very efficient at absorbing photons. This is especially true in the UV, where numerous lines are present: different atoms/ions can thus absorb radiation in the same frequency range. The inclusion of metals in the modelling thus results in a blocking of the UV flux, which leads (1) to a heating of the inner atmosphere (backwarming) and (2) to a cooling of the outer atmosphere (where the ionisation, determined by the now reduced UV flux, is decreased).

Such models have led to a new parameter scale for massive stars (see Martins et al. 2005, parameters reproduced in Table 2). Due to the backwarming effect of the inner atmosphere where the absorption lines form, the same ionisation (i.e. helium ratios) can be reached with a lower effective temperature, and this leads to a reduction of the temperature scale by up to 8000 K. On the contrary, for Wolf-Rayet stars, the emission lines are formed in the outer parts of the atmosphere, where the inclusion of the line blanketing results in a reduction of the ionisation: higher effective temperatures are thus needed to fit the spectra. However, we may mention that the most recent models are still 1-D and stationary ones, and that the work continues to further improve the models.
Table 2. Theoretical stellar parameters as a function of spectral types and luminosity classes, as determined by Martins et al. (2005). The effective temperature T_eff, here displayed in kK, is the temperature of a blackbody emitting the same amount of radiation as the star (therefore L = 4π R_*^2 σ T_eff^4); R_* is the stellar radius in R_⊙ and M_* the stellar mass in M_⊙. Columns: Subtype; Dwarf Stars (V); Giant Stars (III); Supergiant Stars.

Another possible analysis of the spectrum of a single star is to investigate its variability. A very efficient tool for this task is the Temporal Variance Spectrum (TVS) that was defined by Fullerton et al. (1996). Consider a dataset of N normalized spectra with the same wavelength sampling and arrange them in a matrix S, where S_ij is the jth wavelength element of the ith spectrum. To search for variability, the spectra have to be compared to a mean spectrum, and the differences tested statistically. However, the signal-to-noise ratio is different for each spectrum and a weighting is therefore needed in order to perform meaningful statistical tests. To this aim, we define the weighting factor w_i as σ_0^2/σ_ic^2, where σ_ic is the inverse of the signal-to-noise in the continuum of the ith spectrum, and σ_0^2 is equal to (N^{−1} Σ_i σ_ic^{−2})^{−1}, i.e. σ_0 is the inverse of the rms signal-to-noise in the continuum of the dataset. The weighting factor therefore makes it possible to reduce the importance of low-quality (i.e. low signal-to-noise) spectra. It is also important to take into account the wavelength-to-wavelength variations of the noise: indeed, the signal-to-noise ratio is higher in emission lines, but the intrinsic variations are also higher and larger deviations are thus normally expected for these lines. The observed deviations therefore have to be "normalized" by a correcting factor reflecting the expected noise. If the exposure time is sufficient, the instrumental noise is negligible compared to the photon noise and a good correction factor α_ij = σ_ij^2/σ_ic^2 would be simply S_ij, the spectrum itself: α_ij is then < 1 if there is an absorption line (since a lower signal implies a lower noise) and > 1 in the case of an emission line above the continuum. Taking these correcting factors into account, the TVS at a given wavelength is then defined as

(TVS)_j = (N − 1)^{−1} Σ_{i=1}^{N} w_i (S_ij − S̄_j)^2 / α_ij,

where S̄_j is the weighted mean spectrum. This TVS follows a σ_0^2 χ^2_{N−1} distribution (χ^2 being here the reduced chi-squared), and a statistical test can therefore be performed easily: the deviations are generally considered significant if they exceed the 99% level. An example of such a TVS is shown in Fig. 11. Once variability is detected and if the spectra are sufficiently numerous, the dataset can be searched for periodicities in the observed variations. This can be done for example with the Generalized Fourier Transform (see Heck et al. 1985 and remarks in Gosset et al. 2001), which extends the Fourier Transform to the case of non-regular sampling. For massive stars, a stochastic variability is generally expected because of the presence of the unstable stellar wind that generates short-lived small-scale structures. In fact, the instability of the wind is intimately linked to the line-driven mechanism. Consider an atom or ion at a distance r from the star and moving at a velocity v_r. It absorbs photons at a frequency ν_0 in its reference frame, i.e. a frequency ν_0(1 + v_r/c) in the reference frame of the star (cf. above). The velocity can be slightly perturbed and become v_r + δv.
If δv > 0, the atom/ion will absorb photons of higher frequency, of which plenty are available, and it will therefore accelerate even more: the perturbation is thus amplified. On the contrary, if δv < 0, the atom/ion can only absorb photons of lower frequency, but these have already been absorbed by the slower material closer to the star and are no longer available: the particle will therefore decelerate even more with respect to the unperturbed v_r velocity law, and the perturbation is again amplified. Hence, any slight perturbation of the velocity is doomed to be amplified, provoking the formation of small-scale structures (so-called clumps) in the wind. These structures are thought to produce a stochastic variability of the spectrum. Sometimes, however, variations appear to be regular, with periods ranging from a few hours to several years. The most common sources of variability include:

• Pulsations. It is well known that the spectra of Cepheids change as these stars pulsate radially. Non-radial pulsations of lower intensity can also modify the line profiles and magnitudes of stars. Generally, asteroseismology has focused on low-mass stars, but a few massive objects (e.g. ζ Oph, HD 152219, HD 93521) apparently also display pulsations.

• Structures in the Wind. Structures in the wind on a rather large scale can appear and disappear, modifying the observed spectrum. For example, if the stellar surface harbors a cold (resp. hot) point, the mass-loss rate above that point will be decreased (resp. increased) and the velocity of the gas will become larger (resp. smaller) because of the reduced (resp. increased) absorption: this modified wind will soon collide with the "normal" surrounding gas. Due to the stellar rotation, spiral structures might then appear, and give rise e.g. to Discrete Absorption Components (DACs, see Fig. 12 and Cranmer & Owocki 1996).

• Magnetic Fields. Because of the very broad lines in the spectra of massive stars, it is very difficult to estimate their magnetic field, and so far there are only two cases, the stars θ¹ Ori C and HD 191612, where it has been measured with certainty. In the θ¹ Ori C system, the wind material is funneled by the magnetic field towards the magnetic equator, creating a dense equatorial region where some emission lines can arise. A recurrent modulation of the spectrum then appears because the magnetic axis is not aligned with the rotation axis (hence the name magnetic oblique rotator used for θ¹ Ori C): different parts of the "disk" are therefore seen at different phases of the rotation cycle (see Fig. 13 and Stahl et al. 1996).

In this context, the peculiar variability of the Of?p stars needs to be mentioned. The Of?p category was introduced by Nolan Walborn in 1972 to describe two stars, HD 108 and HD 148937, with spectra that were slightly different from those of normal Of supergiants. Notably, they present C iii lines around 4650 Å with an intensity comparable to that of the neighbouring N iii lines. In addition, their spectra show sharp emission lines and some P Cygni profiles. A third star was soon added to this new class, HD 191612. The observation of HD 108, the best studied member of this class, led to conflicting results in the past, with explanations for the radial velocity variations ranging from binary motion to stochastic wind instabilities. Using a 15-yr monitoring campaign of the star, Nazé et al.
(2001a) discovered that the star actually underwent long-term line profile variations: the Balmer lines and the He i lines passed from emission or P Cygni profiles to absorptions while other emission and absorption lines, like He ii λ4542, remained unchanged. These variations appear recurrent with a timescale of approximately 50-60 years. A few years later, Walborn et al. (2003, 2004) reported a very similar phenomenon in the spectrum of another Of?p star, HD 191612 (see Fig. 14), but the timescale appears much shorter, about 540 days. The same timescale was subsequently detected in Hipparcos photometry (see Nazé et al. 2005 and references therein). Investigations to determine the exact nature of these peculiar stars are still ongoing.

2.2. X-rays.

Because of technical difficulties, X-ray astronomy was born quite recently. Indeed, rockets or satellites are needed to overcome the absorbing effect of our atmosphere: doing X-ray astronomy was thus not possible before World War II. In addition, the efficiency of X-ray detectors and telescopes has improved very slowly, and that explains why the first generation of "great observatories" (i.e. Chandra and XMM-Newton) was only launched in 1999. Spectroscopy in X-rays can be performed in three different ways:

• CCDs. CCDs for X-rays can provide much more than a simple image. Due to the low luminosities of astronomical objects in the X-ray range, the X-ray photons can in fact be recorded one at a time. The electron shower generated at the arrival of the photon will therefore be recorded precisely in position and also in intensity. As the number of electrons is directly proportional to the energy of the incident photon, CCDs provide a cheap and simple way to do spectroscopy, although only at a rather low resolution (R = E/dE ∼ 10 − 50).

• Gratings. Gratings (either in transmission or reflection) can be used in the X-rays with only little modification compared to the visible domain. Such instruments provide a higher resolution than CCDs: R ∼ 200 − 2000.

• Bolometers. As for CCDs, X-rays are detected one at a time in bolometers, where the photon energy is converted into thermal energy of the electrons. Since a higher-frequency photon will lead to a larger temperature increase of the bolometer, spectroscopy is a direct by-product of the use of bolometers. They provide high-resolution spectra (R ∼ 1000), but they need to be cooled to 0.1 K (because an X-ray photon will provoke a ∆T of only a few 0.001 K!). The Japanese observatories Astro-E and Astro-E2 should have used the first bolometers for X-ray astronomy, but the former exploded after launch and the latter lost all its liquid helium (necessary to cool the bolometer) shortly after orbit insertion.

Figure 15. X-ray spectrum of 9 Sgr, obtained with XMM-Newton using a CCD (top) or a grating (bottom). In the low-resolution spectrum, the lines seen at high resolution are blended, forming a bell-shaped pseudo-continuum. (From Rauw et al. 2002)

An example of spectra obtained with the first and second methods is shown in Fig. 15. As far as massive stars are concerned, X-ray astronomy really began 25 years ago. At that time, the Einstein observatory had just been launched and NASA was trying to calibrate it by observing well-known sources. The observation of one of these sources, Cyg X-3, revealed four nearby spots (see Fig. 16).
At first thought to be due to an instrumental effect, the spots were soon found to mark the discovery of X-ray emission from four massive stars belonging to the Cyg OB2 star cluster. Indeed, Einstein and its followers showed that X-ray emission is very common among massive stars.

Figure 16. The X-ray emission from massive stars was discovered serendipitously in December 1978, when Einstein observed the bright Cyg X-3 for calibration purposes. The four "spots" above Cyg X-3 correspond to the massive stars Cyg OB2 #5, 8, 9 and 12. © Einstein

However, the exact origin of that emission is still under debate. Some authors had first proposed that a corona at the base of the wind, analogous to what exists in low-mass stars, could be responsible for the high-energy emission of massive stars. However, several observational objections against such models were raised (see Owocki & Cohen 1999 and references therein): the absence of a strong attenuation by the stellar wind (this suggests that the source of the X-ray emission lies significantly above the photosphere, at several stellar radii), too low an X-ray output, inconsistencies between UV and X-ray predictions compared to observations, etc. As an alternative, a scenario based on the instability of the line-driven mechanism has been proposed. In fact, an unstable line-driven wind does not flow at the same velocity everywhere and shocks between the different parts are expected, causing the formation of dense shells which will be distributed throughout the whole wind. At first, the forward shocks between a fast wind and the ambient slow ("shadowed") material were considered as the probable cause of the X-ray emission, but subsequent hydrodynamical simulations rather showed the presence of strong reverse shocks which decelerate the fast, low-density material. However, the resulting X-ray emission from such material after it has crossed the reverse shock is very low, and probably cannot explain the level of X-ray emission observed amongst O stars. More recent simulations by Feldmeier et al. (1997) have shown that mutual collisions of dense shells of gas compressed in the shocks would lead to substantial X-ray luminosities, comparable to the observed ones. Such models also predict significant short-term variations of the X-ray flux but, since this is not observed, it was concluded that the winds are most probably fragmented (or clumpy), so that individual X-ray fluctuations are smoothed out over the whole emitting volume, leading to a rather constant X-ray output (Feldmeier et al. 1997). In addition, a supplementary X-ray emission may result from other mechanisms, which are not necessarily present in every massive star. For example, an accreting compact object will generally emit a wealth of X-rays and can have a drastic impact on the stellar wind structure (e.g. Kallman & McCray 1982). In binary systems containing two hot stars, a colliding-wind phenomenon (see next section) and/or inverse Compton scattering by relativistic electrons accelerated by the shocks (Chen & White 1991) will also lead to additional X-ray emission. Finally, the wind from both hemispheres, deviated by a magnetic field, can collide in the equatorial regions, and provide another substantial source of X-ray emission (Babel & Montmerle 1997). On the observational side, it was soon found that the X-ray luminosity scales with the bolometric luminosity of massive stars. Although quite dispersed in the past, the data now show the relation to be rather tight:
log L_X (0.5-10 keV) = log L_BOL − 6.91 ± 0.15 (Sana et al. 2006b). On the other hand, low-resolution spectroscopy has unveiled the fundamental characteristics of the X-ray emission. First, it is not of the blackbody type: as the heated gas is optically transparent at these wavelengths, the observed emission corresponds to the superposition of discrete emission lines. This emission can thus be fitted by optically thin thermal plasma "mekal" or "Raymond-Smith" models. Moreover, the temperature of the emitting gas is found to be about 0.5 keV, or 6 MK, and the wind absorption is generally low, except for Wolf-Rayet stars. The high-resolution spectra make it possible to go further into the analysis of the X-ray emission. Two main types of studies can be undertaken. The first one is related to the fir triplets. These lines are seen in helium-like ions and correspond to transitions from the n = 2 level to the n = 1 ground level. The f line, or forbidden line, arises from the transition 1s2s (^3S_1) → 1s^2 (^1S_0), whereas the r (for resonance) line is linked to the transition 1s2p (^1P_1) → 1s^2 (^1S_0) and the i (for intercombination) line to the transition 1s2p (^3P_1) → 1s^2 (^1S_0). In stellar coronae, the ratio of the f and i lines mainly depends on the electron density. However, in the hot plasma surrounding massive stars, the UV radiation plays an important role by coupling the upper levels of these two lines, thereby reducing the f line in favor of the i line. Therefore, the ℜ = f/i ratio is a diagnostic of the dilution factor 0.5 [1 − √(1 − (R_*/r)^2)] of the UV radiation, i.e. a diagnostic of the distance r from the stellar surface where the X-rays are emitted (see Fig. 17). For example, the X-rays from ζ Pup and 9 Sgr apparently form at a few stellar radii, as predicted by the standard model, whereas the f/i ratio observed for θ¹ Ori C suggests that the X-ray emission arises very close to the photosphere (Rauw 2005b). Note that the G = (f + i)/r ratio is sensitive to the temperature of the gas. In addition, high-resolution spectroscopy also makes it possible to investigate the detailed morphology of the X-ray lines, leading to additional physical information. If the wind were optically thin without any absorption, the lines would appear flat-topped (see right part of Fig. 18). However, the wind absorbs part of the X-rays that it has emitted itself. This absorption will be larger if there is more material between the emission region and the observer, as is the case for X-rays emitted in the receding part of the wind. The absorption will thus affect the red part of the line more: the observed line will therefore appear blueshifted (see right part of Fig. 18). In addition, the wind expands with a large velocity, so that the lines should appear rather broad. Such broad, asymmetric lines are observed for ζ Pup and 9 Sgr. However, ζ Ori displays broad but symmetric lines (Fig. 18, Rauw 2005b). To explain this difference, Oskinova et al. (2005) have proposed to consider a porous wind consisting of optically thick clumps: although each clump can efficiently absorb the radiation, the X-rays can still escape freely by passing between them. Therefore, the absorption does not depend on the opacity of the wind (clumps), but on the spatial distribution of the clumps. If the clumps are tightly packed, the wind is nearly homogeneous and the lines will be skewed. On the other hand, if the clumps are rare and distant, the lines will be symmetric.
Finally, δ Ori presents narrow symmetric lines that could be due to a colliding wind phenomenon (see below), while the lines of θ¹ Ori C can be easily explained by the confined wind model (Rauw 2005b).

3. Spectroscopy of Massive Binaries

3.1. Colliding Winds: Introduction.

As we have seen before, massive stars blow dense and powerful stellar winds. If such stars belong to a binary system, a collision between the two winds is unavoidable. Since the winds are supersonic, the shock between them is a strong one and the gas will become very hot and dense after the collision. This phenomenon was predicted a few decades ago, but it has only been taken seriously in recent years, when the observational evidence began to accumulate. Some theoretical considerations will first be presented, before reviewing the observational data.

The temperature of the gas after the shock can be evaluated by T = (3/16) μ m_H v^2 / k, where μ is the mean molecular weight, m_H the mass of the hydrogen atom, k the Boltzmann constant and v the pre-shock wind velocity: for typical wind velocities of ∼2000 km s^{−1}, the temperature can reach 60 MK. To understand the physical properties of the post-shock gas, we can use the ratio between the characteristic timescale of radiative cooling and the time to escape the shock zone (Stevens et al. 1992), χ = t_cool/t_esc ≈ v_8^4 x_7 / Ṁ_{−7}, where v_8 is the pre-shock wind velocity in units of 1000 km s^{−1}, x_7 is the separation between the considered star and the stagnation point (the intersection of the contact surface with the axis joining the two stars' centers), expressed in 10^7 km, and the mass-loss rate Ṁ_{−7} should be given in 10^{−7} M_⊙ yr^{−1}. For values of χ close to or larger than one, the cooling does not have the time to play a role and the shock can be considered as adiabatic: the gas remains at a high temperature. In this case, no optical emission line should form in the collision zone. On the contrary, when χ << 1, the cooling is very efficient and the shock radiates a lot (i.e. emission lines over a broad range of wavelengths, including the optical domain, will now be generated in the shock zone). In this case, the shock zone is compressed and subject to many instabilities (see Fig. 19 and Stevens et al. 1992).

The geometry of the collision zone can also be derived rather easily, since the contact surface of the two winds corresponds to the equilibrium between the two wind ram pressures (Stevens et al. 1992 and Fig. 20), i.e.

ρ_1 v_1^2 cos^2 φ_1 = ρ_2 v_2^2 cos^2 φ_2.

Since the continuity equation implies Ṁ = 4π r^2 v ρ, the above relation can be re-written as

(Ṁ_1 v_1 / r_1^2) cos^2 φ_1 = (Ṁ_2 v_2 / r_2^2) cos^2 φ_2.

As we have seen before, the velocity in the wind follows a β-law, i.e. v(r) = v_∞ (1 − R/r)^β, and we can therefore write

(r_1/r_2) cos φ_2 = λ cos φ_1,   with   λ = R (1 − R_1/r_1)^{β_1/2} / (1 − R_2/r_2)^{β_2/2},

if we define the on-axis momentum ratio as R = [Ṁ_1 v_{∞,1} / (Ṁ_2 v_{∞,2})]^{1/2} (here R_1 and R_2 denote the stellar radii). In addition, we know that π/2 − φ_1 = β − θ_1 and π/2 − φ_2 = θ_2 − β (see Fig. 20), and therefore cos φ_1 = sin β cos θ_1 − cos β sin θ_1 and cos φ_2 = cos β sin θ_2 − sin β cos θ_2. The equation then becomes

r_1/r_2 = λ (tan β cos θ_1 − sin θ_1) / (sin θ_2 − tan β cos θ_2).

And finally, the equation of the contact surface is

dz/dx = tan β = (λ r_2^2 + r_1^2) z / [λ r_2^2 x + r_1^2 (x − d)].

If the winds have reached their terminal velocity before they collide, λ simplifies to R. In this specific case, the on-axis momentum ratio R can easily be physically interpreted. First, it makes it possible to locate the stagnation point. In fact, in this case, the two angles φ_1 and φ_2 are zero and R directly gives the ratio of the distances between the stagnation point and the stars. Therefore, the stagnation point is closer to the star with the weaker wind (see Fig. 21).
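The preceding paragraph can be turned into a small numerical sketch. Assuming the definition of R given above and winds that have reached their terminal velocities, the position of the stagnation point on the binary axis follows from r_1/r_2 = R together with r_1 + r_2 = d; the mass-loss rates, velocities and separation used below are arbitrary illustrative values.

```r
## Hypothetical wind parameters (star 1 = stronger wind).
Mdot1 <- 5e-6; v1 <- 2500    # mass-loss rate [Msun/yr] and terminal velocity [km/s]
Mdot2 <- 1e-7; v2 <- 2000
d <- 100                     # separation between the two stars [solar radii]

## On-axis wind momentum ratio (definition assumed as in the text above).
R <- sqrt((Mdot1 * v1) / (Mdot2 * v2))

## Stagnation point: r1/r2 = R and r1 + r2 = d.
r2 <- d / (1 + R)            # distance from star 2 (weaker wind)
r1 <- d - r2                 # distance from star 1 (stronger wind)
c(R = R, r1 = r1, r2 = r2)   # the stagnation point lies closer to the weaker-wind star
```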
In addition, R also gives the form of the shock, because the half-opening angle of the shock cone (in degrees) was found empirically to be ∼ 120 (1 − R^{−4/5}/4) R^{−2/3}. The shock thus wraps around the star with the weaker wind (see Fig. 21). In the above discussion, several effects have been neglected. For example, in some tight binary systems, there will be a deflection of the shock zone because of the orbital motion. Moreover, if the system is eccentric, the colliding wind (CW) phenomenon could appear only near periastron, i.e. when the stars are closer to each other. In addition, since the winds are driven by the scattering of UV photons, the wind of one star is affected by the presence of the radiation field of the other star. This effect is called 'radiative inhibition': the radiative pressure from a companion being able to slow the radiation-driven winds, a weaker CW shock will result. Finally, the asymmetry or clumpiness of the winds has also been neglected and only starts to be taken into account in some hydrodynamical simulations.

Figure 22. Definition of the axes for the tomographic analysis (see text).

3.2. Signature in the Visible Range.

A binary can be easily detected with spectroscopy, except if the system is seen face-on, since the lines regularly shift from the blue to the red side of the spectrum, in harmony with the orbital motion. If the moving lines of only one star are seen, one talks about an SB1 (spectroscopic binary with one component detectable); if the lines of both stars are observed, the system is called an SB2. Normally, all lines detected in the composite spectrum of the system belong to one star or the other but, in the case of a radiative CW, additional emission lines appear. Since these lines are not formed in or close to the photosphere of the stars, they do not follow the orbital motion of the system. Doppler tomography can help to better determine the properties of these peculiar emissions but, to apply that technique, a good spectroscopic coverage of the orbital cycle is crucial. Two versions of Doppler tomography are available: the simple S-wave analysis and the more sophisticated Doppler mapping (Rauw 2005a). Both require the definition of specific axes (x, y): x is the binary axis, from the primary to the secondary star; y is perpendicular to x, and in the direction of the motion of the primary star (see Fig. 22). These axes are not fixed but rather rotate with the orbital motion. If an emission component (either a discrete one or the position of the peak of a broad emission line) has fixed velocities in this reference frame, it will appear with a velocity v(φ) = V(φ) sin i = −V_x cos(2πφ) sin i + V_y sin(2πφ) sin i = −v_x cos(2πφ) + v_y sin(2πφ) throughout the orbital cycle (φ being the orbital phase). The above combination of a cosine and a sine generates a radial velocity curve that displays an S-shape in the dynamical spectrum, hence the name 'S-wave analysis'. In practice, one thus tries to fit an S-shaped function to the observed radial velocities of the chosen emission component. Then a Doppler map reporting the fitted (v_x, v_y) is created. This map generally also shows the positions of the stars in velocity space, i.e. (0, −K_1) for the primary and (0, K_2) for the secondary, considering that the system is not eccentric (the stars never get closer to or farther away from each other). The velocity equivalent of the Roche lobes can also be displayed, as is the case in Fig. 23.
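For the S-wave analysis described above, the fit for (v_x, v_y) is simply a linear least-squares problem in cos(2πφ) and sin(2πφ); the sketch below illustrates it on simulated radial velocities of an emission component (the phases, velocities and noise level are invented for the example).

```r
## Simulated radial velocities of an emission component over the orbital cycle.
set.seed(1)
phi <- seq(0, 0.95, by = 0.05)                     # orbital phases
vx.true <- 150; vy.true <- -80                     # 'true' component velocity [km/s]
rv <- -vx.true * cos(2 * pi * phi) + vy.true * sin(2 * pi * phi) +
      rnorm(length(phi), sd = 10)

## Least-squares fit of v(phi) = -v_x cos(2*pi*phi) + v_y sin(2*pi*phi).
fit <- lm(rv ~ 0 + I(-cos(2 * pi * phi)) + I(sin(2 * pi * phi)))
coef(fit)                                          # recovered (v_x, v_y) for the Doppler map
```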
To analyze broad emissions, a tool first developed for medical imaging must be used: the Radon transform. It is defined as g(s, θ) = ∫_{−∞}^{+∞} f(j, k) dt, where (s, t) and (j, k) are two orthogonal reference frames with an angle θ between j and s, i.e. j = s cos θ − t sin θ and k = s sin θ + t cos θ. In astronomical spectroscopy, g(s, θ) is actually I(v, φ), i.e. the intensity of a spectrum at a given velocity and a given phase, which results from the integration of the emissivity function along the line-of-sight through the (v_x, v_y) space. As a consequence, one does not use the Radon transform to get Doppler maps, but the inverse Radon transform. Practically, the spectra should first be filtered to suppress the high-frequency noise which would degrade the Point Spread Function of the resulting map. Next, for each observed phase, one must find the intensity in the spectrum at the velocity corresponding to each pair (v_x, v_y). Finally, the resulting intensities along the orbital cycle are added, using a weighting to take into account the different phase intervals covered by each observation (Rauw 2005a). If the emission corresponds preferentially to a specific region of the (v_x, v_y) space, a peak will appear at that position of the Doppler map; on the contrary, the signal at other velocities appears with random intensities and will therefore cancel out (see Fig. 24). While Doppler tomography can be very useful, it must be remembered that the Doppler maps display the velocity field. They are NOT 'usual' spatial maps, and should thus not be interpreted as such: components close in velocity space can actually be very distant in position. A classic illustration is the Doppler map of an accretion disk: the higher velocities are reached closer to the star, and therefore the inner (resp. outer) part of the disk will appear on the outside (resp. inside) in velocity space! In addition, the tomography should be applied with care to eclipsing binary systems (the relative importance of the components would then be biased). The same holds for systems where the emission arises outside the orbital plane (which is unfortunately the case for winds of massive stars).
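The phase-summed back-projection described above can be sketched in a few lines. The example below uses synthetic Gaussian line profiles, uniform phase weights and no pre-filtering, so it is only a toy illustration of the mapping step, not the Doppler-mapping code of Rauw (2005a).

```python
import numpy as np

# Hedged sketch of the back-projection step: for every (v_x, v_y) cell, sum the
# line-profile intensity I(v, phi) at v = -v_x cos(2*pi*phi) + v_y sin(2*pi*phi).
# Real analyses filter the spectra first and weight by phase coverage.
phases = np.linspace(0.0, 1.0, 12, endpoint=False)
vel_grid = np.linspace(-500.0, 500.0, 201)                 # km/s axis of the spectra

def synthetic_profile(phi, vx0=150.0, vy0=-60.0, width=40.0):
    """Gaussian emission bump moving as an S-wave (assumed toy model)."""
    centre = -vx0 * np.cos(2 * np.pi * phi) + vy0 * np.sin(2 * np.pi * phi)
    return np.exp(-0.5 * ((vel_grid - centre) / width) ** 2)

spectra = np.array([synthetic_profile(p) for p in phases])  # I(v, phi)

vx_axis = np.linspace(-400.0, 400.0, 81)
vy_axis = np.linspace(-400.0, 400.0, 81)
doppler_map = np.zeros((vy_axis.size, vx_axis.size))
for i, phi in enumerate(phases):
    # velocity sampled by each (v_x, v_y) cell at this phase
    v_proj = (-vx_axis[None, :] * np.cos(2 * np.pi * phi)
              + vy_axis[:, None] * np.sin(2 * np.pi * phi))
    doppler_map += np.interp(v_proj.ravel(), vel_grid, spectra[i]).reshape(v_proj.shape)

iy, ix = np.unravel_index(np.argmax(doppler_map), doppler_map.shape)
print(f"map peaks near v_x ~ {vx_axis[ix]:.0f} km/s, v_y ~ {vy_axis[iy]:.0f} km/s")
```

Because the emission was injected at a fixed point of the velocity plane, the summed map peaks near that point, while contributions at other velocities average out, exactly as argued above.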
On the other hand, other signatures of the CW phenomenon can also be detected. For example, in HD 152248 (O7.5III+O7III), Sana et al. (2001) discovered that the strength and width of some emission lines were varying (see Fig. 25). In fact, these emission lines were broader and stronger at quadrature (i.e. when the stars have maximum radial velocities) than at conjunction. This can easily be explained by the presence of a planar CW region just between the two stars. The emission components produced in this dense CW region would be occulted at conjunction phases: the equivalent width of the lines should then be smaller. In addition, since the shock is almost perpendicular to the axis of the system, the radial velocities of the particles escaping from the wind interaction region show a broader distribution when our line of sight is aligned with the interaction zone (i.e. at quadrature) than at conjunction. 3.3. Signature in the X-rays. In view of the high post-shock temperature, X-ray emission appears as an obvious signature of CW phenomena. Such signatures have indeed been found in many binaries, and they are generally characterized by: • Large X-ray luminosity. As CW represent an additional phenomenon, the X-ray luminosity of binary systems with CW should exceed the simple combination of the two individual X-ray luminosities of the massive stars. Therefore, when comparing X-ray luminosities to bolometric luminosities, CW systems appear above the 'classical' L_X − L_BOL relation (see Fig. 26). Figure 26. L_X − L_BOL relation for the NGC 6231 cluster. The two colliding wind binaries (circled) clearly lie above the canonical relation. Note that the X-ray emission of HD 326329 is probably contaminated. (From Sana et al. 2006a) • High temperature. The X-ray emission from massive stars generally displays a rather low temperature kT (about 0.5 keV). However, the emission from CW arises in hotter plasma and should therefore present higher kT. Non-thermal emission coming from inverse Compton scattering (see below) could also be observed in colliding wind binaries (CWB). • Modulation of the X-ray flux. As the binary system rotates, different regions come into the line-of-sight. A modulation of the X-ray flux is therefore expected due to the changing absorption of the CW emission. In eccentric systems, the variation of the separation d between the stars induces variations in the emitted X-ray flux (L_X ∝ v^{−3.2} d^{−1}, see Stevens et al. 1992). For example, in the eccentric binary system HD 93403 (O5.5I+O7V), modulations of the X-ray flux are observed in different energy ranges (see Fig. 27 and Rauw et al. 2002). The soft X-ray emission most likely arises in the outer regions of the individual stellar winds, and the variability in this energy range is probably associated with opacity effects. In the medium energy band, these effects are much smaller and the observed variation is consistent with a 1/d modulation, where d is again the separation between the stars. The short-period binary HD 152248 also presents modulations of the X-ray flux that could be reproduced to first order by hydrodynamical modelling (see Fig. 28 and Sana et al. 2004). Like these two systems, the binary γ² Vel (WC8+O7.5III) displays phase-locked variations of its X-ray flux. However, these variations are simply due to a changing opacity: when the shock cone around the O star is in the line-of-sight, the absorption is much smaller than at other phases, when the dense wind of the Wolf-Rayet star absorbs most of the X-ray flux (Willis et al. 1995). Figure 27. The top arrows indicate the direction of the observer's line of sight projected on the orbital plane, and the dashed and dotted circles correspond to the surfaces of optical depth unity for the primary wind at 0.5 keV and 1.0 keV respectively. Right: X-ray lightcurve of HD 93403 in the 1.0-2.5 keV (medium, top), 0.5-1.0 keV (soft, second from top) and 2.5-10.0 keV (hard, third) energy bands with 1-σ error bars. The last panel yields the relative orbital separation between the components of the system, while the lower panel provides the position angle of the binary axis (0° corresponding to the primary star being "in front" of the secondary). (From Rauw et al. 2002) Figure 28. Left: Schematic view of HD 152248 at the time of the six XMM-Newton pointings. The primary star is in dark grey while the secondary is represented in light grey. The arrows at left indicate the projection, on the orbital plane, of the line-of-sight. Right: Comparison of the observed X-ray luminosities with the results from the hydrodynamical simulations: filled squares represent the dereddened luminosities of the interaction region as predicted by the model; filled circles are the total predicted luminosities (including the expected intrinsic contribution from the two components of HD 152248); open circles show the observed dereddened luminosities. (From Sana et al. 2004)
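The 1/d modulation invoked above can be illustrated with a short orbital calculation: solve Kepler's equation for an assumed eccentric orbit and scale the intrinsic (adiabatic) CW flux as the inverse of the separation. The eccentricity and semi-major axis below are placeholder values, not the parameters of HD 93403 or HD 152248.

```python
import numpy as np

# Hedged illustration of the 1/d modulation: solve Kepler's equation for an assumed
# eccentric orbit, then scale the intrinsic (adiabatic) CW flux as 1/d.
ecc, a_au = 0.23, 1.0   # assumed eccentricity and semi-major axis

def separation(phase, e=ecc, a=a_au):
    """Star-star separation at a given orbital phase (phase 0 = periastron)."""
    M = 2.0 * np.pi * np.asarray(phase, dtype=float)   # mean anomaly
    E = M.copy()
    for _ in range(50):                                # Newton iterations for E - e sin E = M
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return a * (1.0 - e * np.cos(E))                   # separation in the same units as a

phases = np.linspace(0.0, 1.0, 9)
d = separation(phases)
flux = d.min() / d                                     # relative adiabatic CW flux, L_X ∝ 1/d
for p, di, fi in zip(phases, d, flux):
    print(f"phase {p:.3f}: d = {di:.2f} au, relative L_X = {fi:.2f}")
```

In a real system this intrinsic variation is further modulated by the changing absorbing column along the line of sight, which is why the soft and hard bands of HD 93403 behave differently.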
Finally, the eclipsing binary CPD-41° 7742 (O9V+B1-1.5V) also presents a modulation of the X-ray flux (see Fig. 29). However, this is a peculiar case of CW since the secondary star has a very weak wind. Therefore, it is expected that the wind of the primary directly crashes onto the photosphere of the secondary or that it suffers radiative braking, leading to a wind-wind interaction very close to the photosphere. Since the interaction region is close to the secondary star, its X-ray emission does not reach us when the secondary star is in front (only the back of the star is then observed) or when the primary occults its companion. Figure 30. Aperture masking interferometry has enabled the detection of a spiral-shaped emitting region around the Wolf-Rayet star WR104 (Tuthill et al. 1999). This region corresponds to dust forming in the CW shock and its shape can be explained as a combination of the orbital rotation and the outward motion of the wind (like a lawn sprinkler). (© Keck Obs.) Figure 31. VLA maps of the WR147 system (WN8+B0.5V) at 3.6 cm. The asterisks mark the positions of the two stars. The northern radio emission is clearly elongated and not centered on the star: it corresponds to the radio emission from the CW region. (From Contreras & Rodriguez 1999) 3.4. Other Wavelength Ranges. Colliding winds also have an impact on the spectrum at other wavelengths. When binaries composed of certain Wolf-Rayet stars are observed in the infrared (IR), for example, the formation of dust in the CW region can be detected. In eccentric systems, this dust formation is recurrent, since it appears only at specific orbital phases, and it leads to a modulation of the infrared emission (e.g. WR140, Williams et al. 1990). Another example is WR104 (WC9+late O-early B), a CW binary (CWB) surrounded by an IR pinwheel nebula (see Fig. 30) that rotates in harmony with the orbital period. Massive stars generally emit radio waves because of thermal free-free transitions in their winds. Such an emission is of the form F_ν ∝ ν^0.6. For some binaries, however, an additional, non-thermal emission can be observed. This radio emission is of the synchrotron type and is linked to the motion of relativistic electrons in the stellar magnetic field. The acceleration of electrons to relativistic velocities is probably achieved through the first-order Fermi mechanism. Consider a shock in a wind moving with a velocity V. A high-energy particle crossing the shock hardly notices it. But the downstream gas leaves the shock with a lower velocity (V/4) than the velocity at which the upstream gas enters the shock (V), and the particle gains on average a fractional energy of 0.5 V/c by simply crossing the shock downstream. The same particle can then be scattered back upstream by turbulence without any energy loss. A succession of such crossings can thus accelerate the particle to relativistic energies. However, the origin of the pre-existing high-energy particles, although their presence is necessary, is not yet explained theoretically.
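A rough feel for the efficiency of this mechanism can be obtained by iterating the average gain per crossing quoted above. The sketch below, with an assumed wind velocity, simply counts how many crossings are required to multiply the particle energy by a large factor; it is a back-of-the-envelope illustration, not a treatment of the full acceleration problem.

```python
import math

# Hedged back-of-the-envelope sketch of first-order Fermi acceleration: apply the
# average fractional energy gain of ~0.5 V/c per crossing quoted above and count how
# many crossings are needed to multiply the particle energy by 1e6.
C_KMS = 299_792.458
V_KMS = 2000.0                         # assumed pre-shock wind velocity [km/s]
gain = 0.5 * V_KMS / C_KMS             # fractional energy gain per crossing

target = 1e6                           # desired total energy amplification
n_crossings = math.log(target) / math.log(1.0 + gain)
print(f"gain per crossing ~ {gain:.2e}")
print(f"~{n_crossings:.0f} shock crossings to gain a factor of {target:.0e} in energy")
```

A few thousand crossings are enough under these assumptions, which is why a strong, long-lived wind-wind shock is such an efficient particle accelerator.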
On the other hand, it has been shown recently that only shocks between two winds are strong enough to accelerate the particles. This implies that all massive non-thermal emitters are binaries, a prediction that has also been confirmed recently by careful multiwavelength observations. Note that the radio emission arises in the outer regions of the wind, contrary to what is seen in X-rays, because the inner parts of the winds are opaque to radio waves. The radio emission from CW regions has even been imaged in some cases (see Fig. 31). Finally, the relativistic electrons can also give rise to non-thermal X- and γ-rays, in addition to the synchrotron radio radiation. In this case, inverse Compton scattering forces the relativistic electrons to give up part of their energy to ambient photons (e.g. the numerous stellar UV photons). These UV photons thus become high-energy radiation. In addition, relativistic protons can produce neutral pions when they interact with the ions in the densest regions of the wind; these pions subsequently decay into γ-rays. However, this type of emission has not yet been detected with certainty.
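In practice, the thermal and non-thermal radio contributions are separated through the spectral index α (with F_ν ∝ ν^α): a thermal wind gives α ≈ +0.6, whereas a synchrotron component flattens or even inverts the spectrum. The snippet below, with invented flux densities, shows the elementary two-frequency estimate; it is an illustration only, not a published analysis.

```python
import math

# Hedged sketch: estimate the radio spectral index alpha (F_nu ∝ nu^alpha) from flux
# densities at two frequencies and compare it with the thermal free-free value of +0.6.
# The example flux densities and the 0.3 decision threshold are invented for illustration.
def spectral_index(f1_mjy, nu1_ghz, f2_mjy, nu2_ghz):
    return math.log(f2_mjy / f1_mjy) / math.log(nu2_ghz / nu1_ghz)

alpha = spectral_index(f1_mjy=0.9, nu1_ghz=1.4, f2_mjy=0.6, nu2_ghz=8.4)
print(f"alpha = {alpha:.2f}")
print("consistent with a purely thermal wind" if alpha > 0.3 else
      "flat or negative index: likely a non-thermal (synchrotron) contribution")
```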
Ultrasonography in surveillance for hepatocellular carcinoma in patients with non-alcoholic fatty liver disease International guidelines recommend six monthly ultrasounds as the primary surveillance tool for patients at risk of hepatocellular carcinoma (HCC). The dominant driver of liver disease in HCC surveillance populations is shifting, particularly in Europe and the United States, from chronic viral hepatitis (B or C), towards non-alcoholic fatty liver disease (NAFLD). Today, the population requiring HCC surveillance is also characterised by a high prevalence of overweight/obesity. These patient characteristics significantly impair ultrasound quality which can impede the detection of early HCC lesions. This diagnostic limitation has significant implications considering that eligibility for curative treatment depends upon the stage at which the cancer is detected. In this narrative review, we provide a comprehensive overview of the published evidence and national/international guidelines regarding ultrasound surveillance for HCC in people with NAFLD. We examine ultrasound sensitivity in this cohort for the detection of all stage and early HCC, the impact of steatosis and abdominal obesity on ultrasound performance, evidence for the addition of serum alpha-fetoprotein measurement, optimal timing of surveillance, emerging modalities for risk stratification and screening, and outline the challenges of case finding and surveillance eligibility criteria in this INTRODUCTION Primary liver cancer (PLC) is the 11th leading cause of cancer globally and 8th leading cause of cancer death, accounting for 12.5 million disability-adjusted life years [1] . Hepatocellular carcinoma (HCC) accounts for approximately 90% of PLC cases [2] . HCC is strongly associated with age, male gender, diabetes and carriage of the patatin-like phospholipase domain-containing protein 3 (PNPLA3) rs738409 C c.444C >G minor allele for patients with non-alcoholic fatty liver disease (NAFLD) [3,4] . HCC incidence is 89% and 78% higher in males and females, respectively, in the most deprived quintile compared with the least [5] . Incidence rates in Europe and America have increased since the 1990s, although in recent years, these have plateaued, most likely as a result of the success of direct-acting antiviral therapy for hepatitis C virus (HCV) infection [6] . Modelling from the UK suggests that HCC incidence is expected to increase 40% before 2035; however, mortality rates are predicted to rise further, in stark contrast to those of other cancers [7] . The estimated increase in HCC incidence reflects the emergence of NAFLD as a leading cause of PLC, driven by the obesity and type 2 diabetes (T2D) epidemics [8,9] . Estes et al. forecast that the incidence of NAFLD HCC will increase by 137% by 2030 in the United States [10] . When one reflects on the widespread use and effectiveness of direct-acting antiviral therapy for HCV and the rise in vaccination against hepatitis B virus (HBV) infection, it is clear that NAFLD will become the dominant aetiological driver of HCC. Despite the potential for curative treatment, many patients with HCC are not eligible for this, either due to advanced tumour stage, poor liver function or low-performance status [11] . Early diagnosis of HCC is vital as the 1-yr survival is 78% (TNM stage 1) vs. 20% for those diagnosed at the latest stage (TNM stage 4) [12] . 
The importance of detection of small tumours translates into an increased likelihood of effective treatment and improvement in overall survival [13] . Better early detection of liver disease and HCC is a priority for health services [6,14] . Given that cirrhosis is the leading cause of HCC, international guidelines advise that all people with NAFLD-related cirrhosis are surveyed for HCC every 6 months via transabdominal ultrasonography (USS) [15][16][17] . While contrast-enhanced liver computed tomography (CT) and magnetic resonance imaging (MRI) have been shown to be more sensitive for HCC detection [18] , the cost, availability, and impact on diagnostic services would prohibit their use as a surveillance tool using the current model of six monthly screening in all at-risk patients [19,20] . Their use also requires the need for contrast agents and radiation exposure in the case of CT. In this narrative review, we explore the challenges for HCC surveillance in patients with NAFLD, the performance of USS for people with NAFLD compared to other subpopulations, and the latest evidence regarding surveillance timing, the use of alpha-fetoprotein (AFP), contrast-enhanced USS and cost-effectiveness data. We discuss emerging surveillance tools and provide a summary of best clinical practices. NON-ALCOHOLIC FATTY LIVER DISEASE NAFLD is the leading cause of chronic liver disease in Europe, with an overall prevalence significantly higher in men than in women (39.7% vs. 25.6%, P < 0.0001) [21] . NAFLD encompasses a spectrum of clinical entities progressing from simple hepatic steatosis to non-alcoholic steatohepatitis (NASH), hepatic fibrosis and cirrhosis. Movement between these stages is dynamic prior to the development of cirrhosis. Prevalence rates of NAFLD are estimated to be 10%-30% in the general population, 50%-90% in people with obesity and 56% in people with T2D [22,23] . NAFLD is a metabolic disease with insulin resistance as a principal pathophysiological defect [24] , and as such, a recent consensus group has proposed a change in nomenclature to metabolic-dysfunction associated fatty liver disease (MAFLD) characterised by liver steatosis (radiologically evident) and concomitant metabolic risk factors [25] . While excess fat accumulation in the liver itself, in many cases, may be of no long-term significance to liver health, hepatic steatosis can progress to liver fibrosis in up to 40% [26,27] . It is the presence of advancing liver fibrosis and cirrhosis that is associated with an increased risk of liver-related mortality including HCC [26,28,29] , and incident cardiovascular disease [30] . While traditionally, liver fibrosis was staged at liver biopsy, there has been a huge expansion over the last two decades in the use of non-invasive fibrosis tests [31] . These include simple algorithms (e.g., fibrosis-4 (FIB-4) score, NAFLD fibrosis score), serum biomarkers [e.g., the enhanced liver fibrosis (ELF) test], and shear wave elastography [transient elastography, TE (Fibroscan®) [32] , liver ultrasound elastography [33] , and magnetic resonance elastography] [34] . Sequential use of more than one test has been shown to reduce the need for liver biopsy [32] . In addition to the oncogenic risk posed by NAFLD, obesity and diabetes, observational studies suggest an elevated risk of HCC associated with the use of insulin and sulphonylureas [4,35] . There is also emerging evidence of an association between air pollution, NAFLD incidence [36] and HCC mortality [37] . 
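As a concrete illustration of the simple non-invasive fibrosis algorithms mentioned above, the widely published FIB-4 index can be computed from four routine values. The sketch below uses the standard formula and the commonly quoted 1.30 and 2.67 cut-offs (the >2.67 threshold is cited later in this review); it is illustrative only and not clinical guidance.

```python
import math

# Illustrative sketch of the FIB-4 index:
# FIB-4 = (age [years] * AST [U/L]) / (platelets [10^9/L] * sqrt(ALT [U/L])).
# Cut-offs of 1.30 and 2.67 are the commonly quoted thresholds; values are examples only.
def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def interpret(score):
    if score < 1.30:
        return "advanced fibrosis unlikely"
    if score > 2.67:
        return "advanced fibrosis likely; consider second-line testing (e.g. ELF, TE)"
    return "indeterminate; consider second-line testing"

score = fib4(age_years=58, ast_u_l=44, alt_u_l=52, platelets_10e9_l=180)
print(f"FIB-4 = {score:.2f}: {interpret(score)}")
```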
Interestingly the molecular signature associated with NAFLD-HCC has recently been found to differ from that associated with non-NAFLD HCC, including higher rates of the ACVR2A mutations (a potential tumour suppressant) [38] . CHALLENGES PRESENTED BY THE EMERGENCE OF NON-ALCOHOLIC FATTY LIVER DISEASE AS A LEADING CAUSE OF HEPATOCELLULAR CARCINOMA The emergence of NAFLD as a leading cause of HCC presents new and unique challenges for hepatologists, metabolic, obesity and diabetes physicians, oncologists and hepatobiliary surgeons. Disease burden Current and predicted prevalence levels of NAFLD means that the overall contribution of NAFLD to the global HCC burden is likely to surpass that of HCV [39] . Estimates of global trends in the burden of liver cancer using the methodology framework of the Global Burden of Disease study have identified that NASH has the fastest-growing age-standardised death rate for HCC [40] . A large cohort study from the United States Scientific Registry of Transplant Recipients has shown that 18% of individuals listed for a liver transplant, with an indication of HCC, have NASH (the 2nd most common cause after HCV), and that NAFLD is the fastest-growing cause of HCC in liver transplant candidates [41] . Case series from the UK (n = 632) [42] and US (n = 4,406) [43] have identified that 38% and 59% of cases of HCC may be attributable to NAFLD. While HCC incidence rates are similar between patients with NAFLD cirrhosis and HCV cirrhosis, NAFLD is associated with a comparatively low HCC risk compared to HCV overall [ Table 1] [44,45] . This has implications for the cost-effectiveness of USS-based HCC surveillance in this group. Presentation of HCC outside of a surveillance programme While overall survival appears comparable to other aetiologies of HCC, patients with NAFLD-HCC have been found to have a reduced disease-free survival than patients with non-NAFLD HCC [46] . This could be explained by the finding that patients with NAFLD-HCC are more likely to have presented with cancer detected outside of a surveillance programme (67.2% vs. 44.3% according to meta-analysis data), and with larger tumours (although overall Barcelona Clinic Liver Cancer (BCLC) stage is comparable) [46] . Several factors are likely to account for this. Two meta-analyses found that 38%-39% of patients with NAFLD-HCC were not cirrhotic at presentation vs. 14%-15% for other aetiologies of chronic liver disease [46,47] . This may occur as a result of genetic and oncogenic factors related to obesity, T2D, liver steatosis and hepatic oxidative stress [48] . Furthermore, a significant proportion of patients with NAFLD presenting with HCC, Table 1 Low sensitivity of USS for the detection of early HCC As discussed below, ultrasound sensitivity for the detection of HCC in people with NAFLD is suboptimal, largely as a result of central obesity and the presence of hepatic steatosis. Reliance on this imaging modality for HCC surveillance, therefore, places patients with NAFLD at a disadvantage. PERFORMANCE OF ULTRASOUND FOR DETECTION OF HEPATOCELLULAR CARCINOMA IN PATIENTS WITH CIRRHOSIS (ALL AETIOLOGIES) AND CHRONIC HEPATITIS B VIRUS INFECTION USS has many favourable attributes as a surveillance tool: there are no associated risks, costs are moderate, and it has high levels of acceptability and achieves high sensitivities for the detection of HCC in certain patient groups (slim, non-cirrhotic). 
It is also possible to comment on features of cirrhosis and portal hypertension, and where Doppler is used, USS can detect portal vein thrombosis, all of which will guide the appropriate management of patients with HCC. Challenges include the detection of very early HCC, where there is a single cancerous nodule ≤ 2 cm (BCLC stage 0), and those that meet the Milan criteria (one nodule < 5 cm or three nodules each < 3 cm in diameter, without gross vascular invasion). The distinction between HCC and regenerating nodules, found in patients with cirrhosis, is also not possible on standard USS. Finally, the experience of the operator is another factor affecting the usefulness of ultrasound in this setting. The use of USS for HCC surveillance was first recommended based on a landmark randomised trial in China of 18,816 patients with hepatitis B or chronic hepatitis [50]. The intervention group received six-monthly USS plus AFP and these individuals experienced a 37% reduction in mortality. There have been no randomised trials in Western countries. A meta-analysis of prospective studies published up to 2007 confirmed that USS detected the majority of tumours before they presented clinically (94% sensitivity); however, it was less effective for the detection of early HCC (63% sensitivity) [51]. Biannual surveillance increased the sensitivity to 70% for detecting early-stage HCC [51]. In these studies, there was a mixed aetiology of liver disease, although the majority of patients had chronic HCV infection [51]. A systematic review of 14 cross-sectional studies reported that USS had a high level of specificity for HCC detection, but a sensitivity of only 60% [52]. Where studies used explanted liver as the gold standard, sensitivities for USS ranged from 58% to 89% in populations from the United States [53][54][55]. An analysis of 202 patients who received a liver transplant for HCC due to mixed aetiologies reported that USS had a 46% sensitivity for HCC detection compared to 65% for CT and MRI [18]. For lesions less than 2 cm, sensitivity values were just 21% (USS), 40% (CT) and 47% (MRI) [18]. In 2018, Tzartzeva et al. published an updated meta-analysis of 32 studies (1990-2016; 13,367 patients) and identified that USS detected any stage of HCC with an 84% sensitivity (95%CI: 76%-92%) and early-stage HCC with a sensitivity of just 47% (95%CI: 33%-61%) [56]. The aetiology of liver disease in this group was mixed, and it is unclear what proportion of people had NAFLD or were overweight. In the absence of randomised controlled trials, a large number of observational studies have attempted to determine if biannual USS surveillance for HCC leads to a survival benefit. In 2022, Singal et al. performed a meta-analysis analysing the harms and benefits of ultrasound surveillance in people with cirrhosis (59 studies, 2014-2020; 145,396 participants) [57]. HCC surveillance was found to be associated with improved early-stage detection (OR 1.86, 95%CI: 1.73-1.98), receipt of curative treatment (OR 1.83, 95%CI: 1.69-1.97) and overall survival (HR 0.67, 95%CI: 0.61-0.72) [57]. However, only two studies examined whether HCC surveillance was associated with improved outcomes in people with NAFLD [57]. Four studies reported on harms associated with surveillance. These occurred in 8.8%-27.5% of patients and were mostly mild in severity [57].
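For readers less familiar with how the sensitivity figures and confidence intervals quoted above are obtained, the minimal sketch below computes a sensitivity and a 95% Wilson interval from counts of detected versus missed early-stage cancers. The counts are invented purely to illustrate the arithmetic and do not correspond to any study cited here.

```python
import math

# Hedged sketch: sensitivity and a 95% Wilson confidence interval from counts of
# detected (true positive) vs. missed (false negative) early-stage HCCs. Counts are invented.
def sensitivity_with_ci(true_pos, false_neg, z=1.96):
    n = true_pos + false_neg
    p = true_pos / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

sens, lo, hi = sensitivity_with_ci(true_pos=140, false_neg=160)
print(f"sensitivity = {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```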
PERFORMANCE OF ULTRASOUND FOR THE DETECTION OF HEPATOCELLULAR CARCINOMA IN THE CONTEXT OF NON-ALCOHOLIC FATTY LIVER DISEASE AND OBESITY The study populations from the earliest trials looking at the effectiveness of USS surveillance for HCC [50,58] are not truly representative of today's HCC surveillance population, particularly in the West. Specifically, they were based in Asian populations where chronic hepatitis B and C viruses were the predominant drivers of cancer. Patients in these studies had low rates of central obesity and liver steatosis, both of which are known to impair ultrasound image quality [59] . In addition, not all patients in these cohorts had cirrhosis [50] , which can result in difficulties differentiating between an early HCC and regenerating nodules. In terms of cost-effectiveness, the annual incidence of HCC in populations with high rates of chronic viral hepatitis is over 5%; this is greater than that for NAFLD cirrhosis (approximately 0.7%-4.0%) [60][61][62] . As a result of the factors mentioned above, over 50% of tumours detected in these cohorts were small, leading to a greater than 50% five-year survival for patients who underwent surgery [63] . Survival rates may also have been positively influenced by the fact that patients with non-NAFLD causes of HCC are generally younger with fewer comorbidities than patients with NAFLD-HCC [46] . The obesity epidemic began in the 1970s and has continued to rise exponentially; 42.4% of Americans currently have a body mass index (BMI) > 30 kg/m 2 [64] . The natural history of NAFLD, with potential progression towards cirrhosis and HCC [65] , has meant that it is only in the last two decades that end-stage complications of chronic liver disease including HCC have been widely observed in this group. The rise of obesity and NAFLD also has significant implications for HCC surveillance. Reliance on USS for HCC surveillance presents challenges in the NAFLD population, 51% of whom are living with obesity [66] . The depth of the subcutaneous fat, with the inherent acoustic attenuation, leads to poor image definition, impairing earlier HCC detection. An example of how central obesity can obscure visualisation at ultrasound is shown in Figure 1 in comparison to images from a patient who is lean [ Figure 2]. Recent cohort studies from Europe and America report that USS is inadequate (defined as detection of HCC outside of the Milan criteria) in 20.3%-32.2% of people [59,67] . In a retrospective cohort study of 941 patients, a higher BMI, NASHrelated cirrhosis, Child-Pugh B or C cirrhosis and alcohol-related cirrhosis were all independently associated with inadequate USS quality in multivariable analysis [59] . Inadequate quality was observed in 9.3%, 18.0%, 22.8%, 35.5% and 39.3% of people of normal weight, overweight, obesity class I, obesity class II, and morbid obesity, respectively [59] . USS exams were inadequate in 34.6% of patients with NASH-related cirrhosis, in comparison to 15.0% of patients with other aetiologies of cirrhosis [59] . These factors lead to the under-recognition of small or early-stage HCC nodules. These findings were confirmed by Esfeh et al., who report that USS has a sensitivity for HCC detection of 59% in people without obesity and 19% for people with obesity in a population of 116 liver transplant recipients [68] . In a similar study of 352 consecutive patients undergoing liver transplant assessment for HCC, the univariate analysis identified obesity (sensitivity 76% for BMI ≥ 30 kg/m 2 vs. 
87% for BMI < 30 kg/m 2 , P = 0.01), and an aetiology of NAFLD (sensitivity 59% vs. 84%; P = 0.02) [69] . In 2021, Kim et al. performed a meta-analysis to evaluate the incidence of USS surveillance failures in detection of early-stage HCC and to determine risk factors for this [70] . [70] . The authors, therefore, concluded that the use of USS for HCC surveillance appears limited. As part of their recent meta-analysis described above, Singal et al. performed a subgroup analysis of studies stratified according to the proportion of patients with NAFLD [57] . The researchers report similar point estimates for the association between surveillance and early detection of HCC (relative risk, RR 1.86, 2.23, and 2.04, respectively) and receipt of curative treatment (RR 1.79, 2.06, and 2.02, respectively) for studies with < 10%, 10%-20% and > 20% of patients with NAFLD [57] . Only two studies out of 59 specifically examined the benefits of HCC surveillance in the NAFLD cohort. Aby [72] . SCREENING SCHEDULE FOR ULTRASOUND SURVEILLANCE The literature suggests that it takes approximately 4-12 months for an undetectable lesion to reach 2 cm in size [73,74] . This has helped inform a six-monthly surveillance interval for people with cirrhosis with the aim of detecting tumours less than 3 cm in diameter. A 12-month interval has been shown to result in reduced detection rates for early cancer and reduced survival [75] . A multi-centre randomised trial reported no benefit for earlier detection of cancer with a three-monthly vs. six-monthly regime in people with cirrhosis, although this was found to be limited by the recall procedure [76] . Unfortunately, patients with NAFLD cirrhosis did not meet the inclusion criteria for this trial. COST-EFFECTIVENESS OF ULTRASOUND SURVEILLANCE FOR HEPATOCELLULAR CARCINOMA DETECTION Markov modelling suggests that 6-monthly USS surveillance for HCC in people with cirrhosis increases quality-adjusted life expectancy by 8.6 months overall and 3.5 years in patients with small treated tumours [20] . Biannual USS surveillance had an incremental cost-effectiveness ratio of $30,700 per qualityadjusted life year (QALY) gained and was more cost-effective than annual USS, biannual USS with AFP, annual/biannual CT, and annual MRI, using a threshold of $50,000 per QALY gained [20] . Where USS sensitivity dropped below 65%, or the specificity of AFP exceeded 95%, combined USS and AFP surveillance was preferred [20] . The incremental cost-effectiveness ratio of biannual CT and annual MRI consistently exceeds $100,000/QALY [20] . ADDITION OF ALPHA-FETOPROTEIN TO ULTRASOUND SURVEILLANCE Elevated serum AFP concentration, particularly sustained high levels, can indicate a possible HCC; however, there is only weak evidence to support its use in HCC surveillance. In a study of 88 patients, the inclusion of AFP in the surveillance protocol led to the detection of an additional 6%-8% of HCC cases vs. those detected by USS alone [77] . Poor performance relates to the fact that up to 24% of early HCCs present with normal AFP levels (particularly patients without cirrhosis/normal transaminase levels, as frequently occurs with NAFLD-HCC) [78] . Indeed a greater proportion of people with NAFLD-related HCC are AFP non-secretors compared to people with other drivers of liver cancer [79,80] . Furthermore, AFP levels fluctuate with active liver inflammation leading to false positive results with viral and alcoholic hepatitis. In a metaanalysis by Singal et al. 
(2009, 1,116 patients), the pooled sensitivity of USS for the detection of early HCC increased from 63% to 69% with the addition of AFP (although this did not reach statistical significance) [51]. A more recent meta-analysis (Tzartzeva et al., 2018, 13,367 patients) reported that AFP measurement improved sensitivity rates from 45% to 63% for early HCC; however, USS alone detected HCC with a higher specificity than USS plus AFP [56]. Few studies have looked at the use of AFP for surveillance, rather than as a diagnostic test. The only randomised trial of this type used AFP to see if this could lead to early cancer detection in a population of Chinese men who were hepatitis B surface antigen positive [81]. The study found that screening with AFP led to earlier diagnosis of HCC, but did not result in an overall reduction in mortality. There is an important evidence gap to examine the effectiveness and cost-effectiveness of AFP in addition to USS for surveillance in Western populations, including patients with cirrhosis and NAFLD. The American Association for the Study of Liver Diseases (AASLD) guidelines state that it is not possible to determine whether USS alone or the combination of USS plus AFP leads to a greater improvement in survival [16]. Similarly, the European Association for the Study of the Liver (EASL) concludes that "insufficient data are available regarding the diagnostic accuracy of AFP in patients with adequate treatment of the aetiological cause of liver disease making any calculation of the cost-effectiveness impossible to date" [15]. In contrast, the Asia-Pacific practice guidelines do recommend the use of AFP in addition to USS for surveillance, with the caveat that AFP is not recommended as a confirmatory test in small HCCs and the AFP cut-off value should be 200 ng/mL, although a lower value can be used in a population with hepatitis virus suppression or eradication [17]. While no specific recommendations have been made regarding people with NAFLD, the clinical utility of AFP in this setting is thought to be low [15,16]. AFP does, however, play an important prognostic role in patients with established HCC [82,83]. CONTRAST-ENHANCED ULTRASOUND FOR HEPATOCELLULAR CARCINOMA SURVEILLANCE There has been an interest in using intravenous contrast agents (stabilised microbubbles containing air or other gases) to enhance the performance of USS for the detection of early HCC. These contrast agents are safe and are not renally cleared (unlike the iodinated and gadolinium-based agents used for CT and MRI). Contrast-enhanced USS (CEUS) allows real-time dynamic imaging (performed continuously for the first minute), permitting the detection of arterial neoangiogenesis. This is followed by intermittent scanning every 30-60 s for 5 min to examine washout. The degree and time of onset of the washout can help discriminate HCC (mild, late washout) from intrahepatic cholangiocarcinoma and other non-HCC tumours (marked, early washout) [84]. CEUS also benefits from the fact that arterioportal shunts seen at CT and MRI are not visible, and in the setting of cirrhosis, any lesion demonstrating arterial enhancement is likely to be malignant or premalignant [84]. CEUS does not enhance the ability of USS to detect small tumour foci, however [85]. This may be related to the fact that a comprehensive assessment of the whole liver cannot be performed during the short arterial phase. Also, not all well-differentiated HCCs show arterial enhancement [86].
CEUS is more expensive than non-contrast ultrasound, requiring expertise and specialised equipment. For the characterisation of known focal liver lesions, costs are comparable to CT, but CEUS is more cost-effective than MRI [87] . Therefore, CEUS is not recommended for surveillance but for diagnostic purposes in patients at high risk of HCC [15,16,84] . EMERGING TOOLS FOR HCC SURVEILLANCE There has been some interest in adapting existing CT and MRI protocols to improve their acceptability as surveillance tools. In a prospective single-armed study, biannual two-phase low-dose CT has been trialled for HCC surveillance, which showed significantly higher levels of sensitivity (83.3% vs. 29.2%) than USS [88] . Abbreviated MRI examination involves using a shortened MRI protocol with fewer sequences, specifically designed to detect early-stage HCC [89] . Three strategies have been trialled: (i) non-contrast MRI; (ii) diffusion-weighted imaging, dynamic contrast-enhanced; and (iii) hepatobiliary phase contrastenhanced abbreviated MRI. These techniques were evaluated in a meta-analysis which included three prospective and 12 retrospective studies: 2,807 patients, 917 with HCC. Abbreviated MRI achieved high levels of sensitivity (69% for tumours < 2 cm, 86% for tumours ≥ 2 cm; 82% overall vs. 53% for USS) [90] . Non-contrast and contrast abbreviated MRI were comparable in terms of sensitivity (86% vs. 87%) and specificity values (94% vs. 94%) [88] . Non-contrast MRI is comparable to USS in terms of cost and has the benefit of not requiring exposure to contrast or radiation; it may therefore prove to be an effective surveillance tool in patients with suboptimal USS imaging. We await the results of key trials comparing non-contrast MRI with USS for HCC surveillance (NCT02551250, MAGNUS-HCC; NCT04455932; NCT02514434, MIRACLE-HCC) to inform whether non-contrast MRI offers a benefit in terms of increased sensitivity for detection of early lesions and survival. These studies will also suggest an optimal surveillance schedule for this technique. Substantial investment is being directed towards exploring the use of biomarkers for detecting early disease. Published examples include the GALAD score (comprising age, gender, AFP, the lens culinaris agglutininreactive fraction of AFP, and Des-gamma-carboxy prothrombin (DCP) [also known as prothrombin induced by vitamin K absence II (PIVKA II)], which has been shown to detect any stage HCC in patients with NASH with an area under the curve (AUC) of 0.96, and HCC within the Milan criteria with an AUC of 0.91 (sensitivity of 68% and specificity of 95%) [91] . However, it requires specialist tests and has not yet been prospectively validated, although trials are underway (NCT05342350). DCP has been shown to have a sensitivity and specificity of 71% and 84%, respectively, for the detection of HCC [92] ; however, DCP levels are associated with more advanced tumour stage and portal vein invasion, a limitation for early detection [93] . Recent observational studies from Japan have identified that IgM-free apoptosis inhibitor of macrophage (AIM) serum levels are a sensitive diagnostic marker for NASH-HCC, and that AIM activation appears before HCC is diagnostically detectable [94,95] . Larger validation studies are required, however. Another approach is a liquid biopsy [the analysis of tumour components, particularly circulating tumour DNA (ctDNA), circulating tumour cells or extracellular vesicles]. 
Oncoguard Liver (a composite of three ctDNA methylation genes, AFP and gender) has demonstrated high levels of sensitivity and specificity [96] . The HelioLiver Test (a multi-analyte blood test combining ctDNA methylation panel, clinical variables, and protein tumour markers) has demonstrated an AUC of over 0.95 in a phase II study [97] and is currently undergoing prospective validation. Liquid biopsy is prone to detection errors, however, as ctDNA generally comprises < 2% of circulating DNA and less in early HCC. In terms of risk stratification, the aMAP score was developed using data from prospective studies and randomized control trials and was found to be predictive of HCC at five years in patients with different aetiologies of liver disease; however, only 5% had NAFLD [98] . The "HCC risk score" developed using the Veterans Affairs healthcare dataset identifies patients with NAFLD cirrhosis at risk of HCC, but is not validated in people without cirrhosis [99] . Given the association noted between PNPLA3 and HCC [100] , there has been an interest in whether this may serve as a useful tool for risk stratification. However, this association has been found to be less significant in patients with NAFLD [101] , and a recent analysis of the performance of a polygenic risk score incorporating PNPLA3 reported only moderate accuracy in predicting which patients with NAFLD are at greatest risk of HCC [102] . The prognostic liver signature (PLS)-NAFLD, a 133 gene signature, has been shown to be predictive of HCC risk but is currently limited by availability and cost [103] . Ultimately better tools are needed to do risk stratification of individuals at risk of HCC and to tailor testing, and perhaps the time interval for surveillance accordingly to support a move towards precision screening for HCC [104] . Given the experience and time spent on HCC surveillance in many countries, improved patient selection and testing is urgently required. Consortia, including non-invasive biomarkers of metabolic liver disease (NIMBLE), Liver Investigation: Testing Marker Utility in Steatohepatitis (LITMUS) and early detection of hepatocellular liver cancer (DeLIVER), are pursuing the discovery of novel biomarkers for use in this clinical context, and may be used with USS, or when USS performance is suboptimal. Shear wave elastography, a non-invasive marker for the prediction of liver fibrosis that uses a normal Bmode ultrasound probe, has been shown to be predictive of HCC risk in hepatitis B and C virus infection [105,106] . In addition, it may be clinically useful in distinguishing benign and malignant lesions [107] . Further validation is required, however. Finally, while newer USS devices have the additional capacity to measure grades of liver steatosis [108] , it is unclear how this would benefit the HCC surveillance population where the majority of patients will have cirrhosis. BEST CLINICAL PRACTICE International guidelines acknowledge the limited performance of USS in patients with central obesity and marked parenchymal heterogeneity. However, USS remains the primary recommended imaging technique for HCC surveillance [ Table 2], considering its high sensitivity in the absence of these factors, safety and proven cost-effectiveness. The AASLD guidelines advise clinicians to utilise CT or MRI, with or without AFP, in patients every six months where USS is documented to be inadequate [16] . 
The latest American Gastroenterology Association (AGA) clinical practice update on screening and surveillance for HCC in patients with NAFLD supports this approach and emphasises that the adequacy of USS should be consistently reported, including descriptions of parenchyma heterogeneity, visualization of the entire liver, and beam attenuation, as these factors may be impaired in the presence of obesity [109] . These recommendations reflect the USS LI-RADS (Liver Reporting and Data System) visualisation scores (A -No or minimal limitation; B -Moderate limitation, the examination may obscure small masses; and C -Severe limitation, the examination may miss focal liver lesions) [110] . CT or MRI surveillance is advised where USS quality is graded as C, or in some cases B [109] . However, this grading system has not been validated and there is some uncertainty around the approach to patients in category B. The cost-effectiveness of this strategy has not been investigated, and this could have significant implications for health care providers and systems, as the number of patients qualifying for cross-sectional imaging is likely to continue to grow over the coming years [99] . We propose a pragmatic approach based on practice in our centre, as described in Figure 3. We encourage documentation of the USS quality as either "satisfactory" or "suboptimal" for the detection of focal lesions [Box 1]. With suboptimal imaging, we suggest patients fit for curative treatment, instead receive an abbreviated MRI scan (we await the results of ongoing trials to determine if a 6-or 12-monthly surveillance schedule is preferable). Where MRI is contraindicated, we suggest physicians consult their radiology department regarding the use of CT and the optimal timing of this on a case-by-case basis. The effectiveness and cost-effectiveness of this approach have not been tested within a randomised control trial; the authors believe that this approach should be a priority for future research. SHOULD NON-ALCOHOLIC FATTY LIVER DISEASE PATIENTS WITHOUT CIRRHOSIS UNDERGO HEPATOCELLULAR CARCINOMA SURVEILLANCE? There is currently insufficient evidence to recommend HCC surveillance in patients with NAFLD without cirrhosis [111] . On the one hand, the overall risk of HCC is extremely low (15% HCC incidence at 10 years for people with NAFLD cirrhosis, vs. 2.7% for those without cirrhosis) [112] , on the other hand, up to 40% of people with NAFLD develop HCC in the absence of cirrhosis [46] . People with NAFLD without cirrhosis are younger, have better liver function, and have fewer comorbidities and better performance status than those with NAFLD with cirrhosis, increasing their likelihood of receiving curative treatment if their tumour was diagnosed at an early stage. They would also have a lower probability of death from decompensated liver disease, extrahepatic cancer, and cardiovascular disease during surveillance. It is [17] • Surveillance for HCC should be undertaken in high-risk groups of patients (cirrhotic hepatitis patients & chronic HBV carriers) • The combination of USS and serum AFP measurement •performed biannually should be used as a surveillance • strategy for HCC impractical to survey all patients with NAFLD for HCC; thus, more comprehensive risk stratification within this group would be beneficial. Liver fibrosis is a key risk factor for HCC in people with NAFLD. In a prospective study, the HCC incidence rate per 1,000 person-years was found to be 0.34 for advanced fibrosis vs. 
0.04 for nil or minimally significant fibrosis defined histologically [113] . A similar association has been found where non-invasive serum fibrosis tests are used. Analysis of data from the Veterans Health Administration identified that a FIB-4 score > 2.67 is predictive of high incidence rates of HCC (0.39 per 1,000 person-years vs. 0.04 per 1,000 person-years in those with a persistently low FIB-4) [114] . There is also evidence of a positive relationship between liver stiffness measurement obtained at FibroScan and disease activity [115] , including HCC incidence [116] . Guidelines disagree about the merits of HCC surveillance in patients with non-cirrhotic NAFLD [ Table 2]. EASL, in contrast to the American (AASLD) and Asia-Pacific guidelines, recommends people with metabolic syndrome or NASH affected by severe fibrosis should undergo surveillance [15][16][17] . EASL state that all "non-cirrhotic fibrosis stage 3 patients, regardless of aetiology, may be considered for surveillance based on an individual risk assessment" [15] . The recent AGA update on HCC surveillance in NAFLD recommends HCC screening in those with evidence of "advanced liver fibrosis or cirrhosis" determined by combining at least two non-invasive tests: point-of-care tests (e.g., FIB-4 score), specialised blood tests (e.g., ELF test ), imaging-based tests (e.g., TE) [109] . The cut-offs selected (16.1 kPa for TE) are within the cirrhotic range, however [117] . This approach is largely driven by clinical consensus opinion as there is minimal evidence to support these cut-offs, or the use of non-invasive tests in this setting currently [104] . CONCLUSION Mortality rates from HCC are increasing, driven largely by the continued rise in the prevalence of NAFLD in many countries. Outcomes for people with HCC are strongly associated with early detection; thus, optimisation of HCC surveillance techniques including USS is a major priority for research in this field. Patients with NAFLD are under-represented in HCC surveillance compared to other aetiologies of liver disease due to high rates of undetected disease in the community, and a higher prevalence of patients presenting with HCC in the absence of cirrhosis. Better tools are needed to help identify patients with NAFLD at risk of HCC. USS may be suboptimal for early disease detection for patients with obesity and NAFLD. Guidance from the AGA on recording the image quality of USS should be instituted, and additional imaging, with abbreviated MRI (or CT where MRI is contraindicated), should be decided on a case-by-case basis. Novel approaches, including the GALAD and AMAP score, in addition to other biomarkers, still require further evaluation prior to becoming part of routine surveillance.
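As a closing, purely illustrative sketch of the selection and triage logic discussed in this review, the snippet below combines the "at least two concordant non-invasive tests" rule (using the FIB-4 > 2.67 and TE ≥ 16.1 kPa cut-offs quoted above) with the proposed routing from documented USS quality to abbreviated MRI or CT. It is a schematic of the described pathway, not clinical guidance; thresholds and pathways should follow local protocols.

```python
# Hedged sketch of the surveillance pathway discussed above: case selection via two
# concordant non-invasive fibrosis tests, then imaging-modality triage based on the
# documented USS quality ("satisfactory" vs "suboptimal"). Illustration only.
def eligible_for_surveillance(cirrhosis, fib4, te_kpa):
    if cirrhosis:
        return True
    return (fib4 > 2.67) + (te_kpa >= 16.1) >= 2   # two concordant non-invasive tests

def surveillance_modality(uss_quality, mri_contraindicated=False):
    """uss_quality: 'satisfactory' or 'suboptimal', as proposed for routine reporting."""
    if uss_quality == "satisfactory":
        return "6-monthly USS (with or without AFP)"
    return "CT (timing per radiology advice)" if mri_contraindicated else "abbreviated MRI"

if eligible_for_surveillance(cirrhosis=False, fib4=3.1, te_kpa=17.0):
    print(surveillance_modality(uss_quality="suboptimal"))
```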
ON JOINTLY ANALYZING THE PHYSICAL ACTIVITY PARTICIPATION LEVELS OF INDIVIDUALS IN A FAMILY UNIT USING A MULTIVARIATE COPULA FRAMEWORK Supported by a grant from the U.S. Department of Transportation, University Transportation Centers Program. Abstract The report focuses on analyzing and modeling the physical activity participation levels (in terms of the number of daily "bouts" or "episodes" of physical activity during a weekend day) of all members of a family jointly. Essentially, we consider a family as a "cluster" of individuals whose physical activity propensities may be affected by common household attributes (such as household income and household structure) as well as unobserved family-related factors (such as family life-style and health consciousness, and residential location-related factors). The proposed copula-based clustered ordered-response model structure allows the testing of various dependency forms among the physical activity propensities of individuals of the same household (generated due to the unobserved family-related factors), including non-linear and asymmetric dependency forms. The proposed model system is applied to study physical activity participations of individuals, using data drawn from the 2000 San Francisco Bay Area Household Travel Survey (BATS). A number of individual factors, physical environment factors, and social environment factors are considered in the empirical analysis. The results indicate that reduced vehicle ownership and increased bicycle ownership are important positive determinants of weekend physical activity participation levels, though these results should be tempered by the possibility that individuals who are predisposed to physical activity may choose to own fewer motorized vehicles and more bicycles in the first place. Our results also suggest that policy interventions aimed at increasing children's physical activity levels could potentially benefit from targeting entire family units rather than targeting only children. Finally, the results indicate strong and asymmetric dependence among the unobserved physical activity determinants of family members. In particular, the results show that unobserved factors (such as residence location-related constraints and family lifestyle preferences) result in individuals in a family having uniformly low physical activity, but there is less clustering of this kind at the high end of the physical activity propensity spectrum. Introduction The potentially serious adverse mental and physical health consequences of obesity have been well documented in epidemiological studies (see, for instance, Nelson and Gordon-Larsen 2006; Ornelas et al. 2007). While there are several factors influencing obesity, it has now been established that a low level of physical activity is certainly an important contributing factor (see Haskell et al. 2007; Steinbeck 2008). Moreover, earlier studies in the literature strongly emphasize the importance of physical activity even in non-obese and non-overweight individuals from the standpoint of increasing cardiovascular fitness, improved mental health, and decreasing heart disease, diabetes, high blood pressure, and several forms of cancer (USDHHS 2008; Center for Disease Control (CDC) 2006). But, despite these well-acknowledged benefits of physical activity, a high fraction of individuals in the U.S. and other developed countries lead relatively sedentary (or physically inactive) lifestyles.
For instance, the 2007 Behavioral Risk Factor Surveillance System (BRFSS) survey suggests that about a third of U.S. adults are physically inactive, while the 2007 Youth Risk Behavior Surveillance survey indicates that about 65.3 percent of high school students do not meet the current physical activity guidelines. 1 The low level of physical activity participation in the U.S. population has prompted several research studies in the past decade to examine the determinants of physical activity participation, with the objective of designing appropriate intervention strategies to promote active lifestyles. However, as we discuss later, most of these studies focus on adult physical activity participation or children"s/adolescents" physical activity participation, without explicitly considering family-level interactions due to observed and unobserved factors in the physical activity participation levels of all individuals (adults and children/adolescents) of the same family. In this regard, the current paper focuses on analyzing and modeling the physical activity participation levels (in terms of the discrete choice of the number of daily "bouts" or "episodes" of physical activity) of all members of a family jointly. Essentially, we consider a family as a "cluster" of individuals whose physical activity levels may be affected by 3 common household attributes (such as household income and household structure) as well as unobserved family-related factors (such as residential location-related constraints/facilitators of physical activity and/or family life-style and health consciousness factors). Ignoring such family-specific interactions due to unobserved factors (also referred to as unobserved heterogeneity in the econometric literature) will, in general, result in inconsistent estimates regarding the influence of covariates and inconsistent probability predictions in discrete choice models (see Chamberlain 1980;Hsiao 1986). This, in turn, can lead to misinformed intervention strategies to encourage physical activity. The joint generation of physical activity episodes at the household level is also important from an activity-based travel modeling perspective. As discussed by Copperman and Bhat (2007a), much of the focus on activity generation (and scheduling) and inter-individual interactions in the activity analysis field has been on adult patterns. In contrast, few studies have explicitly considered the activity patterns of children, and the interactions of children"s patterns with those of adults" patterns, when children are present in the household. If the activity participation of children with adults is primarily driven by the activity participation needs/responsibilities of adults (such as a parent wanting to go to the gym, and tagging along her/his child for the trip), then the emphasis on adults" activity-travel patterns would be appropriate. However, in many instances, it is the children"s activity participations, and the dependency of children on adults for facilitating the participations that lead to interactions between adults" and children"s activity-travel patterns. Of course, in addition, children can also impact adults" activity-travel patterns in the form of joint activity participation in such activities as shopping, going to the park, walking together, and other social-recreational activities. The joint generation of physical activity episodes in the current paper is consistent with such an emphasis on both adults" and children"s activity-travel patterns within a household. 
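To make the family-level clustering idea above concrete, the following small simulation (not the report's estimation code) draws latent physical-activity propensities for members of the same family from a Clayton copula, which produces exactly the kind of asymmetric, lower-tail dependence described in the abstract, and then cuts them into an ordered "number of daily bouts" outcome. The copula parameter, thresholds and family size are illustrative assumptions, not estimates from the BATS data.

```python
import numpy as np
from scipy.stats import norm

# Hedged illustration: family-clustered ordered outcomes. Latent propensities of members
# of the same family are tied together with a Clayton copula (stronger clustering at the
# low end of the propensity spectrum), then cut into an ordered count of activity bouts.
rng = np.random.default_rng(7)
theta, family_size, n_families = 2.0, 3, 5000
thresholds = np.array([-0.5, 0.5, 1.5])                    # cut points for 0, 1, 2, 3+ bouts

# Sample the Clayton copula via the Marshall-Olkin (gamma frailty) construction
gamma_frailty = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n_families, 1))
exp_draws = rng.exponential(size=(n_families, family_size))
u = (1.0 + exp_draws / gamma_frailty) ** (-1.0 / theta)    # uniforms with Clayton dependence

latent = norm.ppf(u)                                       # standard-normal latent propensities
bouts = np.digitize(latent, thresholds)                    # ordered outcome: 0..3 bouts

all_zero = np.mean((bouts == 0).all(axis=1))
all_high = np.mean((bouts == 3).all(axis=1))
p0, p3 = norm.cdf(thresholds[0]), 1 - norm.cdf(thresholds[-1])
print(f"families where every member reports 0 bouts:  {all_zero:.3f}")
print(f"families where every member reports 3+ bouts: {all_high:.3f}")
print(f"under independence these shares would be {p0**family_size:.3f} and {p3**family_size:.3f}")
```

Relative to the independence benchmark, the share of uniformly inactive families is amplified much more strongly than the share of uniformly very active families, mirroring the asymmetric dependence pattern the copula framework is designed to detect.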
Overview of Earlier Studies on Physical Activity Participation The body of work in the area of understanding the determinants of physical activity participation has been burgeoning in the past decade or so in many different disciplines, including child development, preventive medicine, sports medicine, public health, physical activity, and transportation. The intent here is not to provide an exhaustive review of these past studies (some good recent reviews of these works are Wendel-Vos et al. 2005;Allender et al. 2006;Gustafson and Rhodes 2006;Ferreira et al. 2007). However, one may make two general observations from past analytic studies. First, almost all of these analytic studies focus on individual physical activity without recognition that individuals are part of families and that there are potentially strong family interactions in physical activity levels. In this regard, the studies focus on either adults only or children/adolescents only. That is, they have adopted either an "adult-centric" approach focusing on adult physical activity patterns, and used children"s demographic variables (such as presence/number of children in the household) as determinant variables, or a "child-centric" approach focusing on children"s physical activity patterns, and used adults" (parents") demographic, attitudinal, and physical activity variables (such as number of adults in the household, support for children"s physical activity, and adults" physical activity levels) as determinant variables (see Sener and Bhat 2007 for more details on these approaches; examples of adult-centric studies include Collins et al. 2007; Srinivasan and Bhat 2008;Dunton et al. 2008, while examples of child-centric studies include Davison et 4 al. 2003;Trost et al. 2003;Cleland et al. 2005;Sener et al. 2008;Ornelas et al. 2007). 2 While these earlier studies provide important information on the determinants of adults" or children"s physical activity levels, they do not explicitly recognize the role of the family as a fundamental social unit for the development of overall physical activity orientations and lifestyles. This is particularly important considering parental influence on, and involvement in, children"s physical activities, as well as children"s physical activity needs/desires that may influence parents" (among other household members) physical activity patterns. Since these effects are likely to be reinforcing (either toward high physical activity levels or low physical activity levels), the appropriate way to consider these family interactions would be to model the physical activity levels of all family members jointly as a package, considering observed and unobserved covariate effects. 3 2 The works of Trost et al. (2003) and Davison et al. (2003) are particularly valuable, since they examine different mechanisms through which parents may influence their children"s physical activity pursuits. As identified by Trost et al. (2003), these may include genetics, direct modeling (i.e., parents" own physical activity involvement effects on children"s physical activity levels), provision of time and money resources to support children"s activities, rewarding desirable behaviors and punishing/ignoring undesirable behaviors, parents" own attitudes and beliefs about the importance of physical activity, and adopting authoritative parenting procedures to encourage children"s physical activity. While most studies in the literature adopt the direct modeling hypothesis, Trost et al. 
(2003) suggest that support-related and parenting beliefs/attitudes are perhaps more important predictors of children"s physical activity levels than direct modeling. Davison et al. (2003) indicate that both direct modeling and parental support/parenting practices influence children"s (girls") physical activity levels. 3 Note that the clustering effects in physical activity levels among individuals in a family may be due to parental influences and support (or lack of support) for physical activities of children, as discussed earlier. Since parental attitudes and beliefs are likely to impact parental influence, and attitudes/beliefs as well as support mechanisms may be unobserved to the analyst, this could generate dependence in unobserved factors affecting the physical activity levels within a family. However, there are other possible reasons for such family-level clustering. For instance, the quality of physical activity recreation facilities accessible to a family from its residence may be relatively poor, and if this lack of "quality" is difficult to measure/observe, it can be an unobserved deterrent to the physical activity participation of all individuals in a family. Also, it is not uncommon for families to undertake joint recreational activities, and some families may be more "activity-cohesive" in undertaking recreational pursuits. Such family cohesion effects, when complemented with an overall activity lifestyle orientation, have been shown in earlier qualitative psycho-social and family interaction studies to be positive determinants of the physical activity pre-dispositions of members in a family (see, for example, Ornelas et al. 2007;Springer et al. 2006;Strauss et al. 2001;Allender et al. 2006). If such qualitative indicators of family interaction are unavailable to an analyst, as in the current study, these indicators effectively serve as unobserved facilitators to the physical activity participation of all members of a family. Related to family cohesion, but also a potentially different mechanism for clustering, is family communication intensity. In families with high communication intensity, it is possible that the children affect adults through their acquired (from outside the home) interest or uninterest in physical activities (rather than a one-way impact of parental attitudes on the physical activity levels of all members of the household). This can be another source of clustering effects (see Allender et al. 2006). Overall, the clustering effects can be due to correlated constraints faced by family members (such as residential-location related factors), or correlated lifestyle preferences (such as family cohesion activities) or belief/attitude spillover effects ("rubbing off" of beliefs/attitudes among individuals in a household, moderated by family communication levels), or combinations of these. 5 The second general observation from earlier studies is that they have proposed three broad groups of determinants of individual physical activity within an ecological framework: individual or intrapersonal factors, physical environment factors, and social environment or interpersonal factors (e.g., Sallis and Owen 2002;Giles-Corti and Donovan 2002;Gordon-Larsen et al. 2005;U.S. Government Accountability Office 2006;Kelly et al. 2006;Salmon et al. 2007;Bhat and Sener 2009). 
The category of individual factors includes demographics (such as age, education levels, and gender), and work-related characteristics (employment status, hours of week, work schedule, work flexibility, etc.). The category of physical environment factors includes weather, season of year, transportation system attributes (level of service offered by various alternative modes for participation in out-of-home activities), and built environment characteristics (BECs). The final category of social environment factors includes family-level demographics (presence and age distribution of children in the household, household structure, and household income), residential neighborhood demographics, social and cultural mores, attitudes related to, and in support of, physical activity pursuits, and perceived friendliness of one"s residential neighborhood. Of these three groups of factors, public health researchers have focused more on the first and third categories of factors (i.e., the individual and social environment factors), particularly as they correlate to participation in such recreational physical activity as sports, walking/biking for leisure, working out at the gym, and unstructured play (see, for instance, Kelly et al. 2006;Salmon et al. 2007;Dunton et al. 2008). On the other hand, transportation and urban planning researchers have particularly focused their attention on the first and second category of factors (with limited consideration of the third category in the form of family-level demographics) as they relate to non-motorized mode use for utilitarian activity purposes (i.e. nonmotorized forms of travel to participate in an out-of-home activity episode at a specific destination, such as walking/biking to school or to work or to shop; see, for instance, Dill and Carr 2003;Cervero and Duncan 2003;Sener et al. 2009). There have been few studies that consider elements of all three groups of physical activity determinants, and that consider recreational physical activities and non-motorized travel for utilitarian purposes (but see Hoehner et al. 2005;Copperman and Bhat 2007a for a couple of exceptions). The Current Paper in Context and Paper Structure In this paper, we contribute to the earlier literature by focusing on the family as a "cluster unit" when modeling the physical activity levels of individuals. In this regard, and because earlier physical activity studies have focused only on adults or only on children, our emphasis is on analyzing physical activity levels of families with one or more parents and children in the household. That is, we examine the determinants of physical activity in the context of family households with children. In doing so, we explicitly accommodate family-level observed and unobserved effects that may influence the physical activity levels of each (and all) individual(s) in the family. Further, we consider variables belonging to all the three groups of individual factors, physical environment factors, and social environment factors. In particular, we incorporate a rich set of neighborhood physical environment variables such as land use structure and mix, population size and density, accessibility measures, demographic and housing measures, safety from crime, and highway and non-motorized mode network measures. However, in the context of social factors, we do not explicitly accommodate physical activity attitudes/beliefs and support systems of individual 6 family members as they influence the physical activity levels of others in the family. 
This is because our data source does not collect such information, though it is well suited to examine the influence of several other potential determinants. Future studies would benefit from including family-level attitudinal/support variables, while also adopting a family-level perspective of physical activity. The measure of physical activity we adopt in the current study is the number of out-of-home bouts or episodes (regardless of whether these bouts correspond to recreation or to walking/biking for utilitarian purposes) on a weekend day as reported in an activity survey. 4 Activity surveys typically collect information on all types of (out-of-home) episodes of all individuals in sampled households over the course of 1 or 2 days. As indicated by Dunton et al. (2008), the use of a short-term (1-2 days) selfreport reduces memory-related errors compared to other long-term methods of data collection used in the physical activity literature (such as self-reports over a week or a month). Further, survey data allow the consideration of the social context (family characteristics and physical activity levels of family members), while methods that examine the level of use of physical activity environments (such as a park or a playground) do not provide information to consider the social context in any depth. Also, for our family-level modeling of physical activity, survey data provide information on physical activity participation for all members of a family. 5 Finally, the activity survey data used here provide information on residential location, which is used to develop measures of the physical environment variables in the family"s neighborhood. Of course, a limitation of activity survey-based data is that some episodes of physical activity, such as free play, in-home physical activity, and incidental physical activity may not be identified well. Further, activity surveys do not provide a measure of the physical activity intensity level. Thus, there are strengths and limitations of using survey data, but such data are ideally suited for family-level cluster analysis of the type undertaken in the current effort. From a methodological standpoint, the daily number of physical activity episodes of each individual is represented using an ordered response structure, which is appropriate for situations where the dependent variable is ordinal (that is, the dependent variable values have a natural ordering; see Section 2.1 for a description of the ordered-response structure). The jointness between the episodes of different members of the same family is generated by common household demographic and location variables, as well as through dependency among the stochastic error terms of the random latent variables assumed to be underlying the observed discrete number of 4 The analysis focuses on weekend days because of the high prevalence and duration of participation in physical activities over the weekend days relative to weekdays (see Lockwood et al. 2005), as well as because there is much more joint activity participation within a family (and therefore interactions within a family cluster) on weekend days relative to weekdays (see Srinivasan and Bhat 2008;Copperman and Bhat 2007a). Children, in particular, participate in discretionary activities at much higher levels, and for substantially longer durations, on weekend days compared to weekdays (Stefan and Hunt 2006). 
5 As we discuss later, the characterization of an activity episode as a physically active one or not is based on the activity type and the type of location (such as bowling alley, gymnasium, shopping mall, etc.). Thus, an episode involving recreation activity at a soccer stadium is designated as a physical activity episode. For travel episodes, the episode is designated as physically active if it involves walking or bicycling. 7 physical activity episodes. 6 In the current paper, we allow non-linear and asymmetric error dependencies using a copula structure, which is essentially a multivariate functional form for the joint distribution of random variables derived purely from prespecified parametric marginal distributions of each random variable. To our knowledge, this is the first formulation and application in the econometric literature of the copula approach for the case of a clustered ordered response model structure. The rest of this paper is structured as follows. The next section discusses and presents the copula-based clustered ordered-response model structure. Section 3 describes the survey-based data source and sample formation procedures for the empirical analysis. Section 4 discusses the empirical results, and presents the results of a policy-based simulation. Finally, Section 5 summarizes important findings from the study, and concludes the paper. Background This paper uses an ordered-response model for analyzing the number of physical activity episodes for each individual. The assumption in this model is that there is an underlying continuous latent variable representing the propensity to participate in physical activity whose partitioning into discrete intervals, based on thresholds on the continuous latent variable scale, maps into the observed set of count outcomes. While the traditional ordered-response model was initially developed for the case of ordinal responses, and while count outcomes are cardinal, this distinction is really irrelevant for the use of the ordered-response system for count outcomes. This is particularly the case when the count outcome takes few discrete values, as in the current empirical case, but is also not much of an issue when the count outcome takes a large number of possible values (see Herriges et al. 2008;Ferdous et al. 2010 for detailed discussions). An important issue, though, is that we have to recognize the potential dependence in the number of physical activity episodes of different members of the same family due to both observed exogenous variables as well as unobserved factors. If there is no dependence based on unobserved factors, one can accommodate the dependence due to observed factors by estimating independent ordered-response models for each individual in the family after including common exogenous variables. But the dependence due to unobserved family-related factors (such as family life-style and health consciousness, and residential location-related factors) can be accommodated only by jointly modeling the number of episodes of all family members together. This is the classic case of clusters of dependent random variables that has widely been studied and modeled in the transportation and other fields (see Bhat 2000;Bottai et al. 2006;Czado and Prokopenko 2008). In our case, the clusters correspond to family units, although the methodology we present in the current paper can be used for any situation involving clusters. An established method to deal with unobserved interactions due to cluster effects is a random effects model. 
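Before turning to the specifics of the clustered model, a small simulation sketch may help fix the two building blocks used in the remainder of this section: the threshold mapping from a latent propensity to an observed episode count, and a common family-level error term of the kind used in the random effects device just mentioned. This is a minimal illustration, assuming NumPy; the propensities, thresholds, and family standard deviation are made-up values, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_family_counts(xb, thresholds, family_sd=0.8):
    """
    Illustrative latent-propensity simulation for one family:
    y*_i = beta'x_i + u (shared family effect) + e_i,
    mapped to an episode count by the ordered thresholds.
    """
    u = rng.normal(0.0, family_sd)                  # common unobserved family effect
    y_star = np.asarray(xb) + u + rng.normal(size=len(xb))
    return np.searchsorted(thresholds, y_star)      # count = number of thresholds below y*

# Hypothetical two-parent, one-child family with made-up systematic propensities
# and thresholds separating 0, 1, 2, and 3+ weekend physical activity episodes.
print(simulate_family_counts(xb=[0.2, -0.1, 0.5], thresholds=[0.0, 1.0, 2.0]))
```

Because the shared term enters every member's propensity, the simulated counts within a family move together; the discussion that follows is about how restrictive (or flexible) the assumed form of that shared dependence is.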
In the ordered-response context, this entails adding a common cluster-based normal error term to the latent underlying propensities for each 8 individual in the cluster (see Bhat and Zhao 2002 for a detailed explanation of the mathematical formulation as well as an empirical example of this method). The main limitation of the random effects model is the restrictive assumption introduced in the dependence structure through the random normal error term. Thus, for instance, in the random effects ordered-response probit model, the joint distribution of error terms is considered multivariate normal, which assumes that the dependence (due to unobserved factors) among the physical activity propensities of family members is radially symmetric. On the other hand, it may be the case that the dependence among the propensities of family members is actually asymmetric; for instance, one may observe family members having a simultaneously low propensity for physical activity participation, but not necessarily family members having a simultaneously high propensity for physical activity participation. That is, unobserved factors that decrease physical activity propensity may "rub off" more among individuals in a family than unobserved factors that increase physical activity propensity. Alternatively, one may have the reverse asymmetry too where family members have a simultaneously high propensity for physical activity propensity, but not a simultaneously low propensity for physical activity propensity. In the current paper, rather than using the random effects approach, we use a copula approach to accommodate the dependence in physical activity propensity among family members. A copula is a device or function that generates a stochastic dependence relationship (i.e., a multivariate distribution) among random variables with pre-specified marginal distributions (see Trivedi and Zimmer 2007;Nelsen 2006). The use of a copula to generate a joint distribution of a cluster outcome is convenient and flexible for a number of reasons. First, the approach allows testing of a variety of parametric marginal distributions for individual members in a cluster and preserves these marginal distributions when developing the joint probability distribution of the cluster. Second, the copula approach separates the marginal distributions from the dependence structure, so that the dependence structure is entirely unaffected by the marginal distributions assumed. Thus, rank measures of the intra-cluster dependence of the underlying physical activity propensities for members of a family are independent of the marginal distributions used, facilitating a clear interpretation of the dependence structure regardless of the marginal distribution assumed. Third, the clustering context, wherein the level of dependence in the marginal random unobserved terms within a cluster is identical (i.e., exchangeable) across any (and all) pairs of individuals in the cluster, is ideal for the application of a group of copulas referred to as the Archimedean copulas. The Archimedean copulas are closed-form copulas that can be used to obtain the joint multivariate cumulative distribution function of any number of individuals belonging to a cluster. Further, these copulas retain the same form regardless of cluster size, and so it is straightforward to accommodate clusters of varying sizes. 
7 Fourth, the Archimedean 7 Technically speaking, one may use a copula approach to allow differential dependence levels among marginal random unobserved terms within a cluster. For instance, it may be argued that the "rubbing off" effects due to unobserved factors (in the context of physical activity participation) are higher between two children in a family than between two adults in a family, or between two adults in a family than between an adult and a child. While such differential dependency patterns within a cluster can be accommodated with specific copula forms (see Bhat and Sener 2009;Bhat et al. 2010), they are, in general, quite difficult to accommodate and estimate using maximum likelihood methods. Alternatively, one can estimate models with differential dependency patterns within a cluster using pairwise copulas (i.e., a bivariate copula for each pair of individuals in a family), but such an approach may not have an equivalent 9 group of copulas allows testing a variety of radially symmetric and asymmetric joint distributions, as well as testing the assumption of within-cluster independence. Fifth, it is simple to allow the level of dependence within a cluster to vary based on cluster type. For example, the dependence among family members in their latent propensities of physical activity may vary by such family characteristics as family type or income. Finally, the closed-form nature of the model structure resulting from using the Archimedean group of copulas lends itself very nicely to the implementation of a computationally straightforward maximum likelihood procedure for parameter estimation. Copula Basics The word "copula" was coined by Sklar (1959), and is derived from the Latin word "copulare", which means to tie, bond, or connect (see Schmidt 2007). A copula is a device or function that generates a stochastic dependence relationship (i.e., a multivariate distribution) among random variables with pre-specified marginal distributions (see Nelsen, 2006;Trivedi and Zimmer 2007;Bhat and Eluru 2009). The precise definition of a copula is that it is a multivariate distribution function defined over the unit cube linking uniformly distributed marginals. Let C be an I-dimensional copula of uniformly distributed random variables U 1 , U 2 , U 3 , …, U I with support contained in [0,1] I . Then, where  is a parameter vector of the copula commonly referred to as the dependence parameter vector. A copula, once developed, allows the generation of joint multivariate distribution functions with given marginals. Consider I random variables , , , The above equation offers a vehicle to develop different dependency patterns for the random variables  based on the copula that is used as the underlying basis of construction. In the current paper, we use a class of copulas referred to as the Archimedean copulas to generate the dependency between the random variables. The multivariate distribution interpretation. The approach we propose and use here is particularly appropriate for cluster-specific effects, where there is an equal level of unobserved dependence between all pairs of entities in a cluster. Such uniform cluster-specific effects are assumed also in the traditional random effects approach discussed earlier. 8 Note that the univariate marginal distribution functions of the random variables can be different, though we use the more restrictive notation here that the univariate distributions are the same. 
This is the norm when developing econometric models where the random terms represent individual-level idiosyncratic effects. The next section briefly discusses the Archimedean class of copulas and presents some specific copulas within this broad family.

Archimedean Copulas

The Archimedean class of copulas is popular in empirical applications, and includes a whole suite of closed-form copulas that cover a wide range of dependency formulations (see Nelsen 2006; Bhat and Eluru 2009 for a detailed discussion). The class is very flexible and easy to construct, as discussed next. Archimedean copulas are constructed from an underlying continuous, convex, and strictly decreasing generator function $\varphi(\cdot)$ mapping $[0,1]$ into $[0,\infty]$ with $\varphi(1)=0$. Further, in the discussion here, we will assume that the generator is strict, so that $\varphi(0)=\infty$ and the inverse $\varphi^{-1}(\cdot)$ is well defined. With these preliminaries, we can generate multivariate $I$-dimensional Archimedean copulas as:

$$C_\theta(u_1, u_2, \ldots, u_I) = \varphi^{-1}\big(\varphi(u_1) + \varphi(u_2) + \cdots + \varphi(u_I)\big),$$

where the dependence parameter $\theta$ is embedded within the generator function. An important characteristic of any multivariate Archimedean copula with the scalar dependence parameter $\theta$ is that the marginal pairwise distribution between any two random variables (from $U_1, U_2, U_3, \ldots, U_I$) is bivariate Archimedean with the same copula structure as the multivariate copula. A whole variety of Archimedean copulas have been identified based on different forms of the generator function $\varphi(\cdot)$. In this paper, we will consider four of the most popular Archimedean copulas that span the spectrum of different kinds of dependency structures. These are the Clayton, Gumbel, Frank, and Joe copulas (see Bhat and Eluru 2009 for graphical descriptions of the implied dependency structures). All these copulas, in their multivariate forms, allow only positive associations and equal dependencies among pairs of random variables, which is well suited for cluster analysis because we expect positive and equal dependencies among elements within a cluster.

The Clayton copula (Clayton 1978) has the generator function $\varphi(t) = \theta^{-1}(t^{-\theta}-1)$ with $\theta > 0$, giving rise to the following $I$-dimensional copula function (see Huard et al. 2006):

$$C_\theta(u_1, \ldots, u_I) = \Big(\sum_{i=1}^{I} u_i^{-\theta} - I + 1\Big)^{-1/\theta}.$$

Independence is attained as $\theta \rightarrow 0$. The copula is best suited for strong left tail dependence and weak right tail dependence. That is, it is best suited when individuals in a family show strong tendencies to have low physical activity levels together but not high activity levels together.

The Gumbel copula, first discussed by Gumbel (1960) and sometimes also referred to as the Gumbel-Hougaard copula, has a generator function given by $\varphi(t) = (-\ln t)^{\theta}$ with $\theta \geq 1$. The form of the $I$-dimensional copula is provided below:

$$C_\theta(u_1, \ldots, u_I) = \exp\Big\{-\Big[\sum_{i=1}^{I}(-\ln u_i)^{\theta}\Big]^{1/\theta}\Big\}.$$

Independence corresponds to $\theta = 1$. This copula is well suited for the case when there is strong right tail dependence (strong correlation at high values) but weak left tail dependence (weak correlation at low values). Thus, this copula would be applicable when individuals in a family show strong tendencies to have high physical activity levels together but not low activity levels together.

The Frank copula, proposed by Frank (1979), is radially symmetric in its dependence structure like the Gaussian (normal) copula. The generator function is $\varphi(t) = -\ln\big[(e^{-\theta t}-1)/(e^{-\theta}-1)\big]$, and the corresponding copula function is given by:

$$C_\theta(u_1, \ldots, u_I) = -\frac{1}{\theta}\ln\Big\{1 + \frac{\prod_{i=1}^{I}\big(e^{-\theta u_i}-1\big)}{\big(e^{-\theta}-1\big)^{I-1}}\Big\}.$$

Independence is attained in Frank's copula as $\theta \rightarrow 0$. This copula is suitable for equal levels of dependency in the left and right tails; that is, when individuals either show low physical activity levels together or high activity levels together.
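To make these closed forms concrete, the following minimal sketch (assuming NumPy) evaluates the $I$-dimensional Clayton, Gumbel, and Frank copula functions given above; the Joe copula discussed next follows the same generator-inverse construction. The evaluation point and dependence parameters are illustrative values only.

```python
import numpy as np

def clayton_cdf(u, theta):
    """I-dimensional Clayton copula C(u_1,...,u_I); theta > 0."""
    u = np.asarray(u, dtype=float)
    return (np.sum(u ** (-theta)) - len(u) + 1.0) ** (-1.0 / theta)

def gumbel_cdf(u, theta):
    """I-dimensional Gumbel copula; theta >= 1 (theta = 1 gives independence)."""
    u = np.asarray(u, dtype=float)
    return np.exp(-(np.sum((-np.log(u)) ** theta)) ** (1.0 / theta))

def frank_cdf(u, theta):
    """I-dimensional Frank copula; theta > 0 for I >= 3 (independence as theta -> 0)."""
    u = np.asarray(u, dtype=float)
    num = np.prod(np.expm1(-theta * u))
    den = np.expm1(-theta) ** (len(u) - 1)
    return -np.log1p(num / den) / theta

# Example: joint probability that all three members of a family have marginal
# propensities below their 30th percentiles, under each copula.
u = [0.3, 0.3, 0.3]
print(clayton_cdf(u, theta=2.0), gumbel_cdf(u, theta=2.0), frank_cdf(u, theta=5.0))
```

Each value lies between the independence benchmark (0.3³ = 0.027) and the perfect-dependence bound (0.3), which is one simple way to see how the dependence parameter tightens the clustering of low outcomes within a family.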
The Joe copula, introduced by Joe (1993, 1997), has the generator function $\varphi(t) = -\ln\big[1-(1-t)^{\theta}\big]$ with $\theta \geq 1$, and takes the following copula form:

$$C_\theta(u_1, \ldots, u_I) = 1 - \Big[1 - \prod_{i=1}^{I}\big(1-(1-u_i)^{\theta}\big)\Big]^{1/\theta}.$$

The Joe copula is similar to the Gumbel copula, but the right tail positive dependence is stronger. Independence corresponds to $\theta = 1$.

Model Formulation

Let q be an index for clusters (family units in the current empirical context) (q = 1, 2, …, Q), and let i be the index for individuals (i = 1, 2, …, I_q, where I_q denotes the total number of individuals in family q, including adults and children; in the current study, I_q varies between 2 and 5). Also, let k be an index for the discrete outcomes corresponding to the number of weekend day physical activity episodes (k = 0, 1, 2, 3, …, K). In the usual ordered-response framework notation, we write the latent propensity ($y^*_{qi}$) of individual i in family q to participate in physical activity as a function of relevant covariates, and then relate this latent propensity to the count outcome ($y_{qi}$) representing the number of weekend physical activity episodes of individual i in family q through threshold bounds (see McKelvey and Zavoina 1975):

$$y^*_{qi} = \beta' x_{qi} + \varepsilon_{qi}, \qquad y_{qi} = k \ \text{ if } \ \psi_k < y^*_{qi} \leq \psi_{k+1},$$

where $x_{qi}$ is an (L×1) vector of exogenous variables for individual i in family q (not including a constant), $\beta$ is a corresponding (L×1) vector of coefficients to be estimated, and $\psi_k$ is the lower bound threshold for count level k (with $\psi_0 = -\infty$ and $\psi_{K+1} = +\infty$). The error terms can take any parametric marginal distribution, though we confine ourselves to the normal and logistic distributions in the current paper. Due to identification considerations in the ordered-response model, we standardize the univariate distribution functions, so that they are standard normal or standard logistic distributed. However, we allow dependence in the $\varepsilon_{qi}$ terms across individuals i in the same family unit q to allow unobserved cluster effects. This dependency is generated through the use of an Archimedean copula based on Equation (2), where the only difference now is the introduction of the index q to reflect that the dependence is confined to members of the same family:

$$\Pr\big(\varepsilon_{q1} < z_1, \varepsilon_{q2} < z_2, \ldots, \varepsilon_{qI_q} < z_{I_q}\big) = C_{\theta_q}\big(F(z_1), F(z_2), \ldots, F(z_{I_q})\big),$$

where $F(\cdot)$ is the standardized marginal distribution function of the error terms. It is important to note above that the level of dependence among individuals of a family can vary across families, as reflected by the $\theta_q$ notation for the dependence parameter. As we indicate later, we parameterize this dependence parameter as a function of observed family characteristics in estimation, which allows us to accommodate different levels of dependency among individuals of different types of families. Technically, one can also use different copula forms (i.e., dependency surfaces) for different families, but, in the current paper, we will maintain the same copula form across all families to keep the estimation tractable (however, note that we test for different copula forms, even if we maintain the same copula form across all families).

Two notes on the formulation are in order. First, in the empirical analysis, we allow different thresholds for children and adults. From a strict notation standpoint, this implies that the thresholds should be subscripted as $\psi_{ki}$. However, for notational ease, we suppress the subscript i when writing the thresholds. Second, the use of the notation $\theta_q$ assumes that the dependency due to unobserved factors is confined to (and identical across) members within a family. In reality, it is possible that the dependency extends beyond members of the same family to members of families within a certain spatial neighborhood and/or within a certain defined social network. Accommodating such generalized multi-level unobserved effects is difficult with Archimedean copulas, but may be achieved using the Gaussian copula combined with a composite marginal likelihood inference approach (see Ferdous et al. 2010; and Spissu et al. 2010). Bhat (2009) has also recently proposed a generalized Gumbel copula within the class of Archimedean copulas that may be of use in this regard.

Model Estimation

Let $m_{qi}$ be the actual observed categorical response for $y_{qi}$ in the sample. Then, the probability of the observed vector of number of episodes across individuals in household q, $(m_{q1}, m_{q2}, \ldots, m_{qI_q})$, may be written as:

$$\Pr\big(y_{q1}=m_{q1}, \ldots, y_{qI_q}=m_{qI_q}\big) = \int_{M_q} f\big(y^*_{q1}, \ldots, y^*_{qI_q}\big)\, dy^*_{q1}\cdots dy^*_{qI_q}, \qquad (10)$$

where $f(\cdot)$ is the joint density of the latent propensities implied by the copula (the corresponding copula density is $c_{\theta_q}$), and the integration domain $M_q$ is simply the multivariate region of the $y^*_{qi}$ variables (i = 1, 2, …, I_q) determined by the observed outcomes; that is, $M_q = \{(y^*_{q1}, \ldots, y^*_{qI_q}): \psi_{m_{qi}} < y^*_{qi} \leq \psi_{m_{qi}+1} \ \text{for all } i\}$. The dimensionality of the integration, in general, is equal to the number of individuals I_q in the family. Thus, if one uses a Gaussian copula, one ends up with integrals of the order of the number of individuals in the family for the joint probability of the observed combination of the number of activity episodes across individuals in the family. This will need simulation techniques when I_q is greater than three. However, in the case of a family-level cluster with identical dependencies between pairs of individuals in the family, one can gainfully employ the Archimedean copulas since they provide closed-form multivariate cumulative distribution functions. In particular, the probability in Equation (10) may be written in closed form as:

$$\Pr\big(y_{q1}=m_{q1}, \ldots, y_{qI_q}=m_{qI_q}\big) = \sum_{r_1=0}^{1}\cdots\sum_{r_{I_q}=0}^{1} (-1)^{(r_1+\cdots+r_{I_q})}\; C_{\theta_q}\big(u_{q1r_1}, u_{q2r_2}, \ldots, u_{qI_q r_{I_q}}\big), \qquad (11)$$

where $C_{\theta_q}$ is one of the four Archimedean copulas discussed in Section 2.3 with an association parameter $\theta_q$, $u_{qi0} = F(\psi_{m_{qi}+1} - \beta' x_{qi})$, and $u_{qi1} = F(\psi_{m_{qi}} - \beta' x_{qi})$. The number of cumulative distribution function computations increases rapidly with the number of individuals I_q in family q, but this is not much of a problem when the cluster sizes are six or less because of the closed-form structures of the cumulative distribution functions. In the current empirical context, I_q ≤ 5. However, in other empirical contexts when there are several individuals in a cluster, one can resort to the use of a composite marginal likelihood approach (see, for instance, the study by Bhat et al. 2010 that employs a combined copula-CML approach to accommodate spatial dependence across observational units). The association parameter $\theta_q$ is allowed to vary across families as a function of the vector $s_q$ of observed family characteristics. The likelihood function for household q may be constructed based on the probability expression in Equation (11), with the vector $\delta$ denoting the vector of threshold bounds, $\delta = (\psi_1, \psi_2, \ldots, \psi_K)'$. The likelihood function is then given by

$$L = \prod_{q=1}^{Q} \Pr\big(y_{q1}=m_{q1}, \ldots, y_{qI_q}=m_{qI_q}\big).$$

The likelihood function above is maximized using a conventional maximum likelihood approach. All estimations and computations were carried out using the GAUSS programming language. Gradients of the log-likelihood function with respect to the parameters were coded.
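To illustrate the estimation mechanics, the sketch below evaluates Equation (11) for a single family under a Clayton copula with standard logistic margins (the combination that turns out to fit best in the empirical analysis). It is a minimal sketch assuming NumPy/SciPy; the thresholds, systematic propensities, and dependence parameter are made-up illustrative values, not estimates from the paper. Summing the logarithms of such probabilities over all families gives the log-likelihood that is maximized.

```python
import itertools
import numpy as np
from scipy.stats import logistic

def clayton_cdf(u, theta):
    """I-dimensional Clayton copula CDF (theta > 0)."""
    u = np.asarray(u, dtype=float)
    return (np.sum(u ** (-theta)) - len(u) + 1.0) ** (-1.0 / theta)

def household_probability(m, xb, psi, theta):
    """
    Joint probability of observed counts m = (m_1,...,m_I) for one family,
    following the inclusion-exclusion form of Equation (11).
    m     : observed episode counts for the family members
    xb    : systematic propensity beta'x for each member
    psi   : threshold vector (psi_0,...,psi_{K+1}) with -inf/+inf endpoints
    theta : Clayton dependence parameter for this family
    """
    m = np.asarray(m)
    upper = logistic.cdf(psi[m + 1] - xb)   # u_{qi0} = F(psi_{m+1} - beta'x)
    lower = logistic.cdf(psi[m] - xb)       # u_{qi1} = F(psi_{m}   - beta'x)
    prob = 0.0
    for r in itertools.product((0, 1), repeat=len(m)):
        u = np.where(np.array(r) == 0, upper, lower)
        u = np.clip(u, 1e-12, 1.0)          # guard the u^(-theta) term at the -inf bound
        prob += (-1) ** sum(r) * clayton_cdf(u, theta)
    return prob

# Hypothetical three-person family observed with 0, 1, and 2 episodes,
# illustrative thresholds for count levels 0 through 3.
psi = np.array([-np.inf, 0.0, 1.0, 2.0, np.inf])
p = household_probability(m=[0, 1, 2], xb=np.array([-0.2, 0.4, 0.9]), psi=psi, theta=1.5)
print(p)  # this family's contribution to the likelihood
```

The 2^I terms in the loop are exactly the cumulative distribution function evaluations referred to above, which is why the closed-form Archimedean structure keeps the computation trivial for families of five or fewer members.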
The Primary Data Source

The primary source of data is the 2000 San Francisco Bay Area Travel Survey (BATS), which was designed and administered by MORPACE International, Inc. for the Bay Area Metropolitan Transportation Commission (see MORPACE International Inc. 2002). The survey collected detailed information on individual and household socio-demographic and employment-related characteristics from about 15,000 households in the Bay Area. The survey also collected information on all activity and travel episodes undertaken by individuals of the sampled households over a two-day period. For a subset of the sampled households, the two-day survey period included a Friday and a Saturday, or a Sunday and a Monday (however, no household was surveyed on both a Saturday and a Sunday).
The current analysis uses the surveyed weekend day (either Saturday or Sunday) of these households. The information collected on activity episodes included the type of activity (based on a 17-category classification system), the name of the activity participation location (for example, Jewish community center, Riverpark plaza, etc.), the type of participation location (such as religious place, or shopping mall), start and end times of activity participation, and the geographic location of activity participation. As discussed earlier, we identified whether an activity episode is physically active or not based on the activity type and the type of participation location at which the episode is pursued, as reported in the survey. 11 Thus, an episode designated as "recreation" activity by a respondent and pursued at a health club (such as working out at the gym) is labeled as physically active. Similarly, an episode designated as "recreation" activity by a respondent and pursued outdoors (such as walking/running/bicycling around the neighborhood "without any specific destination") is labeled as being physically active. 12 For the current analysis, we consider only out-of-home activity episodes. In addition, travel episodes to any out-ofhome location using non-motorized forms of travel (bicycling and/or walking) are characterized as physical activity episodes. In this regard, each non-motorized travel episode ending at an activity location was characterized as a physical activity episode. For instance, if an individual goes to a grocery shopping center by bike and then returns back home, the individual is considered to have participated in two physical activity episodes. After categorizing out-of-home episodes into physically active or otherwise, the number of physically active episodes during the weekend day for each individual in each family is obtained by appropriate aggregation. This constitutes the dependent variable in our analysis. Further, while the methodology developed can be used for all types of families, we focus only on families with children in this paper to examine both adults" and children"s physical activity participations (while also accommodating family-level observed and unobserved effects). In terms of adults, we focus on parents" physical activity participations and, in terms of children, we focus on the physical activity participation of children between the ages of five to fifteen. Further, we restricted ourselves to families with three children or less as they accounted for approximately 97 percent of families with children. The Secondary Data Sources 11 A physically active episode requires regular bodily movement during the episode, while a physically passive episode involves maintaining a sedentary and stable position for the duration of the episode. For example, swimming or walking around the neighborhoods would be a physically active episode, while going to a movie is a physically passive episode. 12 A data-based limitation of the current study is that the data do not allow us to distinguish between individuals who are personally involved in the physical activity and those who are only present during the activity but not "physically" involved in the physical activity. Therefore, for instance, an episode designated as "recreation" activity by a respondent and pursued at a tennis court is labeled as physically active, regardless of whether the individual went to the tennis court to watch some other person play tennis or played tennis himself/herself. 
Note, however, that individuals who drop off/pick up others from the tennis courts will report their activity type as "pick-up/drop-off" and so this episode will not be considered as a physically active one, Also, there is some possibility that individuals who go to a tennis court and not play tennis will report their activity type as "social" or "resting/relaxing", in which case these episodes will also not be characterized as "physically active" in our taxonomy. In addition to the 2000 BATS survey data set, several other secondary data sets were used to obtain transportation system attributes and built environment characteristics (within the broad group of physical environment factors discussed in Section 1.1), as well as residential neighborhood demographics (within the broad group of social environment factors in Section 1.1). All these variables were computed at the level of the residential traffic analysis zone (TAZ) of each household. 13 The secondary data sources included land use/demographic coverage data, the 2000 Census of population and household summary files, a Geographic Information System (GIS) layer of bicycle facilities, a GIS layer of highways and local roadways, and GIS layers of businesses. Among the secondary data sets indicated above, the land use/demographic coverage data, LOS data, and the GIS layer of bicycle facilities were obtained from the Metropolitan Transportation Commission (MTC). The GIS layers of highways and local roadways were obtained from the 2000 Census Tiger Files. The GIS layers of businesses were obtained from the InfoUSA business directory. The transportation system and built environment measures constructed from the secondary data sources include: 1. Zonal land use structure variables, including housing type measures (fractions of single family, multiple family, duplex and other dwelling units), land use composition measures (fractions of zonal area in residential, commercial, and other land uses), and a land use mix diversity index computed as a fraction based on the land use composition measures with values between zero and one (zones with a value closer to one have a richer land use mix than zones with a value closer to zero; see Bhat and Guo 2007 for a detailed explanation on the formulation of this index). 2. Regional accessibility measures, which include Hansen-type (Fotheringham 1983) employment, shopping, and recreational accessibility indices that are computed separately for the drive and transit modes. 3. Zonal activity opportunity variables, characterizing the composition of zones in terms of the intensity or the density of various types of activity centers. The typology used for activity centers includes five categories: (a) maintenance centers, such as grocery stores, gas stations, food stores, car wash, automotive businesses, banks, medical facilities, (b) physically active recreation centers, such as fitness centers, sports centers, dance and yoga studios, (c) physically passive recreational centers, such as theatres, amusement centers, and arcades, (d) natural recreational centers such as parks and gardens, and (e) restaurants and eat-out places. 4. 
Zonal transportation network measures, including highway density (miles of highway facilities per square mile), local roadway density (miles of roadway density per square mile), bikeway density (miles of bikeway facilities per square mile), street block density (number of blocks per square mile), non-motorized distance between zones (i.e., the distance in miles along walk and bicycle paths between zones), and transit availability. The non-motorized distance between zones was used to develop an accessibility measure by non-motorized modes, computed as the number of zones (a proxy for activity opportunities) within "x" non-motorized mode miles of the teenager"s residence zone. Several variables with different thresholds for "x" were formulated and tested. The residential neighborhood demographics constructed from the secondary data sources include: Sample Characteristics The Variable Specification Several different variables within the three broad variable categories of individual factors, physical environment correlates, and social environment determinants were considered in our model specifications. The individual factors included demographics (age, sex, race, driver"s license holding, physical disability status, etc.) and workrelated characteristics (employment status, hours of week, work schedule, and work flexibility, etc.); the physical environment factors included weather, season of year, transportation system attributes, and built environment characteristics; and the social environment factors included family-level demographics (household composition and family structure, household income, dwelling type, whether the house is owned or rented, etc.) and residential neighborhood demographics (see Section 3.2 for details). The final model specification was based on a systematic process of eliminating variables found to be statistically insignificant, intuitive considerations, parsimony in specification, and results from earlier studies. Several different variable specifications, functional forms of variables as well as interaction variables were considered for the x qi vector (that determines exogenous variables affecting physical activity propensity) as well as for the s q vector (that captures variations in the level of dependency based on observed family characteristics). The final specification includes some variables that are not highly statistically significant, because of their intuitive effects and potential to guide future research efforts in the field. Model Specification and Data Fit The empirical analysis involved estimating models with two different univariate distribution assumptions (normal and logistic) for the random error term ε qi , and four different copula structures (Clayton, Gumbel, Frank and Joe) for specifying the dependency between the ε qi terms across individuals in each family to represent the family cluster effect. Thus, a total of eight copula-based models were estimated: (1) Normal-Clayton, (2) Normal-Gumbel, (3) Normal-Frank, (4) Normal-Joe, (5) Logistic-Clayton, (6) Logistic-Gumbel, (7) Logistic-Frank, and (8) Logistic-Joe. In addition, we also estimated two models (one with a normal marginal error term and the other with a logistic marginal error term) that assume independence in physical activity propensity among family members, as well as two models based on the more common methodological approach to accommodate clusters through a family-specific normal mixing error term. 
To allow a fair comparison between such random-effects models and the copula models, we specified the variance of the random error term in the random-effects models to vary across families based on observed family characteristics (see Bhat and Zhao 2002, and Bhat 2000 for such specifications in the past). Such a formulation accommodates heterogeneity across families in the level of association between family members, akin to parameterizing the θ_q dependence term in the copula models as a function of the vector s_q of observed family variables. To conserve on space, we will only provide the data fit results for the best copula model, the best independent model (from the logistic and the normal distributions for the ε_qi terms), and the best random-effects model (again from the logistic and normal distributions for the ε_qi terms).

Note that the maximum likelihood estimation of the models with different copulas leads to a case of non-nested models. The most widely used approach to select among competing non-nested copula models is the Bayesian Information Criterion (or BIC; see Quinn 2007; Genius and Strazzera 2008; Trivedi and Zimmer 2007, page 65). The BIC for a given copula model is equal to $-2\ln(L) + B\ln(N)$, where $\ln(L)$ is the log-likelihood value at convergence, B is the number of parameters, and N is the number of observations. The copula that results in the lowest BIC value is the preferred copula. But, if all the competing models have the same exogenous variables and the same number of thresholds, as in our empirical case, the BIC selection procedure is equivalent to selection based on the largest value of the log-likelihood function at convergence. Among the copula models, our results indicated that the Logistic-Clayton (LC) model provides the best data fit, with a log-likelihood value at convergence of -732.844. Thus, based on the BIC measure, the LC model provides the best fit. However, the BIC measure does not indicate whether the LC model is statistically significantly better than its competitors. But, since all the copula models have the same value of the log-likelihood at sample shares (that is, when only the thresholds are included in the model), the alternative copula models can be statistically tested using a non-nested likelihood ratio test. In this regard, the difference in the adjusted rho-bar squared ($\bar{\rho}^2_c$) values between the LC model and its closest competitor (which is the Logistic-Frank or LF model) is 0.0006. The probability that this difference could have occurred by chance is bounded above by $\Phi\{-[-2(\bar{\rho}^2_{c,LC} - \bar{\rho}^2_{c,LF})L(C)]^{1/2}\}$, where $L(C)$ is the log-likelihood value with only the thresholds in the model. This value, with L(C) = -3022.698, is very small, indicating that the difference in adjusted rho-bar squared values between the LC and the LF models is statistically significant and that the LC model is superior to the LF model. However, note also that, in all the copula models, the dependency parameters were highly statistically significant, with the family-level dependency in unobserved factors varying based on family structure. Specifically, the family-level dependency was different among the three family types of (1) family with both parents, (2) single father family, and (3) single mother family.

Between the two independent models, the logistic error term distribution for the margins (i.e., the ordered-response logit or ORL) provided a marginally better fit than the normal error term distribution for the margins (i.e., the ordered-response probit). The log-likelihood value at convergence for the ordered-response logit is -916.748.
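The selection machinery just described is straightforward to script. The short sketch below, assuming NumPy/SciPy, computes the BIC and the non-nested test bound $\Phi\{-[-2\,\Delta\bar{\rho}^2_c\,L(C)]^{1/2}\}$ for models with equal parameter counts; the adjusted rho-bar squared difference and the constants-only log-likelihood are the values reported above, while the parameter count and observation count passed to bic() are placeholders for illustration.

```python
import numpy as np
from scipy.stats import norm

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def nonnested_pvalue_bound(delta_adj_rho2, loglik_constants):
    """Upper bound Phi(-sqrt(-2 * delta * L(C))) for non-nested models
    that share the same number of parameters."""
    return norm.cdf(-np.sqrt(-2.0 * delta_adj_rho2 * loglik_constants))

# Illustrative BIC for the Logistic-Clayton model (parameter/observation counts are placeholders).
print(bic(loglik=-732.844, n_params=30, n_obs=517))

# Bound on the chance probability of the 0.0006 adjusted rho-bar-squared gap
# between the Logistic-Clayton and Logistic-Frank models, with L(C) = -3022.698.
print(nonnested_pvalue_bound(0.0006, -3022.698))
```

Because all the copula models here share the same covariates and thresholds, ranking them by BIC, by converged log-likelihood, or by adjusted rho-bar squared gives the same ordering; the bound only adds a statement of statistical significance to that ranking.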
Also, between the random effects ordered-response logit (RORL) and the randomeffects ordered-response probit (RORP) models, the former (i.e., the RORL model) provided a superior data fit with a convergent log-likelihood value of -738.602. In both these random-effects models, we also considered variations in the family-level correlation levels across families, and found once again that there was variation based on the same family structure grouping as in the LC model. The likelihood ratio test for testing the LC model in this paper with the ORL model is 367.81, which is substantially larger than the critical χ 2 value with three degrees of freedom (corresponding to the three dependency parameters) at any reasonable level of significance, confirming the importance of accommodating dependence in physical activity propensity among family members. The likelihood ratio test for testing the RORL model with the ORL model is 356.29, which again is larger than the critical χ 2 value with three degrees of freedom. The LC and RORL models are non-nested, and may be compared using a non-nested likelihood ratio test (both the LC and RORL models have the same exogenous variables and the same number of thresholds, while differing in the surface shape of the dependency among the error terms of different individuals in a family). Specifically, the difference in the adjusted rho-bar squared ( 2 c  ) values between the two models is 0.00191. The probability that this difference could have occurred by chance is less than This value, with L(C) = -3022.698, is almost zero, indicating that the difference in adjusted rho-bar squared values between the copula-based LC and the RORL models is highly statistically significant and that the copula model is to be preferred over the more traditional random effects model in terms of model fit. Specifically, as we discuss later, the results indicate a clear asymmetry in the dependence relationship among the physical activity propensities of individuals of the same family, an issue that cannot be handled by the random effects approach. 15 The adjusted rho-bar squared value 20 In addition to the model fit on the overall estimation sample, we also evaluated the performance of the ORL, RORL, and LC models on various market segments of the estimation sample (Ben-Akiva and Lerman 1985 refer to such predictive fit tests as market segment prediction tests). The intent of using such predictive tests is to examine the performance of different models on sub-samples that do not correspond to the overall sample used in estimation. Effectively, the sub-samples serve a similar role as an out-of-sample for validation. The advantage of using the sub-sample approach rather than an out-of-sample approach to validation is that there is no reduction in the size of the sample for estimation. This is particularly an issue in our case because we have only 517 households for estimation. If a model shows superior performance in the subsamples in addition to the overall estimation sample, it is indication that the model indeed provides a better data fit. To evaluate performance of different models within each sub-sample, we use both aggregate and disaggregate measures of fit. At the aggregate level, we compare the mean predicted and actual (observed) number of household-level number of physical activity episodes per weekend day, using the absolute percentage error (APE) for each of the subsamples. At the disaggregate level, we compute an "out-of-sample" log-likelihood function (OSLLF) approach. 
The OSLLF is computed by plugging in the sub-sample observations into the loglikelihood function, while retaining the estimated parameters from the overall estimation sample. As indicated by Norwood et al. (2001), the model with the highest value of OSLLF is the preferred one, since it is most likely to generate the set of subsample observations. The results are provided in Table 1 for segments formed based on three variables: (1) Family income (three market segments), (2) Household bicycle ownership level (six market segments), and (3) Family type (three market segment). The third column provides the mean observed number of household-level physical activity episodes, while the next main column entitled "Aggregate-level fit statistics" provides the mean predicted number of household-level physical activity episodes (and the absolute percentage error or APE in parenthesis) from each of the ORL, RORL, and LC models. The mean predicted number of episodes from the LC model is closer to the true mean for nine of the twelve segments, as evidenced by the APE statistics. Finally, at the disaggregate level, the OSLLF value of the LC model is better than those of the other two models for nine of the twelve segments. All in all, the LC model outperforms the other two models in terms of data fit on the estimation sample as well as on sub-samples of the estimation sample. Besides the data fit superiority of the LC model, our results also show that the LC model provides more efficient estimates. In particular, the average of the trace of the covariance matrix of parameter estimates is 0.00136 for the LC model, 0.00664 for the RORL model estimated coefficients, and 0.00377 for the ORL model, indicating the higher standard errors (by 175-390 percent) from the RORL and the ORL models relative to the preferred LC model. 16 That is, the recognition of family dependence leads to substantially improved econometric efficiency. 16 The covariance matrix of the RORL model will provide higher values just because the coefficients estimated from the RORL model are larger in magnitude compared to the ORL and LC models (because the random effects in the RORL model increases the total error variance to a value beyond one, while the ORL and LC models normalize the error term variance to one). However, we normalized the coefficients in the RORL model by taking the weighted mean (across family types based on the shares of each family type) of the error variance, and computed the trace value of the implied covariance matrix of the normalized RORL coefficients. This allows an apples-to-apples comparison of the trace values across the ORL, RORL, and LC models. In the following presentation of the empirical results, we focus our attention on the results of the LC model that provides the best data fit. Table 2 presents the estimation results for the LC model. The coefficients provide the effects of variables on the latent propensity of an individual to participate in weekend out-of-home physically active episodes. For ease in presentation, we indicate the effects of independent variables separately on adults (i.e., parents) and children, though the estimation is undertaken for all individuals together, while also accommodating unobserved dependencies in the physical activity propensities of individuals within a family. 17 The first main row of Table 2 provides estimates of the threshold values (for parents and children). 
These do not have any substantive interpretation; rather, they simply serve to translate the latent propensity into the observed ordered categories of the number of physical activity participations. Individual Factors The effects of individual characteristics indicate the influence of the parents" age on both parents" and children"s physical activity propensities. In particular, we find important interaction effects of sex and age in the physical activity propensity of adults. This is interesting, since many earlier studies examine the impact of sex and age as two separate variables or focus only on women (see, for example, Weuve et al. 2004;King et al. 2005). However, our results suggest that there are important interaction effects between age and sex in adults" physical activity propensity. 18 In particular, our results indicate no statistically significant differences in weekend day physical activity propensity between male and female adults until the age of 35 years. On the other hand, most earlier studies indicate that male adults tend to be more physically active compared to female adults at almost any age (see, for example, Schulz and Schoeller 1994;Azevedo et al. 2007;Troiano et al. 2008). Further, according to our results, the propensity for weekend physical activity is lower for males who are 35 years of age or more relative to their younger counterparts (less than 35 years of age), while, for females in family households, the propensity is higher for individuals who are 35 years or more relative to their younger counterparts (less than 35 years of age). Hawkins et al. (2009) find a similar result of increased physical activity among women in middle ages (40-59 years) relative to their younger peers, but this holds only for Hispanic women in their sample. As importantly, the implication of our results is that women who are 35 years of age or over have a higher 17 In the rest of this paper, we will use the terms adults and parents interchangeably, based on the context of the discussion. 18 Note that we tried various threshold age values to capture the age-related effects in our specification, but the thresholds of 35 years and 45 years provided the best fit. This dummy variable specification was better than a continuous age specification and a specification that considered non-linear spline effects. For male adults, there was literally no difference in the coefficients for the "35-45" years and "over 45 years" age categories. So, we have a single coefficient for these two categories for males. For females, there were larger differences in the two age categories. Thus, even though not statistically different at the 0.05 level of significance, we retained different coefficients on the two age categories for females. Presence of physically inactive recreation centers (such as theaters, amusement parks, inactive clubs (e.g. video games or cards)) ---0.387 -1.39 24 propensity to participate in physically active episodes relative to their male counterparts. Of course, one should keep in mind that the measure of physical activity in our study (as in Dunton et al. 2008;Sener et al. 2009) is the number of physical activity bouts on a weekend day as reported in a general activity survey, while several earlier studies have considered time expended in physical activity over longer stretches of time (such as a week or a longer period of time) using focused physical activity surveys or objective measurements of physical activity. 
Overall, there is a clear need for a joint analysis of different dimensions of physical activity, including types of physical activity bouts, time investments and number of bouts, where bouts occurred and time-of-day of bouts, weekend day versus weekday patterns, as well as with-whom bouts occurred. Understanding the role of demographics and other variables on each and all of these physical activity dimensions can provide important information for effective intervention strategies. While the field is moving toward such comprehensive analyses of physical activity (see, for example, Dunton et al. 2008;Sener et al. 2008), the challenge is to obtain reliable data and develop methods to support the analysis of all these dimensions jointly. This is an important direction for future research in the physical activity area. Parental age also has an important effect on children"s physical activity propensity, though, once again, the effect is different for mothers and fathers. Children in families with young fathers (less than 35 years of age) have a higher physical activity propensity relative to children in families with older fathers, while children in families with young mothers have a lower physical activity propensity relative to children in families with older mothers. Taken together with the impact of parental age on parental physical activity, these results perhaps suggest that children explicitly model their parents" physical activity participation so that children in households with one or both physically active parents are more likely to be physically active. Overall, the results indicate that the highest levels of physical activity across all individuals in a family (parents and children) tend to be in two-parent families with young fathers (less then 35 years of age) and older mothers (35 years of age or more), while the lowest levels of physical activity are in two-parent families with the father over 35 years of age and the mother less than 35 years of age. Previous studies (see, for example, Davison et al. 2003) have suggested that mothers and fathers support and shape the physical activity participation of children in quite different ways, with fathers taking more of an explicit modeling role (a more hands-on physical activity-embracing role) and mothers taking more of a logistics support role (driving children to coaching camps and related physical activity opportunity locations). It would be interesting in future studies to examine if such differential support roles of parents in influencing children"s physical activity participation are somehow being manifested in the parental age-based effects found in this study. In any case, the results suggest that policy interventions aimed at increasing children"s physical activity levels could potentially benefit from targeting entire family units rather than targeting only children. The effect of the child"s age variable in Table 2 indicates that older children have a lower propensity to partake in physical activities. This is a result that is consistent with the findings of earlier studies (see, for example, Sallis et al. 2000;Sener et al. 2008). While there may be several reasons for this result, one reason may be that, as children get older, they gravitate more toward unstructured social activities rather than structured sports activities and unstructured free play (Copperman and Bhat 2007b). 
It is interesting to note here that we did not find any statistically significant effect of the child"s age on parents" physical activity propensity. Family-level demographics Finally, within the category of individual characteristics, adults who use the internet during the weekend day are less likely to partake in physical activity compared to adults who do not use the internet. 19 This result may be a reflection of overall sedentary inclinations or lesser time availability for physically active pursuits in the day (due to getting "sucked up" in social conversations or internet browsing or e-mail checking). While only marginally significant, this result emphasizes the need to balance the positive aspects of internet connectivity with the potentially detrimental effect on physical activity lifestyles (see also Kennedy et al. 2008). In addition to the variables discussed above, we also examined the effects of workrelated factors on physical activity propensity of family members. But we did not find any statistically significant impacts even at the 15 percent level. Physical Environment Factors In the group of physical environment factors, the first set of variables corresponds to season and activity day variables. The season variables suggest a lower propensity among adults to participate in weekend physical activities during the cold winter months relative to other times of the year (though this effect is not significant at the 0.05 significance level). Such seasonal variations have been found in other studies of adult physical activity participation (see Tucker and Gilliland 2007;Sener and Bhat 2007;Pivarnik et al. 2003). This may be attributed to the discomfort in participating in outdoor physically active pursuits during the winter season in the San Francisco Bay area, though this result is perhaps not transferable to areas with a rich set of winter sports activities such as skiing or skating. Interestingly, we did not find such similar season effects for children"s physical activity participation. The activity day variable indicates lower physical activity propensity among both parents and children on Sundays compared to Saturdays, presumably because of the time investment in religious and social activities on Sundays. Further, as indicated in some other studies, Sundays serve the purpose of "rest" days at home before the transition to school or work the next day (see, for instance, Bhat and Gossen 2004). We tested several transportation system and built environment variables, though most of these did not turn out to be statistically significant even at the 15 percent level of significance. 20 However, as shown under "Transportation system and built environment characteristics" in Table 2, both adults and children in households residing in areas with high bicycle facility density (as measured by miles of bicycle lanes per square mile in the residential traffic analysis zone) are more likely to participate in physically active pursuits relative to individuals in other households. Of course, this result (and the rest of the effects in the transportation system/built environment variable category) should be viewed with some caution since we have not considered potential residential self selection effects. 
That is, it is possible that highly physically active families self-select themselves into zones with built environment measures that support their active lifestyles (see Bhat and Guo 2007;Bhat and Eluru 2009 for methodologies to accommodate such self selection effects; combining such methodologies with the copula methodology proposed here for accounting for family clustering effects is left for future research). The "fraction of multifamily dwelling units" 26 variable effect reveals a higher level of physical activity among children residing in zones with a high percent of multifamily dwelling units. This may be a reflection of more opportunities for joint physical activity participation with peers and other individuals in neighborhoods with a high share of multifamily units, Finally, the presence of physically inactive recreation centers in a zone reduces the physical activity propensity of children residing in that zone (though this effect is only marginally significant). Social Environment Factors The family demographics effects in Table 2 (within the category of social environment factors) show that adults in two-parent families have a higher propensity to participate in physically active episodes over the weekend day relative to families with only one parent, perhaps because of increased opportunities for joint participation in out-of-home adult physical activity participation or because one of the parents can look after children at home while the other participates in physical activity. The results also indicate the higher physical activity propensity of parents with young children (less than five years of age) relative to parents of older children (five years or more). This may be related to the increased demands and reliance of older children on their parents for logistics and related support to participate in activities based on their own independent needs (see Stefan and Hunt 2006;CDC 2005;Eccles 1999), leaving less time for parents to pursue physical activities. Both parents and children in high income families (with an annual income of more than $90,000) have a higher propensity (than low income families) for physical activities, presumably due to fewer financial restrictions to travel to, and participate in, physical activities (see Parks et al. 2003;Day 2006). On the other hand, the results in Table 2 indicate a lower weekend physical activity participation propensity among individuals (adults and children) residing in their own houses relative to individuals residing in non-owned houses. Finally, as the number of motorized vehicles in the family increases, adults (but not children) are less likely to engage in physical activity episodes, while, as the number of bicycles in the household increases, children (but not adults) are more likely to engage in physical activity episodes. Of course, a caution here is that this may be an associative effect rather than a causal effect. That is, rather than fewer cars/more bicycles engendering more physical activity, it may be that households with physically active individuals choose to own fewer cars/more bicycles. The neighborhood race composition effects under neighborhood residential demographics do show a general trend of higher (lower) physical activity propensity among adults (children) residing in neighborhoods with a high share of Caucasian-American households (African-American households) relative to adults (children) residing in other neighborhoods. 
As indicated by Rai and Finch (1997), physical activity in the population has generally been a "white" domain. Gordon-Larsen et al. (2005) also suggest that the lower physical activity propensity among children in predominantly African-American neighborhoods may be because of poor neighborhood quality and lack of good recreational centers.
Dependence Effects
The estimated copula-based clustered ordered response model incorporates the jointness between physical activity episodes of family members not only through observed factors but also based on unobserved factors. As indicated earlier, the Clayton copula turned out to provide the best fit. The association parameter is parameterized in the Clayton copula as θ_q = exp(δ′s_q), where the δ vector is estimated. As indicated earlier, in our estimations, the s_q vector included three dummy variables: (1) family with both parents, (2) single mother family, and (3) single father family. The implied Clayton association parameter θ_q for these three family types and their corresponding standard errors (computed using the familiar delta method; see Greene 2003, page 70) are as follows: Family with both parents: 1.866 (0.155), single mother family: 2.158 (0.467), and single father family: 1.413 (0.478). All of these parameters are very highly statistically significant (relative to the value of zero, which corresponds to independence), indicating the strong dependence among the unobserved physical activity determinants of family members. Another common way to quantify the dependence in the copula literature is to compute Kendall's measure of dependence. 21 For the estimated association parameters, the values of Kendall's τ are (standard errors in parentheses): Family with both parents: 0.483 (0.021), single mother family: 0.519 (0.054), and single father family: 0.414 (0.082). The dependence form of the Clayton copula implies that the dependency in unobserved components across family members in the propensity to participate in physically active episodes is strong at the left tail, but not at the right tail. Figure 1 plots the dependency scatterplot of the relationship between the unobserved components ε_qi of physical activity propensity for any two individuals in the same family q, based on family type. 22 As can be observed, the results indicate that individuals in a family tend to have uniformly low physical activity (tighter clustering of data points at the low end of the physical activity spectrum), but there is lesser clustering of individuals in a family toward the high end of the physical activity propensity spectrum. In other words, the dependence among the physical activity propensities of family members is asymmetric, with a stronger tendency of family members to simultaneously have low physical activity levels than to simultaneously have high physical activity levels. Equivalently, it is easier for a family to lapse into a sedentary lifestyle because of the sedentary lifestyle of one of its members, while families do not come out of a sedentary lifestyle as easily just because of the active lifestyle of one of its members. From an education-based intervention standpoint to promote physical activity, the result that there is strong clustering within individuals in a family at the low end of the physical activity spectrum is encouraging.
It suggests that a cost effective strategy would be to identify individuals who have a low physical activity level, then trace the individual back to her/his household, and target the entire family unit, all of whose members are likely to have low physical activity levels. Such a strategy constitutes a good "capture" mechanism to bring educational campaigns to those who may benefit most from such campaigns. 23 More 21 See Bhat and Eluru (2009) for a description of this dependency measure. The traditional dependence concept of correlation coefficient ρ is not informative for asymmetric distributions, and has led statisticians to use concordance measures. Basically, two random variables are labeled as being concordant (discordant) if large values of one variable are associated with large (small) values of the other, and small values of one variable are associated with small (large) values of the other. This concordance concept has led to the use of the Kendall"s τ, which is in the range between zero and one, assumes the value of zero under independence, and is not dependent on the margins. For the Clayton copula, τ = θ / (θ + 2). 22 For instance, Figure 1(a) represents the dependency scatterplot of the relationship between the unobserved components (ε qi ) of physical activity propensity of two individuals (represented by each axis) residing in the same two-parent family. Note that the physical activity propensities * qi y are latent; thus, the scatterplots of ε qi are based on the implied copula dependence shape that leads to the best model fit to the observed data. In our case, this is the Clayton copula, with the shapes being a function of the estimated Kendall"s τ value. The dependency relationships presented in Figure 1 will be the same for any two individuals within the same family, since the association parameter θ q varies across families, not between members of the same family. 23 The statement here is not intended to be patronizing in any way to those who have low physically active levels. In fact, many individuals with low physically active levels may already know a 28 generally, the asymmetric "spillover" or "rubbing off" effect suggests that family-level information dissemination and targeting strategies to move away from sedentary lifestyles may be more effective than individual-level strategies to promote active lifestyles. The figures also show the higher (lower) dependency (especially at the lower end of the physical activity spectrum) for single mother (single father) families relative to two-parent families. This suggests a need to focus particularly on single mother households, and provide such families information regarding the potentially adverse effects of sedentary lifestyles. To summarize, the discussion above illustrates that the dependency effects within a family (in the propensity to participate in physical activity) are asymmetric and statistically significant. A model that does not consider dependence between individuals in a family (i.e., the simple ordered response model) and a model that accommodates only a restrictive normal dependency form are unable to consider flexible and asymmetric dependence patterns, while the copula-based approach is able to do so. These models also provide inconsistent estimates, as we discuss in the next section. 
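To make the mapping between the reported association parameters and the Kendall's τ values concrete, the short sketch below reproduces the τ values and delta-method standard errors quoted above (using τ = θ/(θ + 2) from footnote 21), and simulates Clayton-copula draws to illustrate the lower-tail clustering shown in Figure 1. This is an illustrative sketch, not the estimation code used in the paper; the only inputs taken from the text are the three θ_q estimates and their standard errors.

```python
import numpy as np

def kendalls_tau_clayton(theta):
    """Kendall's tau for the Clayton copula: tau = theta / (theta + 2)."""
    return theta / (theta + 2.0)

def delta_method_se_tau(theta, se_theta):
    """Std. error of tau via the delta method: d(tau)/d(theta) = 2 / (theta + 2)^2."""
    return se_theta * 2.0 / (theta + 2.0) ** 2

def sample_clayton(theta, n, rng):
    """Draw n pairs (u, v) from a bivariate Clayton copula (Marshall-Olkin algorithm):
    the Gamma(1/theta, 1) mixing variable produces lower-tail dependence."""
    w = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / w[:, None]) ** (-1.0 / theta)

# Association parameters (and standard errors) reported in the text.
families = {"two-parent": (1.866, 0.155),
            "single mother": (2.158, 0.467),
            "single father": (1.413, 0.478)}

rng = np.random.default_rng(0)
for name, (theta, se) in families.items():
    tau = kendalls_tau_clayton(theta)
    print(f"{name}: tau = {tau:.3f} (delta-method se ~ {delta_method_se_tau(theta, se):.3f})")
    u = sample_clayton(theta, 5000, rng)
    # strong clustering in the lower tail, weak clustering in the upper tail
    print("  P(both u,v < 0.1) =", np.mean((u < 0.1).all(axis=1)),
          "  P(both u,v > 0.9) =", np.mean((u > 0.9).all(axis=1)))
```

Running the first two lines of the loop recovers the τ values and standard errors reported above (0.483 ± 0.021, 0.519 ± 0.054, 0.414 ± 0.082), and the tail probabilities display the asymmetric, lower-tail clustering described in the text.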
Aggregate Impacts of Variables The parameters on the exogenous variables in Table 2 do not directly provide the magnitude of the effects of the variables on the number of out-of-home weekend physical activity participations. To do so, we compute the aggregate-level "elasticity effects" of each variable. In particular, to compute the aggregate-level elasticity of a dummy exogenous variable (such as the "male adult (father) between 35-45 years" variable), we compute the expected aggregate share of individuals participating in each number of activity episodes in the "base case" and the corresponding share in the "scenario case" after increasing the number of male individuals between 35-45 years by 10 percent (with an appropriate decrease in the base category of male individuals younger than 35 years). We then compute an effective percentage change in the expected aggregate share of individuals participating in each number of activity episodes due to a change from the base case to the scenario case. On the other hand, to compute the aggregate level elasticity effect of an ordinal variable (such as number of motorized vehicles), we increase (or decrease) the value of the variable by 1 and compute a percentage change in the expected aggregate share of individuals participating in each number of activity episodes. Finally, the aggregate-level "arc" elasticity effect of a continuous exogenous variable (such as fraction of African-American population) is obtained by increasing the value of the corresponding variable by 10 percent for each individual in the sample, and computing a percentage change in the expected aggregate share of individuals participating in each number of activity episodes. While the aggregate level elasticity effects are not strictly comparable across the three different types of independent variables (dummy, ordinal, and continuous), they do provide order of magnitude effects. substantial amount of statistics about the potential benefits of regular physical activity (to themselves and to society as a whole), and may be making informed choices. But, as in all promotional campaigns of services/products, one of the important tasks is to efficiently identify the population groups who are current "non-consumers" (i.e., those who do not partake much in physical activity levels in the empirical context of the current paper) and attempt to "convert" them. The statement should be viewed in this light. The results are presented in Table 3 for the standard ordered-response logit (ORL) model, the random effects ordered-response model (RORL) and the LC models. To reduce clutter, we simplify the effects from the ordered models to a simple binary effect of variables on the share of adults (parents) and children participating in physical activity episodes. Also, to obtain standard deviations of the estimated magnitude effects, we undertake a bootstrap procedure using 26 draws of the coefficients (on the exogenous variables) based on their estimated sampling distributions. The mean magnitude effect across these 26 draws is in the column labeled "Mean" and the standard deviation of the magnitude effect is in the column labeled "Std. Dev.". The numbers in the "mean" and "std. dev." columns may be interpreted as the mean and standard deviation estimates, respectively, of the percentage change in the share of adults and children participating in one or more physically active recreational episodes during the weekend day. 
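Before turning to the specific numbers, the base-case/scenario-case elasticity computation described above can be sketched as follows for a dummy variable. The model coefficients, thresholds, and data below are placeholders rather than the estimated model, so the output only illustrates the mechanics of the calculation (a 10 percent increase in the dummy category, with the corresponding decrease in the base category).

```python
import numpy as np

def outcome_shares(X, beta, thresholds):
    """Aggregate shares of individuals in each ordered outcome from an ordered logit."""
    F = lambda z: 1.0 / (1.0 + np.exp(-z))
    xb = X @ beta
    cum = np.column_stack([F(t - xb) for t in thresholds] + [np.ones_like(xb)])
    probs = np.diff(np.column_stack([np.zeros_like(xb), cum]), axis=1)
    return probs.mean(axis=0)

rng = np.random.default_rng(1)
n = 2000
# column 0: illustrative "male adult 35-45 years" dummy; column 1: another covariate
X = np.column_stack([rng.random(n) < 0.3, rng.normal(size=n)]).astype(float)
beta = np.array([-0.4, 0.2])            # placeholder coefficients
thresholds = [0.0, 1.0, 2.0]

base = outcome_shares(X, beta, thresholds)

# Scenario: move enough under-35 males into the 35-45 category to raise its count by 10%.
scenario_X = X.copy()
zeros = np.where(scenario_X[:, 0] == 0)[0]
n_switch = int(0.10 * (scenario_X[:, 0] == 1).sum())
scenario_X[rng.choice(zeros, size=n_switch, replace=False), 0] = 1.0
scen = outcome_shares(scenario_X, beta, thresholds)

print("percent change in aggregate share of each outcome:", 100 * (scen - base) / base)
```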
For instance, the first number "-11.94" with a standard deviation of "1.83" corresponding to the "male adult (father) between 35-45 years" variable in the ORL model indicates that the share of adults participating in active recreation decreases by about 12 percent (with a standard deviation of this effect being 1.83 percent) if the percentage of male adults between 35-45 years increases by 10 percent (with a corresponding decrease in the percentage of male adults below 35 years of age). On the other hand, the number "-13.51" with a standard deviation of "1.5" (under the "children" column for the ORL model) implies that the share of children participating in active recreation decreases by about 13.5 percent (with a standard deviation of 1.5 percent) if the percentage of male adults between 35-45 years increases by 10 percent. Similarly, the number "-2.21" with a standard deviation of "0.59" corresponding to the "child"s age" variable in the ORL model reflects that an increase by one year for all children leads to about a 2.2 percent decrease (with a standard deviation of 0.59 percent) in the share of children participating in physically active recreation, while the number "2.43" (standard deviation of 0.29) for the effect of the "Bicycling facility density" implies that the share of adults participating in active recreation increases by 2.43 percent due to a 10 percent increase in the miles of bicycle lanes per square mile in each residence zone. Several important observations may be made from Table 3. First, the physical environment variables (middle rows of the table) have a smaller (and inelastic) effect on physical activity participation relative to sociodemographic variables (the top and bottom rows of the table). This is consistent with other studies in the literature that indicate that, while the built environment may be engineered to increase physical activity, the ability to do so is rather limited (see, for instance, Copperman and Bhat 2007a;Goodell and Williams 2007;TRB 2005). Among the individual factors, the age of the father and mother have a substantial impact on the physical activity levels of all members of a family. In the group of family-level demographics, the presence of very young children and the number of motorized vehicles are important determinants of the physical activity levels of adults in a family, while the number of bicycles is an important determinant of the physical activity levels of children in a family. The important effects of vehicle ownership (for adults) and bicycle ownership (for children) catapults policies aimed at reducing motorized vehicle ownership and increasing bicycle ownership as potentially important ones to consider not only from the standpoint of reducing traffic congestion and greenhouse gas emissions, but also from the perspective of improving public health. However, the caveat mentioned earlier needs to be emphasized again; that is, this relationship of motorized vehicle ownership and bicycle ownership with physical activity may be an associative one rather than a causal one. Second, there is an impact of the fraction of Caucasian-American population in a zone on the physical activity levels of adults in that zone, though the reasons for this finding are not obvious. Is it that recreational opportunities and facilities (some of which are not captured in the built environment variables considered in this study) are better in zones with a high Caucasian-American population, as suggested by Gordon-Larsen et al. 
(2005, or are there other reasons for the differences? Additional qualitative investigation into this finding should provide valuable insights. Third, adding bicycle lanes and increasing bicycle facility density does increase physical activity levels in both adults and children, even though the usual caveat has to be added that the directionality of this influence needs to be examined carefully. In particular, whether this influence is a causal effect of bicycle facility density on physical activity levels or simply a self-selection effect of highly physically active-oriented individuals locating themselves in areas with good bicycle facilities is an open question (see Bhat and Guo 2007;Pinjari et al. 2008 for additional discussions of this issue). Finally, there are differences in the effects of variables between the ORL, RORL, and LC models. In the column corresponding to the LC model results, we identify those magnitude estimates from the LC model that are statistically different from the corresponding magnitude estimates from the ORL model (identified by a "+" next to the LC coefficient) and from the RORL model (identified by a "*" next to the LC coefficient). A 90 percent level of confidence is used to determine statistically significant differences. The bootstrap-based standard deviation estimates of coefficient estimates are used in the computation. As one can notice, there are eight variable effects that are statistically different between the LC and ORL models, and nine variable effects that are statistically different between the LC and the RORL models. This, combined with the better data fit of the LC model, points to the inconsistent effects from the ORL and RORL models. Overall, the results underscore the importance of testing different copula structures for accommodating family dependencies to avoid the risks of inappropriate covariate influences and inconsistent predictions of the number of out-of-home weekend physically active activity episodes. Interestingly, our results suggest that it is possible that not accommodating clustering effects at all (that is, ignoring dependency) could be better from the standpoint of estimating consistent variable elasticity effects relative to accommodating clustering effects using an inappropriate dependency surface. This observation is based on the fewer mean estimates in Table 3 that are significantly different between the LC and ORL models compared to between the LC and RORL models. Conclusion This paper presents a copula-based model to examine the physical activity participation levels of individuals, while also explicitly accommodating dependencies due to observed and unobserved factors within individuals belonging to the same family unit. In the copulabased approach, the model structure allows the testing of various dependency forms, including non-linear and asymmetric dependencies among family members. For instance, family members may be likely to have simultaneously low propensities for physical activity but not simultaneously high propensities, or high propensities together but not low propensities together. In the current paper, we focus on the Archimedean class of copulas, a class that is ideally suited to the clustering context where the level of dependence in the marginal random unobserved terms within a cluster is identical (i.e., exchangeable) across any (and all) pairs of individuals in the cluster. 
The measure of physical activity we adopt in the current study is the number of out-ofhome physical activity bouts or episodes (regardless of whether these bouts correspond to recreation or to walking/biking for utilitarian purposes) on a weekend day as reported by respondents in the 2000 San Francisco Bay Area Survey. Accordingly, we use an orderedresponse structure to analyze physical activity levels, while testing various multivariate copulas. The empirical results indicate that the Logistic-Clayton (LC) model specification provides the best data fit. That is, individuals in a family tend to have uniformly low physical activity, but there is lesser clustering of individuals in a family toward the high physical activity propensity spectrum. This result suggests that a cost effective "capture" mechanism to bring educational campaigns to those who may benefit most from such campaigns would be to identify individuals who have a low physical activity level, then trace the individual back to her/his household, and target the entire family unit, all of whose members are likely to have low physical activity levels. A number of individual factors, physical environment factors, and social environment factors are considered in the empirical analysis. The results indicate that physical environment factors are not as important in determining physical activity levels as individual and social environment factors. Also, decreased vehicle ownership (for adults) and increased bicycle ownership (for children) are important positive determinants of weekend physical activity participation. These results should be carefully examined as they might be useful in developing policies aimed at not only reducing traffic congestion (and its consequent benefits), but also increasing physical activity levels. In addition, individual factors (demographics, work characteristics, internet use at home), physical environment variables (season and activity-day variables, as well as built environment measures), and social environment factors (family-level demographics and residential neighborhood demographics) are other important determinants of physical activity participation levels. In closing, we have proposed a copula structure to accommodate clustering effects in ordinal response models, and applied the methodology to a study of physical activity participation levels of individuals as part of their families. A rich set of potential determinants of the number of out-of-home weekend day physical activity episodes is considered. However, we do not accommodate physical activity attitudes/beliefs and support systems of individual family members as they influence the physical activity levels of others in the family. This is because our data source does not collect such information. Future studies would benefit from including such family-level attitudinal/support variables, while also adopting a family-level perspective of physical activity as in the current study.
Use of hypochlorite solution as disinfectant during COVID-19 outbreak in India: From the perspective of human health and atmospheric chemistry The current situation in India regarding the COVID-19 pandemic is the worst since its first detection, in terms of the number of new cases per day, and it is now more than 10000 (as of June 16, 2020). In addition to several precautionary steps being taken (social distancing, use of masks, sanitizing hands etc.), spraying disinfectants (NaOCl solution) over several residential, official and commercial buildings, open areas, markets, public road transports, railways etchas been occurring on a regular basis. It has also come to the world’s attention that spraying of disinfectants has been especially used on people who are migrating from one part of the country to another. In this letter, I have made an attempt to discuss some major impacts of NaOCl on human health as well as atmospheric chemistry. NaOCl once emitted into the air reacts easily with the water vapor to form HOCl that further gets photo-dissociated into various reactive species. These reactive species have significant potentials to participate in various tropospheric chemistry of chlorine radical, ozone, S (IV) oxidation, hydrocarbon oxidation, modification of chloride salts etc. I have also recommended some important steps to be taken if spraying of NaOCl is deemed essential. INTRODUCTION The first Corona virus disease 2019 case was detected in India on 30 th January 2020. Subsequently, COVID-19 was declared a pandemic by the World Health Organization (WHO) on 11 th March 2020 At present (as of 16 th June 2020) the total number of active cases and deaths due to COVID-19 is 3,448,186 and 439,577 worldwide. In Indian context, the active cases and deaths are 153,178 and 9,900 respectively as of 16 th June 2020 (Ministry of Health and Family Welfare, Government of India). Government of India called a complete lockdown on 25 th March 2020 and its fourth phase completed on 31 st May 2020.The Government of India has now called the fifth-phase lockdown from 1 st June 2020 with the relaxations in several sectors and has named it Unlock-1. While the entire official and commercial activities were completely shutdown (other than the emergency services) in the first phase (till 15 th April 2020), the services started resuming slowly sector-wise during the later phases. It was observed that the local administration in different states of India started spraying disinfectants over various commercial and residential buildings on either side of the roads especially in the urban/sub-urban regions including the metro cities. The chemical used as the disinfectant is the alkaline solution of sodium hypochlorite or NaOCl. Surprisingly it was used to spray over people too. Several disinfectant tunnels were installed in many places whereby people were asked to walk through. National and regional newspapers also published news which told us that such spraying was done over the people including children when they migrated from one part of the country to another. However, later on, Directorate General of Health Services, Ministry of Health & Family Welfare, Government of India issued an advisory against spraying the disinfectant on people. But such spraying of NaOCl is being continued on a large scale over several official, residential and commercial buildings, streets, open areas, markets, shops, road transport, railways etc. The major concern is the concentration of such hypochlorite solution being used. 
The concentration has not been fixed and regulated by any administrative/regulatory boards and therefore it varies over a wide range. Based on a personal survey, NaOCl solutions of 5-10% are being used over most of the parts of the country, however highly concentrated solutions (> 10%) are also in use over some of the cities. Through the present letter, I have made an attempt to highlight the possible impacts of such excessive use of highly concentrated NaOCl solution/spray on human health. Although the spraying of NaOCl on people stopped by order of the Indian Government, we are still exposed to its vapor and are inhaling as the spraying over buildings, markets, transports etc. are still on (as of June 16, 2020). This could have adverse effect too. In addition, such high emissions of NaOCl into the air could also have various changes in terms of tropospheric chemistry. In this note, I discuss the probable effects of NaOCl spraying on human health and the atmospheric chemistry in urban areas. Health Effect of NaOCl NaOCl and its by-products HOCl and Cl 2 gas are well known as the respiratory irritants. The severe damage of the respiratory tract by NaOCl vapors could cause acute respiratory distress syndrome (ARDS) (Kuiper et al., 2005). Severe dermal injury caused by the high concentration of NaOCl solution (> 5%) has been reported by the studies performed on animals (Pashley et al., 1985). Concentrated NaOCl could severely damage the body tissues causing Necrosis (death of tissues). High concentration of NaOCl also causes the breakdown of muscle tissue, known as Rhabdomyolysis. Rhabdomyolysis releases a protein called myglobin into the blood affecting kidneys leading to acute kidney injury (AKI) (Bosch et al., 2009). HOCl and Cl 2 vapors cause the burning sensation in the esophagus (the tube connecting the throat and the stomach) and the swelling of mucous membrane medically known as edema of mucosa (Zwischenberger et al., 2002). The direct inhalation of HOCl or the breaking down of NaOCl into HOCl when mixed with plasma destroys the red blood cell causing Hemolysis (Vissers et al., 1998). HOCl reacts with the proteins and the lipids of our body and generates reactive oxygen species like superoxide and OH radicals. These species severely damage the renal epithelial cells causing AKI and other renal diseases (Nath and Norby, 2000). Role of NaOCl on Atmospheric Chemistry Reaction with H 2 O Vapor and Formation of Chlorine Radical NaOCl once emitted as aqueous droplets reacts with the atmospheric H 2 O vapor to form HOCl or hypochlorous acid. NaOCl + H 2 O = HOCl + NaOH HOCl is a weak acid and very unstable. It readily dissociates in the presence of sunlight. The high daytime maximum temperature (> 35°C) and intense solar insolation (> 500 watt m -2 ) in the country (India Meteorological Department) during the month of April and May 2020 could facilitate the photo-dissociation of HOCl. However, the dissociation in water depends on its pH too (Luke et al., 1992). The photo-dissociation of HOCl is one of the major routes to global tropospheric Cl radical production (Faxon and Allen, 2013). HOCl is photolyzed to form Cl radicals through the following reaction: HOCl + hʋ = Cl . + OH . Thus with the high concentrations of HOCl, photolysis reactions are the major sources of Cl radicals in the urban atmosphere. 
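As a rough illustration of the photolysis step discussed above, the following box-model sketch propagates a first-order photolytic decay of HOCl and tracks the cumulative Cl (and OH) radical production. The photolysis frequency and the initial HOCl loading are assumed, order-of-magnitude values chosen purely for illustration, not measured quantities, and all radical sinks are neglected.

```python
import numpy as np

# Illustrative (assumed) values -- not measured quantities:
j_HOCl = 3.0e-4        # photolysis frequency of HOCl, s^-1 (clear-sky order of magnitude)
hocl0 = 1.0e10         # initial HOCl number density, molecules cm^-3

t = np.linspace(0.0, 3600.0, 7)          # one hour, in 10-minute steps
hocl = hocl0 * np.exp(-j_HOCl * t)       # first-order decay: HOCl + hv -> Cl + OH
radicals = hocl0 - hocl                  # each photolysis event yields one Cl and one OH

for ti, hi, ri in zip(t, hocl, radicals):
    print(f"t = {ti:5.0f} s  [HOCl] = {hi:10.3e}  cumulative Cl (=OH) produced = {ri:10.3e}")

print("e-folding lifetime of HOCl ~ %.0f s under the assumed photolysis frequency" % (1.0 / j_HOCl))
```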
Chang and Allen (2006) have reported an HOCl emission flux of 10^4 kg day^-1 from the use of hypochlorite solutions in swimming pools, cooling towers and industrial point sources over the Houston area. The photolysis rates of HOCl under 30°, 50° and 70° solar zenith angles are 18600, 14100 and 5200 min^-1 (Carter, 2010). Wong et al. (2017) studied the impact of the use of commercial NaOCl solution on indoor air quality. They reported significant emissions of gaseous Cl2, HOCl, ClNO2, Cl2O and chloramines (NHCl2, NCl3), along with particulate chlorine. They also observed that the indoor illumination governed the formation and the concentrations of OH, Cl and ClO radicals from HOCl.
Reactions of Cl Radical with Hydrocarbons
The Cl radicals produced from the photolysis of HOCl can easily oxidize hydrocarbons (mainly the volatile organic compounds, VOC), forming alkyl radicals (Finlayson-Pitts, 1993; Atkinson et al., 2007):
Cl· + RH (hydrocarbon) = R· + HCl
The behavior of Cl radicals towards VOC oxidation is different from that of OH radicals. It was experimentally established that Cl radicals, even at concentrations more than one order of magnitude lower than those of OH radicals, bear an equivalent potential to oxidize VOCs (Wingenter et al., 1999). They studied several n-alkanes, alkynes, chloro- and bromo-alkanes, alkenes, etc., and showed that the ratios of OH and Cl rate constants (k_OH/k_Cl) ranged from < 1.0 (for methyl chloroform; 100% loss by OH) to > 300 (for ethane and tetrachloroethene; 70-75% loss by OH and 25-30% loss by Cl). Such oxidation of VOCs could in turn form secondary organic aerosols (SOA), enhancing the loading of total carbonaceous aerosols. The anthropogenic VOCs could be expected to be very low in the atmosphere during the COVID-19 lockdown period. However, biogenic VOCs should not experience any impact of the lockdown and hence could produce SOA significantly.
Reaction of Cl Radical with Tropospheric Ozone
Cl radicals produced in the atmosphere can readily react with O3 to form ClO radicals. The following reaction is considered to be the major removal pathway of tropospheric O3 in the absence of NOx:
Cl· + O3 = ClO· + O2
The above reaction between Cl radicals and ozone is of immense importance for regions where the NOx level is low. Under low-NOx conditions, O3 is destroyed by Cl radicals (Simpson et al., 2015). During the lockdown period due to the COVID-19 outbreak, anthropogenic emissions have been limited, especially the major sources of NOx, e.g., vehicular emissions. Therefore, we expect that under the low-NOx conditions, O3 will be reduced by Cl radicals. The Central Pollution Control Board of India, as well as several other ongoing studies (unpublished), are reporting very low NOx concentrations as well as high O3 over several places across the country. However, the regions with high use of hypochlorites (hence high Cl radicals) could have higher surface O3 depletion. The ClO radicals formed through the reaction shown above could combine with each other either to form Cl2 or to regenerate Cl radicals (Simpson et al., 2015).
Oxidation of S(IV) Compounds to form Sulphate Aerosols
The oxidation of S(IV) compounds (SO2·H2O, HSO3^- or SO3^2-) by H2O2 or O3 to form SO4^2- aerosols is well known (Finlayson-Pitts and Pitts, 2000). Recent studies (though started by Vogt et al., 1996) have also established the crucial role of HOCl in S(IV) oxidation. von Glasow et al.
(2002) have shown that HOCl could contribute 30% to the total SO4^2- aerosol production over the marine ecosystem. The HOCl oxidation of S(IV) compounds takes place through the following reactions:
HSO3^- + HOCl = 2H^+ + SO4^2- + Cl^-
SO3^2- + HOCl = H^+ + SO4^2- + Cl^-
The very low SO2 concentrations during the lockdown period (as reported by the Central Pollution Control Board of India) could be due not only to the low emissions but also to a high SO2 (gas)-to-SO4^2- (particle) conversion favored by HOCl.
RECOMMENDATIONS
• Spraying hypochlorite solution over people should be strictly prohibited.
• Proper caution should be taken during spraying of hypochlorite solution, e.g., use of masks both by the people doing the spraying and by the residents of the areas where the spraying is done. Masking of the eyes, nose and mouth could help protect from immediate irritation; however, it is difficult to mask the effect of Cl2 and HOCl.
• A public announcement needs to be made well before spraying the hypochlorite solution so that the residents of the concerned regions can stay at a safe place (at home) and mingling of people on the streets can be avoided.
• A regulatory board should be established to restrict the use of hypochlorite solution and to adhere to the safety regulations set by the WHO.
• If at all needed, spraying during the evening or after sunset could be a better option, so that the photolysis of HOCl, which further generates the toxic and reactive species that affect human health and change atmospheric chemistry, is inhibited. However, the emission of Cl2 by the surface reactions of NaOCl solution does not depend on the time of day but on the material the spraying is done on.
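Returning to the S(IV) oxidation channel written above (before the recommendations), the following sketch gives an order-of-magnitude, pseudo-first-order estimate of in-droplet sulfate formation by HOCl. The second-order rate constant and the dissolved concentrations are assumed illustrative values, not results from this study.

```python
import math

# All numbers below are assumed, order-of-magnitude values for illustration only.
k_HSO3_HOCl = 7.6e8      # M^-1 s^-1, assumed rate constant for HSO3^- + HOCl
hocl_aq     = 1.0e-9     # M, assumed dissolved HOCl concentration in the droplet
s_iv_0      = 1.0e-6     # M, assumed initial S(IV) (mostly HSO3^-) concentration

k_eff = k_HSO3_HOCl * hocl_aq            # pseudo-first-order rate, s^-1
tau = 1.0 / k_eff                        # characteristic S(IV) lifetime against HOCl oxidation

for t in (1.0, 10.0, 60.0):              # seconds
    s_iv = s_iv_0 * math.exp(-k_eff * t)
    sulfate = s_iv_0 - s_iv              # each oxidized S(IV) yields one SO4^2-
    print(f"t = {t:4.0f} s: S(IV) left = {s_iv:.2e} M, sulfate formed = {sulfate:.2e} M")

print(f"S(IV) lifetime against HOCl oxidation ~ {tau:.2f} s under the assumed values")
```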
A space–time spectral collocation algorithm for the variable order fractional wave equation The variable order wave equation plays a major role in acoustics, electromagnetics, and fluid dynamics. In this paper, we consider the space–time variable order fractional wave equation with variable coefficients. We propose an effective numerical method for solving the aforementioned problem in a bounded domain. The shifted Jacobi polynomials are used as basis functions, and the variable-order fractional derivative is described in the Caputo sense. The proposed method is a combination of shifted Jacobi–Gauss–Lobatto collocation scheme for the spatial discretization and the shifted Jacobi–Gauss–Radau collocation scheme for temporal discretization. The aforementioned problem is then reduced to a problem consists of a system of easily solvable algebraic equations. Finally, numerical examples are presented to show the effectiveness of the proposed numerical method. Spectral methods (Canuto et al. 2006;Saadatmandi and Dehghan 2011;Doha and Bhrawy 2012;Bhrawy and Zaky 2015b, c;Bhrawy et al. 2016a) have been widely used in many fields in the last four decades. In the early times, the spectral method based on Fourier expansion has been used in few fields such as a simple geometric field and periodic boundary conditions. Recently, they have been developed theoretically and used as powerful techniques to solve various kinds of problems. Based on the accuracy and exponential rates of convergence, spectral methods have an excellent reputation when compared with others numerical methods. The expression of the problem solution as a finite series of polynomials/functions is the major step of all types of spectral methods. Then, the coefficients of this expansion will be chosen such that the absolute error is diminished as well as possible. The spectral collocation method (Canuto et al. 2006;Bhrawy and Alofi 2013;Gu and Chen 2014;Bhrawy and Abdelkawy 2015;Bhrawy 2016a) is a specific type of spectral methods, that is more applicable and widely used to solve almost types of differential Tatari and Haghighi 2014), integral (Bhrawy et al. 2016c; Rahmoune 2013), integro-differential (Jiang and Ma 2013;Ma and Huang 2014) and delay differential Reutskiy 2015) equations. While, the numerical solution will be enforced to almost satisfy the partial differential equations (PDEs) in spectral collocation method. In other words, the residuals may be permitting to be zero at chosen points. Wei and Chen (2012) proposed Legendre spectral collocation methods for pantograph Volterra delay-integro-differential equations. Bhrawy and Alofi (2012) introduced the spectral shifted Jacobi-Gauss collocation method for solving the Lane-Emden type equation. proposed the spectral collocation algorithm to solve numerically some wave equations subject to initial-boundary nonlocal conservation conditions in one and two space dimensions. Bhrawy (2016b) proposed Jacobi spectral collocation method for solving multi-dimensional nonlinear fractional sub-diffusion equations. The aim of this paper is to find the numerical solution of the space-time variable order fractional wave equation subject to initial-boundary conditions. The wave equation is an important second-order partial differential equation for the description of waves as they occur in physics such as sound waves, light waves and water waves. Variable order wave equation appears in areas such as acoustics, electromagnetics, and fluid dynamics. 
This paper extends the SJ-GL-C and SJ-GR-C schemes in order to solve the space-time variable order fractional wave equation. The proposed collocation scheme is investigated for both temporal and spatial discretizations. The SJ-GL-C and SJ-GR-C are proposed, with a suitable modification for treating the boundary and initial conditions, for spatial and temporal discretizations. This treatment, for the conditions, improves the accuracy of the scheme greatly. Therefore, the space-time variable order fractional wave equation with its conditions is reduced to system of algebraic equations which is far easier to be solved. Finally, numerical examples with comparisons lighting the high accuracy and effectiveness of the proposed algorithm are presented. The present paper is presented as follows. The definitions of the fractional calculus and some properties of Jacobi polynomials are introduced in "Preliminaries" section. The spectral collocation methods for the space-time variable order fractional wave problem subject to initial-boundary conditions are presented in "Jacobi collocation method" section and then illustrated with two examples in "Numerical examples" section. The "Conclusion" is included in the last section. Preliminaries We first recall some definitions and preliminaries of the variable-order fractional differential and integral operators and some knowledge of orthogonal shifted Jacobi polynomials that are most relevant to spectral approximations. Definition 1 The Riemann-Liouville and Caputo differential operators of constant order γ , when n − 1 ≤ γ < n, of f(t) are given respectively by, where Ŵ(.) represents the Euler gamma function. Definition 3 The Caputo variable-order fractional differential operator is given by It is important to note here that the constant-order fractional derivative can be seen as a special case of the variable-order fractional derivative. These two definitions are related by the following relation: The Jacobi polynomials, denoted by P (θ,ϑ) j (x)(j = 0, 1 . . .); θ > −1, ϑ > −1 and defined on the interval [−1, 1] are generated from the three-term recurrence relation: The formula that relates Jacobi polynomials and their derivatives is The orthogonality condition is Let the shifted Jacobi polynomials P , then they can be obtained with the aid of the following recurrence formula: The analytic form of the shifted Jacobi polynomials P (θ,ϑ) L,i (x) of degree i is given by and the orthogonality condition is The shifted Jacobi-Gauss quadrature is commonly used to evaluate the previous integrals accurately. For any φ ∈ S 2N +1 [0, L], we have where S N [0, L] is the set of polynomials of degree less than or equal to N , x (θ,ϑ) G,L,j (0 ≤ j ≤ N ) and ̟ (θ,ϑ) G,L,j (0 ≤ j ≤ N ) are used as usual the nodes and the corresponding Christoffel numbers in the interval [0, L], respectively. are the zeros of P (θ,ϑ) L,N +1 (x) and the weights where while the nodes and the corresponding Christoffel numbers in the shifted Jacobi Gauss-Radau (SJ-GR) quadrature are given by , may be expressed in terms of shifted Jacobi polynomials as where the coefficients c j are given by The qth derivative of P (θ,ϑ) L,k (x) can be written as Accordingly, we can calculate the Caputo variable order derivative of shifted Jacobi polynomials from Jacobi collocation method In this section, we introduce a numerical algorithm extends the SJ-GL-C and SJ-GR-C schemes in order to solve the space-time variable order fractional wave equation. 
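The building blocks described in the preliminaries can be checked numerically with standard library routines. The sketch below evaluates the shifted Jacobi polynomials through the map x → 2x/L − 1 and verifies the orthogonality relation with a Gauss-Jacobi quadrature. Note that it uses plain Gauss-Jacobi nodes from SciPy rather than the Gauss-Lobatto/Gauss-Radau variants employed in the collocation scheme; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

L = 1.0                      # the shifted polynomials live on [0, L]
theta, vartheta = 0.5, 0.5   # illustrative Jacobi parameters

def shifted_jacobi(i, x):
    """Shifted Jacobi polynomial P^(theta,vartheta)_{L,i}(x) = P^(theta,vartheta)_i(2x/L - 1)."""
    return eval_jacobi(i, theta, vartheta, 2.0 * x / L - 1.0)

# Gauss-Jacobi nodes/weights on [-1, 1]; the extra (L/2)^(theta+vartheta+1) factor comes
# from mapping the weighted integral on [0, L] back to [-1, 1].
nodes, weights = roots_jacobi(20, theta, vartheta)
x = L * (nodes + 1.0) / 2.0
w = (L / 2.0) ** (theta + vartheta + 1) * weights

# Discrete check of the orthogonality relation for the shifted polynomials:
# off-diagonal entries should vanish, diagonal entries give the norms h_i.
for i in range(4):
    for j in range(4):
        val = np.sum(w * shifted_jacobi(i, x) * shifted_jacobi(j, x))
        print(f"<P_{i}, P_{j}> = {val: .6f}", end="  ")
    print()
```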
The collocation points are selected at the SJ-GR and SJ-GL interpolation nodes for temporal and spatial variables, respectively. The core of the proposed method consists of discretizing the space-time variable order fractional wave equation to create a system of algebraic equations of the unknown coefficients. This system can be then easily solved with a standard numerical scheme. In particular, we consider the following space-time variable order fractional wave equation with the initial conditions and the boundary conditions where B(x, t) > 0, g 0 (x), g 1 (x), g 2 (t) and g 3 (t) are given functions, while f(u, x, t) is a source term. We choose the approximate solution to be of the form . The approximation of the temporal partial derivative D t u(x, t) can be easily computed as follows A straightforward calculation shows that the fractional derivative of variable order of the approximate solution can be computed by where Now, adopting (18)-(21), enable one to write (15) in the form: while the numerical treatments of initial and boundary conditions are In the proposed shifted Jacobi collocation method, the residual of (15) is set to be zero at (N − 1) 2 of collocation points. Moreover, the initial-boundary conditions in (23) will be collocated at collocation points. Firstly, we have (N − 1) 2 algebraic equations for (N + 1) 2 unknowns of û i,j where and also we have 2(N − 1) algebraic equations which will be obtained due to the initial conditions (21) Furthermore, using the boundary conditions, we have 2(N + 1) algebraic equations Combining Eqs. (24), (26) and (27), we obtain The previous system of nonlinear algebraic equations can be easily solved. After the coefficients a i,j are determined, it is straightforward to compute the approximate solution u N ,M (x, t) at any value of (x, t) in the given domain from the following equation Numerical examples This section reports two numerical examples to demonstrate the high accuracy and applicability of the proposed method. We also compare the results given from our scheme and those reported in the literature. The comparisons reveal that our method is very effective and convenient. with the initial-boundary conditions The exact solution of this problem when α(x, t) = β(x, t) = 2 is given by Sweilam and Assiri (2015) proposed the non-standard finite difference (NSFD) method to solve this problem with choices of N = 1000 and M = 125. In Table 1, we contrast our numerical results based on absolute errors obtained using the proposed algorithm for three choices of the shifted Jacobi parameters at N = 8 with the corresponding results of NSFD method (Sweilam and Assiri 2015). In Table 2, we contrast our results based on maximum absolute errors (MAEs) obtained by the present method for three choices the shifted Jacobi parameters at N = 8. From the results of this example, 4.86067 × 10 −13 6.82121 × 10 −13 7.95808 × 10 −13 3.4818 × 10 −3 8 2.00448 × 10 −11 5.00222 × 10 −12 6.36646 × 10 −12 9.0641 × 10 −5 we observe that the approximate solution obtained by our method is more better than those obtained in Sweilam and Assiri (2015). Figure 1 displays the space-graph of the numerical solution of problem (1) with N = 8, and θ 1 = θ 2 = ϑ 1 = ϑ 2 = 0. While, Fig. 2 compares graphically the curves of numerical and exact solutions of problem (1) for the different values of t at N = 8, and θ 1 = θ 2 = ϑ 1 = ϑ 2 = 1 2 . Moreover, we represent in Figs. 
3 and 4 the absolute error curves obtained by the present method at t = 0.5 and x = 5 with N = 8 and θ1 = θ2 = ϑ1 = ϑ2 = 0, respectively. This demonstrates that the proposed method leads to an accurate approximation and yields exponential convergence rates.
Conclusions
We presented a collocation method to achieve an accurate numerical solution for the variable-order fractional wave problem subject to initial-boundary conditions. One of the main advantages of the present technique is that a fully spectral method was implemented for the time and space variables by using SJ-GR-C and SJ-GL-C approximations, respectively. The problem with its conditions was then reduced to an algebraic system. A notable feature of the present scheme is that, with only a few terms at the SJ-GL and SJ-GR collocation points, full agreement between the approximate and exact solutions was achieved. Through the numerical examples, and especially the comparison between the obtained approximate solution and those obtained by other approximations, we demonstrate the validity and high accuracy of the present method.
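For completeness, once the algebraic system for the coefficients a_ij has been solved, evaluating the approximate solution is simply a matter of summing the double shifted Jacobi expansion used in the collocation scheme above. The sketch below shows only that evaluation step; the coefficient matrix is a random placeholder standing in for the solved coefficients, and the domain sizes and Jacobi parameters are illustrative.

```python
import numpy as np
from scipy.special import eval_jacobi

L, T = 10.0, 1.0                            # space and time domains [0, L] x [0, T] (illustrative)
th1, vt1, th2, vt2 = 0.0, 0.0, 0.0, 0.0     # Jacobi parameters (Legendre case)
N = 8

# Placeholder coefficient matrix: in the actual method, a[i, j] comes from
# solving the collocation system described above.
rng = np.random.default_rng(0)
a = rng.normal(size=(N + 1, N + 1))

def u_approx(x, t):
    """u_N(x, t) = sum_i sum_j a_ij P^(th1,vt1)_{L,i}(x) P^(th2,vt2)_{T,j}(t)."""
    Px = np.array([eval_jacobi(i, th1, vt1, 2.0 * x / L - 1.0) for i in range(N + 1)])
    Pt = np.array([eval_jacobi(j, th2, vt2, 2.0 * t / T - 1.0) for j in range(N + 1)])
    return Px @ a @ Pt

print(u_approx(5.0, 0.5))
```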
All-electrical spectroscopy of topological phases in semiconductor-superconductor heterostructures Semiconductors in the proximity of superconductors have been proposed to support phases hosting Majorana bound states. When the systems undergo a topological phase transition towards the Majorana phase, the spectral gap closes, then reopens, and the quasiparticle band spin polarization is inverted. We focus on two paradigmatic semiconductor-superconductor heterostructures and propose an all-electrical spectroscopic probe sensitive to the spin inversion at the topological transition. Our proposal relies on the indirect coupling of a time-dependent electric field to the electronic spin due to the strong Rashba spin-orbit coupling in the semiconductor. We analyze within linear response theory the dynamical correlation functions and demonstrate that some components of the susceptibility can be used to detect the nontrivial topological phases. I. INTRODUCTION There has been a growing interest in the condensed matter scientific community in exploring topological superconducting phases supporting Majorana bound states, partially motivated by the prospects of a quantum computer. Majorana bound states are quasiparticles in condensed matter theory which are their own antiparticle and possess non-Abelian statistics. They may appear unpaired as zero-energy excitations, energetically separated from the quasiparticle continuum by a superconducting gap. 1 Within the restricted subspace formed by a collection of such Majorana bound states, a quantum computer would perform calculations through braiding operations. 2 So far, several physical platforms have been proposed to realize Majorana bound states: topological insulatorsuperconductor, 3 semiconductor-superconductor (SM-SC) heterostructures, [4][5][6] or magnetic atom chains. [7][8][9] The growing number of theoretical proposals and avenues investigated in experiments was surveyed in several recent reviews. [10][11][12][13][14][15][16] The focus in our paper is on the, maybe, most promising condensed matter candidate for the realization of Majorana bound states: the SM-SC heterostructures. [17][18][19] The first experimental signatures of Majorana bound states were obtained by measuring a zero-bias peak in the tunneling conductance. 20 Other proposed measurements are to detect Majoranas using fractional Josephson effect, 1,21-24 and in current correlations. [25][26][27][28] The detection of Majorana states remains still open to debate as the signal sought from them may be due to low-energy Andreev bound states trapped in the hetereostructure due to smooth confining potentials. [29][30][31][32][33][34][35] Another fruitful alternative is to detect the topological phases in SM-SC heterostructures by indirect means, using, for example, bulk measurements. Signatures of the topological transitions have been theoretically shown to arise in the electromagnetic response of the system to weak time-dependent magnetic fields 36,37 or in its entanglement spectrum. 38 More recently, it was realized that at the topological phase transition the spin polarization of electronic bands is inverted, a feature that might be exploited as a reliable marker to discriminate the topological phases. 
39 Further studies have sought to make use of this observation to devise detection methods using the local measurement of spin in the electronic bands at the transition, 40 in the generation of supercurrents, 41 or using spin-selective measurements via quantum dots connected to the heterostructure. 42,43 In this paper, we propose an alternative detection method which relies on all-electrical probes of the system's bulk electronic structure, coupled with optical detection. Our analysis carries on two SM-SC heterostructures, where the existence of Majorana phases has been proposed-a one-dimensional model 4,5 and a twodimensional one 6 (see Fig. 1). Both setups aim to realize arXiv:2002.06092v1 [cond-mat.mes-hall] 14 Feb 2020 an effective spinless p-wave superconductor with topological properties 1 in the low-energy sector, near the Fermi energy. The basic ingredients are a magnetic field, which removes the Kramers degeneracy of the electronic states, while a strong spin-orbit coupling breaks the spin conservation to allow tunneling of Cooper pairs from a neighboring s-wave superconductor into the semiconductor. This induces a superconducting gap in the semiconductor, creating an effective topological superconductor. The magnetic field modifies the spectrum, acting against the induced superconducting gap, allowing to close the spectral gap for a critical magnetic field, at zero wave vector in the proximitized semiconductor. Above the critical field, the gap reopens, and the system enters a topological nontrivial phase where Majorana bound states are expected to form. This basic physical picture readily allows one to understand the band spin inversion at the topological transition. 39 At the zero wave vector, near the topological transition, the spin-orbit term is dominated by the Zeeman field which sets the band spin orientation either parallel or antiparallel to it. Due to low-energy particlehole symmetry, opposite-energy quasiparticle bands have opposite spin polarization. Since at the topological transition the gap closes and the bands cross each other, while remaining spin polarized, the spin polarization of these bands is inverted between the trivial and nontrivial phase (see e.g. Fig. 2). This picture is limited to a region around k = 0, since spin-orbit coupling acts at finite momenta to rotate the electronic spins. The challenge of the present paper is to find signatures of the electronic spin inversion at the transition. We propose an all-electrical detection of spin polarization in the electronic bands in semiconductorsuperconductor heterostructures which is capable to discriminate the phases near the topological transition. Our proposal is to use techniques similar to the electronic spin resonance (ESR) spectroscopy, where spin relaxation is measured using microwave-frequency magnetic fields. 44 However, in the SM-SC heterostructures, the proximity to a superconductor renders such methods not ideal. The present all-electrical scheme relies on the indirect coupling of the electric field to the electronic spin due to the strong spin-orbit coupling present in semiconductors such as InSb or InAs, which are regularly used in building the SM-SC heterostructures. The electric fields have been shown to control the Landé g-factor in semiconductor devices 45 and, moreover, time-varying electric field may be used to dynamically modulate the g-factors as a means to control quantum spins. 46-53 Moreover, a time-dependent spin-orbit coupling has been predicted to generate spin currents. 
[54][55][56][57] In our proposal, the electric fields modulate the strength of the Rashba spin-orbit coupling in the material. Then, under the electric field, resonant transitions are induced between the low-energy quasiparticle bands, leading to an increase in the spin polarization in either trivial or nontrivial phases. Nevertheless, since the spin polarization is opposite in the two phases, longitudinal spin-relaxation processes near k = 0 are either favored or unfavored by the external magnetic field. We show that in the topological nontrivial phase the quasi-electrons have spins aligned with the magnetic field, and therefore they relax by emitting photons, while in the trivial phase, they relax by absorbing photons. This allows the use of optical spectroscopic probes to detect the topological phases. The associated response function χ(ω), defined below in Eq. (8) and in particular its imaginary part χ (ω), which is related to spin relaxation processes, encodes these features and distinguishes on which side of the topological transition is the system. We call such measurement Rashba spectroscopy, sharing ideas from a larger group of experimental methods developed under the name of g-tensor modulated resonance spectroscopy. 47 The paper is organized as follows. Sec. II introduces the two Hamiltonian models for the semiconductorsuperconductor heterostructures, in one (1D) and two dimensions (2D). Sec. III discusses the detection of band spin polarization using electrical modulation of the Rashba spin-orbit coupling. The section defines a response function modeling the experiment and further determined within this paper. Sec. IV analyzes the system response at vanishing chemical potential, where a complete analytical solution is available. The results are extended in the next Section V in a perturbation theory in small spin-orbit coupling near the topological transition. The perturbation theory yields also the spin-polarization of the electronic bands. Sec. VI generalizes the above results for any system parameters in the low-frequency regime, while an arbitrary frequency formula for the response function is relegated to Appendix A. Finally, Sec. VII sums up the conclusions of our study. II. MODELS In this paper we investigate two paradigmatic models, originally proposed in Refs. 4-6, as condensed matter platforms for the realization of Majorana bound states. Because of its relative simplicity, the 1D model has been the subject of intense experimental scrutiny. 20,[58][59][60][61][62][63][64] To treat the models on equal footing, we assume in both cases that the semiconductor is deposited on a superconductor in xy plane (see Fig. 1). The proximity to the superconductor induces superconducting correlations in the semiconductor, characterized by the order parameter ∆. The semiconductors are also characterized by a strong Rashba spin-orbit coupling due to broken inversion symmetry along the z axis. Finally, the time-reversal symmetry is broken by an (effective) magnetic field which gives rise to a Zeeman spin-splitting between the electronic bands in the semiconductor. In the 1D model, the external magnetic field is applied along the semiconducting wire. In the 2D setup there is only an effective Zeeman field perpendicular to the semiconducting plane induced by a magnetic insulator placed under the semiconductor. The effective Hamiltonian for both semiconductors reads with the Nambu field operator defined as Ψ † (k) = (ψ † k↑ , ψ † k↓ , ψ −k↓ , −ψ −k↑ ). 
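The displayed equation that should follow "reads" above appears to have been dropped during extraction. A hedged reconstruction of the conventional second-quantized Bogoliubov-de Gennes form, consistent with the Nambu spinor quoted in the text (the 1/2 prefactor is the usual convention, not a verbatim copy of the paper's equation), is:

```latex
% Hedged reconstruction: conventional BdG form implied by the quoted Nambu spinor.
\begin{align*}
  H &= \tfrac{1}{2}\sum_k \Psi^\dagger(k)\,\mathcal{H}(k)\,\Psi(k),
  &
  \Psi^\dagger(k) &= \bigl(\psi^\dagger_{k\uparrow},\;\psi^\dagger_{k\downarrow},\;
                            \psi_{-k\downarrow},\;-\psi_{-k\uparrow}\bigr).
\end{align*}
```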
We use the convention that The Bogoliubov-de Gennes Hamiltonian for the 1D model reads 4,5 where, without the loss of generality, we choose an uniform induced order parameter ∆ > 0. In the two-dimensional model, the order parameter ∆ has a vortex structure, and goes to zero in the middle of the annular structure shown in Fig. 1(b). Our focus is on the bulk excitation spectrum which is determined far away from the vortex, where the order parameter is assumed to have an uniform amplitude ∆ > 0. The Hamiltonian for the system under the above approximation reads 6,65 In both models, E Z denotes the Zeeman energy, α, the Rashba spin-orbit coupling strength, and µ the chemical potential. The Pauli matrices σ i act in spin space, while τ i , with i = x, y, z, in particle-hole space. We use the convention that τ i σ j ≡ τ i ⊗ σ j , and absence of a Pauli matrix in the Hamiltonian implies the presence of the identity matrix in the respective space. Despite the somewhat different physical realization, the models share many attributes, allowing throughout a parallel treatment and leading to similar conclusions. Formally, the 2D model reduces to the 1D model under a rotation in spin space and confinement of electron motion along the x axis. Since the Rashba spin-orbit vector is orthogonal to the effective magnetic field, the energy spectrum is determined analytically. In both models there are two positive-energy quasiparticle bands with two negative-energy bands −E ± (k), with ξ denoting the kinetic energy ξ = 2 k 2 /2m − µ. The band structures undergo a topological transition when the topological gap at k = 0, closes and reopens under variation of system parameters. The topological nontrivial phases are realized for with zero energy Majorana bound states localized either at the 1D wire edges or, in 2D model, in the superconductor vortex. Near the topological phase transition at k = 0, the spin-orbit coupling term is dominated by the Zeeman field which polarizes the quasiparticle bands parallel or antiparallel to it. Due to particle-hole symmetry, bands with opposite energies display opposite spin polarization. A more detailed discussion of the spin polarization near the transition is presented in Sec. V. To get a sense of the units involved, we take throughout an InSb semiconductor with the material parameters: 20 gfactor ∼ 50, effective mass ∼ 0.015 m e , α = 20 nm · meV, and induced superconducting gap ∆ = 0.25 meV. We investigate systems that exhibit spectral gaps on the order of ∆ top ∼ 0.05 meV, which puts the frequency in the range ω ∼ 75 GHz. Therefore the systems could be probed in the microwave regime. For the sake of simplicity we take throughout a similar set of parameters in the 2D model and present all energies in units of ∆. III. RASHBA SPECTROSCOPY AND THE RESPONSE FUNCTION To probe the system, a time-dependent electric field is generated perpendicular to the superconductor δE(t)ẑ, for example, by laser pulses and microwaves exciting a voltage gate connected to the proximitized semiconductor. 47,50 Alternatively, one can imagine modulating a perpendicular electric field applied directly to the system. The electric field generates an effective in-plane magnetic field which couples with the spins in the semiconductor. 
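The displayed equations for the two model Hamiltonians of Sec. II, the quasiparticle bands, and the topological criterion are missing from the extracted text above. The following is a hedged reconstruction based on the standard forms of these models (Refs. 4-6), written in one common Nambu convention; the paper's own matrix structure may differ by a particle-hole basis rotation, so this should be read as an aid to the text rather than a verbatim restoration. In the 2D model, k^2 stands for |k|^2.

```latex
% Hedged reconstruction of the standard model Hamiltonians and spectrum (Refs. 4-6);
% convention assumed, not copied verbatim from the source.
\begin{align*}
  \mathcal{H}_{\mathrm{1D}}(k) &= \xi_k\,\tau_z + \alpha k\,\tau_z\sigma_y
        + E_Z\,\sigma_x + \Delta\,\tau_x, \\
  \mathcal{H}_{\mathrm{2D}}(\mathbf{k}) &= \xi_k\,\tau_z
        + \alpha\,(k_x\sigma_y - k_y\sigma_x)\,\tau_z + E_Z\,\sigma_z + \Delta\,\tau_x, \\
  \xi_k &= \frac{\hbar^2 k^2}{2m} - \mu, \\
  E_\pm^2(k) &= \xi_k^2 + \alpha^2 k^2 + E_Z^2 + \Delta^2
        \pm 2\sqrt{\xi_k^2\,\alpha^2 k^2 + \xi_k^2 E_Z^2 + E_Z^2\Delta^2}, \\
  \Delta_{\mathrm{top}} &= E_Z - \sqrt{\Delta^2 + \mu^2},
        \qquad \text{nontrivial phase for } E_Z > \sqrt{\Delta^2+\mu^2}.
\end{align*}
```

With these expressions the spectral gap at k = 0 is |Delta_top| = |E_Z - sqrt(Delta^2 + mu^2)|, matching the gap closing and reopening described in the text.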
This yields a time-dependent modulation of the Rashba spin-orbit coupling of the form, In general, since the spin-orbit coupling strength depends linearly on the external electric field in Rasbha nanowires, 66 its time modulation remains linear in the electric field, δα(t) κδE(t). This allows us to investigate the system using perturbative approaches. Near the topological transition, i.e. ∆ top = 0, the effect of spin-orbit coupling is small and the Zeeman field polarizes the quasiparticle bands along its direction. The time-dependent perturbation creates quasiparticle excitations, which change their spin polarization. In linear response theory, the change in the polarization is measured by the susceptibility χ jR (t), with j ∈ {x, y, z}. More exactly, the response function χ jR (t) measures the indirect coupling of the external electric field to the electronic spin σ j due to the strong Rashba coupling present in the semiconductor. The expectation values are computed in a basis of the Hamiltonian eigenstates H(k)|nk = E n (k)|nk , with n a band index. The spin polarization of a k state is therefore denoted σ j ≡ nk|σ j |nk The energy provided by the electric field causes resonant optical (momentum-conserved) transitions for electrons between the quasiparticle bands, and it should be in the microwave range, according to our estimates. These transitions are detected in what we call Rashba spectroscopy, by some of the components of χ jR (ω), in analogy to the ESR spectroscopy. The latter involves measuring spin-spin correlations functions, due to direct coupling of external ac-magnetic fields to the electronic spin. In contrast, Rashba spectroscopy measures the response caused by the coupling between the modulated Rashba spin-orbit term to the electronic spin. Therefore the dynamical long-wavelength response function reads: with k defined accordingly either to the 1D or the 2D model, and [·, ·] is a commutator. Similar ideas, in the context of quantum dot spin control in semiconductor quantum wells, have been experimentally put forward under the name of g-tensor modulation resonance spectroscopy. 47 The response function is invariant under time translations, and therefore a Fourier transform yields readily its frequency-dependent expression × mk|σ j |nk nk|H so /α|mk ω + E m (k) − E n (k) + iδ with δ/∆ → 0 + . The summation is over the four quasiparticle bands, and the momentum integration carries over the available momentum states. The Fermi-Dirac function is f n (k) = [e βEn(k) + 1] −1 , with β = 1/k B T , the inverse temperature. Alternatively, the dynamical correlator χ jR (ω) is calculated within the Matsubara Green's function formalism with G(k, iν) = 1/(iν − H), the superconducting Green's function at fermionic Matsubara frequencies ν. While the response function is in general complex, χ = χ + iχ , we focus on its imaginary part χ (ω), which carries information about the spin relaxation processes, and is sensitive to the spin polarization of the quasiparticle bands. In particular, we show that only the components of χ jR (ω) along the Zeeman field discriminate between trivial and nontrivial superconducting phases at the topological transition. According to the effective magnetic field orientation chosen in Eqs. (2) and (3), they are denoted (11) The other components are vanishingly small near the phase transition, for frequencies on the scale of the topological gap ω ∼ |∆ top |, since in this limit, the bands are almost completely polarized by the Zeeman field. IV. 
ANALYTICAL SOLUTION FOR µ = 0 To analyze the response function χ d (ω), it is useful to investigate the limit µ = 0 where closed-form solutions are possible. Later, we demonstrate that the main features captured in this limit carry over to more general choices of parameters. Let us focus on the low-energy physics near the Fermi energy at E = 0. The lower band E − (k), given in Eq. (4), displays minima both at k = 0 and k 2mα/ 2 (for strong spin-orbit strength). The induced superconducting correlations open a superconducting gap ∼ ∆ at finite momenta k. In contrast, the spectrum at k = 0 is defined by the topological gap Eq. (5). The closing and reopening of ∆ top marks a transition from the topological trivial phase (∆ top < 0) to the nontrivial phase supporting Majorana bound states (∆ top > 0). Our analysis is concerned in the parameter regime around the phase transition point, where |∆ top | ∆. Under this approximation only momenta near k = 0 are relevant and quadratic terms in momentum are neglected. Moreover, we work at vanishing chemical potential, µ = 0, and therefore ξ 0 and At k = 0, the lowest energy band and its particle-hole partner are eigenstates of σ x in 1D, and σ z in 2D. Due to particle-hole symmetry the two bands have opposite polarization σ j (see Fig. 2). At the transition point the 15) and (17). The panels share the legend. two eigenstates cross and there is a change in the polarization of the crossing bands. This change in polarization is detected by the imaginary part of the response function χ d (ω), defined in Eq. (11). In the zero-temperature limit, the sum over the Matsubara frequencies ν in Eq. (10) may be replaced by an integral. The susceptibility χ d (ω) for both 1D and 2D models follows after performing the trace over particlehole and spin degrees of freedom, with E − (k) = (α 2 k 2 + ∆ 2 top ) 1/2 . The second term in χ d (ω) contributes to the imaginary part of the susceptibility only at higher frequency, equal or larger than the separation between the lowest and highest bands ∼ 2(E Z + ∆). Therefore it can be neglected when probing the system at smaller frequencies, ω ∼ 2|∆ top |. The first term in Eq. (13) gives the low-frequency contribution which is, as expected, proportional to ∆ top , and, furthermore, is changing sign at the topological transition. We note that, in contrast, the static susceptibility χ d (ω = 0) ∝ dω χ d (ω )/ω , is an unreliable marker of the topological transition, since it includes the information from the high-frequency transitions. In 1D, the low-frequency dynamical susceptibility for transitions between the low-energy bands follows after performing the integral over the Matsubara frequency ν and the analytical continuation iω → ω + iδ/ : whose imaginary part is where Θ is the Heaviside step function. In 2D, an additional trivial angular integration in the 2D plane is required, which yields Therefore the imaginary susceptibility in 2D reads As expected, χ d (ω) is odd in frequency and, due to vanishing density of states in the spectral gap, is zero below the gap. At a threshold 2|∆ top |, which is the energy gap between the lowest bands ±E − (k), the 1D response develops a square-root dependence on the frequency, while in 2D, the susceptibility displays a linear dependence. A comparison between the analytical predictions and numerical integration of χ d either using Eq. (9) or (10) is presented in Fig. 3. The response functions change sign at the topological transition, an observation that can be validated experimentally. 
Moreover, in experiments it is also possible to keep the frequency fixed, but to vary the Zeeman field to bring the system across the topological transition. Near the transition at ∆ top = 0 the response is linear in ∆ top as indicated by Eqs. (15) and (17). At larger Zeeman field, above the fixed electric-field frequency ω, the transitions between the bands are energetically unfavored, leading to a decay of the signal. The dependence of χ d (ω) on ∆ top , when increasing E Z , is displayed in Fig. 4, showing the expected sign change at the transition. We also note that with increasing frequency, additional transitions to higher energy bands are also possible, but the low-frequency response close to the topological transition is insensitive to them. V. PERTURBATION THEORY IN THE SPIN-ORBIT COUPLING The results of the previous section are extended here to finite chemical potential µ using a perturbation theory in the spin-orbit coupling strength near the topological transition. This allows an intuitive understanding of the processes modeled in the Rashba spectroscopy response function. The perturbation theory is justified close to the topological transition at k = 0 where the spin-orbit coupling term, which is linear in momentum, is dominated by the other terms in the Hamiltonian. The kinetic term ∼ k 2 /2m remains neglected, since it is quadratic in momentum. To be more specific, in this section, we focus on the 1D model, described by a simplified Hamiltonian with the Rashba spin-orbit term H so = −αk x τ y σ y as a perturbation on H 0 . Our goal is to determine χ 1 (ω) ≡ χ xR (ω), proving that it changes sign at the topological transition ∆ top = 0, with The response function (9) in the zero-temperature limit follows readily using the eigenstates of the Hamiltonian, determined within the perturbation theory. Let us perform a π/2-rotation around x axis in particlehole space only for notational simplicity. The Hamiltonian changes accordingly, H →H, with tilde denoting the effect of the unitary transformation. TheH 0 eigenstates are momentum independent and may be indexed as |τ σ with τ = ± and σ = ±. Since |τ σ are eigenstates of σ x , it follows immediately that correlations χ zR = χ yR = 0 and only the response along the magnetic field may be relevant. The four energy bands of either H 0 orH 0 are with normalized eigenstates |τ σ : and At the topological transition ∆ top = 0, the bands |+− and |−+ cross each other. Note that in the trivial phase ∆ top < 0, the "conduction" bands are |+± , while |−± are "valence" bands. Let us analyze the matrix elements in the susceptibility from Eq. (9) using the first-order perturbed eigenstates, linear in α, To first order, the only finite matrix elements ofH so are those between the valence and the conduction bands. The modulated Rashba termH so /α, which couples to the time-varying electric field, excites quasiparticles from the lower to the upper bands. Its matrix elements, are to lowest order independent on α, This leads in either topological phases to an increase in the spin polarization for the upper band. Relaxation processes are determined by the matrix elements of spins along the Zeeman field. To linear order in the spin-orbit coupling, they are given by: The second matrix element in Eq. (26) describes transitions between the highest and lowest energy bands (ε ++ and ε −− ). The corresponding transition frequency is on order of 2(E Z + ∆), which is much larger than the topological gap ∆ top , and it is therefore irrelevant for our analysis. 
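The displayed expressions for the unperturbed k = 0 energies referenced earlier in this section appear to have been lost in extraction. A hedged reconstruction, obtained by diagonalizing the k = 0 Hamiltonian with the kinetic term neglected (chemical potential, pairing, and Zeeman terms only) and consistent with the stated crossing of |+-> and |-+> at Delta_top = 0, is given below; the mixing angle theta is an assumption inferred from the later remark that sin(theta) vanishes at mu = 0.

```latex
% Hedged reconstruction of the k = 0 energies used in the perturbation theory;
% the expression for theta is an assumption, not a verbatim restoration of Eq. (23).
\begin{align*}
  \varepsilon_{\tau\sigma} &= \tau\sqrt{\Delta^2+\mu^2} + \sigma E_Z,
     \qquad \tau,\sigma = \pm, \\
  \varepsilon_{+-} &= -\Delta_{\mathrm{top}}, \qquad
  \varepsilon_{-+} = +\Delta_{\mathrm{top}},
     \qquad \Delta_{\mathrm{top}} = E_Z - \sqrt{\Delta^2+\mu^2}, \\
  \cos\theta &\stackrel{?}{=} \frac{\Delta}{\sqrt{\Delta^2+\mu^2}},
  \qquad
  \sin\theta \stackrel{?}{=} \frac{\mu}{\sqrt{\Delta^2+\mu^2}} .
\end{align*}
```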
Here we focus on the last matrix element in Eq. (26), which is relevant for transitions between the two quasiparticle bands closest to the Fermi energy since the associated transition frequency is on the order of ω ∼ 2|∆ top |. Crucially, the matrix element behaves as 1/∆ top , so it changes sign at the topological transition. This central result shows that the relaxation processes are dependent on whether the quasiparticles in the lowest conduction band are aligned to the magnetic field, as in the topological nontrivial phase (for ε −+ ), or antialigned, as in the trivial phase (for ε +− ). Note also that the spin-spin correlation functions, which model the conventional ESR experiments, would have in the present case a dependence on the absolute value of the spectral gap ∼ 1/∆ 2 top , and therefore cannot discriminate the topological phases. The intraband terms in the susceptibility are neglected since they are all real and do not contribute to the imaginary susceptibility χ 1 (ω). The transitions between highest and lowest energy bands are also neglected since they occur for higher frequencies than the ones comparable to the topological gap. Therefore χ 1 (ω) at low frequencies is determined only by the energy difference between the two quasiparticle bands closest to the Fermi energy. To lowest order in α, in a second-order perturbation theory, the energy difference reads The last term in Eq. (27) may also be neglected, since it barely shifts the transition frequency due to the large value of the Zeeman energy E Z |∆ top | (and it vanishes at µ = 0 or sin θ = 0). Then, the energy difference reads Using the Eqs. (25)(26)(27) in Eq. (9) yields the susceptibility Again, the overall dependence on the sign of ∆ top indicates that the susceptibility is a reliable marker for the topological transition. This result translates the fact that in the nontrivial phase excited quasi-electrons relax by emitting photons at frequencies comparable to the topological gap since they are aligned with the effective magnetic field, while in the trivial region, they absorb photons, since they are antialigned with it. Integrating over the momentum in Eq. (29) and using the definition for cos θ from Eq. (23) we obtain The approximation of Eq. (28) yields an alternative result for the susceptibility which reduces to the previous one at small α and ω ∼ 2|∆ top |. The susceptibility is odd in frequency, and changes sign with the topological gap ∆ top . A quick check shows that Eq. (31) recovers the µ = 0 case from Eq. (15), near the transition, with a frequency ω ∼ 2|∆ top |. Both analytical and numerical calculations show again that in 1D the susceptibility has a square-root dependence on frequency χ 1 (ω) ∼ |ω| near the topological transition [see Fig. 3(a)]. Although the complete perturbation theory of the 2D case is not performed here, a simple scaling analysis shows that χ 2 (ω) has in general a linear dependence on frequency, similar to the µ = 0 case displayed in Fig. 3(b). This behavior is understood by noticing that the matrix elements of the spin σ z and spin-orbit term are linear in k. Moreover, the susceptibility has poles at momenta k 0 ∼ |ω|, and therefore from Eq. (9) it follows that where the dimensional effects (d = 1 or 2) enter only from the integral measure. Then indeed, in 2D, χ 2 ∼ k 2 0 ∼ |ω|, as in the µ = 0 case. Finally, the perturbation theory also yields the spin polarization in the two low-energy bands near the topological transtion: which generalizes at finite µ the results in Ref. 39. 
Due to particle-hole symmetry, the eigenstate with opposite energy and momentum have also opposite polarization, and, since the polarization is even in k, The energy of the state | −+ (1) is below the Fermi energy in trivial region ∆ top < 0 and above in the nontrivial region ∆ top > 0. Therefore there is an inversion in band polarization at the transition, as seen in Fig. 2. The energy scale where the spin polarization first vanishes in the band sets a natural scale for the frequencies that one may use to probe the system. At larger momenta, the spin-orbit starts to dominate and reverts the polarization, such that at higher frequencies, the susceptibility may show no sign change. In the approximation that E Z ∆ top , we use Eqs. (28) and (32) to estimate that a reasonable frequency window to probe the system is |ω| 6|∆ top |. A few remarks are in order. As the spectral gap in the system increases, non-linear effects distort the band structure and the band minimum is no longer guaranteed at k = 0. The bending of the electronic bands lowers the energy of higher momentum states (of opposite spin polarization compared to the same-band k ∼ 0 states), thus diminishing or reversing again the spin polarization of the band in a frequency window characteristic for 2E − (k = 0). Therefore the detection method proposed here is expected generally to work whenever the minimum of the band is at the Γ point, with frequencies tuned near the resonance condition, or, in particular, if the system is close to transition (E Z , ∆ |∆ top |), with frequencies |ω| 6|∆ top |. VI. THE GENERAL RESPONSE FUNCTION In this section we present an analysis of the dynamical susceptibility valid for arbitrary driving frequency and choice of material parameters. The particular limits, discussed before, are recovered from the more general expression presented here. The response functions follows from Eq. (10). The full Matsubara Green's function is a 4 × 4 matrix that can be inverted analytically to give with the energies E ± (k) from Eq. (4). The susceptibility follows from Eq. (10) after performing the trace over spin and particle-hole degrees of freedom and integrating over the Matsubara frequency. The general result is quite lengthy, and it is relegated to Appendix A. Nevertheless, it is further simplified near the transition by keeping in mind that the energy E + (k) is always much larger than the spectral gap [set by E − (k)], namely E + (k) E − (k). Considering dynamics on the scale of twice the topological gap ∆ top , allows us to neglect terms from high-frequency transitions, corresponding to |ω| > 2E + and |ω| > E + + E − . In the 2D model there is an additional angular integral which, due to the rotation symmetry of the Hamiltonian, is trivial and yields 2π. Therefore in both the 1D and 2D models, the response function reduces to a simple form involving a single integral over momenta. After analytical continuation iω → ω + iδ/ , it reads The remaining integral over momentum is performed numerically, usually with δ/∆ = 0.004. The susceptibility recovers Eqs. (14) and (16), which were obtained in the approximation ξ → 0. Therefore it recovers near the topological transition the squareroot scaling with frequency for χ 1 (ω) and, the linear one, for χ 2 (ω). A density plot for the susceptibility is shown in Fig. 5 in the (µ, E Z ) parameter space, with the frequencies tuned at the resonance condition. 
As expected the dynamical susceptibility maps exactly the position of the topological phase transition ∆ top = 0 and changes sign across it. This confirms that the topological nontrivial phases could be identified by measuring χ d (ω). VII. CONCLUSION In this paper, we have studied theoretically two model systems of semiconductor-superconductor heterostructures which support Majorana bound states. We have proposed an all-electric spectroscopic method to discriminate the topological phases in such materials by exploiting the bulk spin inversion at the topological transition. Our proposal uses time-varying electric fields, which dynamically modulate the Rashba spin-orbit coupling strength of the semiconductor, and cause resonant transitions between the electronic bands. Relaxation processes are then measured by optical spectroscopy at microwave frequencies using, for example, techniques developed in electron spin resonance spectroscopy. The above protocol is modeled within linear response theory by a modified susceptibility. We have shown that its imaginary part, χ (ω), can be used to discriminate the topological phases, since spin relaxation processes depend on the sign of the topological gap. Such measurements may be used to detect the topological nontrivial phases without the need to access information about the localized Majorana modes hosted in them. After analytical continuation of ω, we have checked that the above expression gives the same results as those of Eq. (9), where the Brillouin zone is discretized, Hamiltonian eigenstates are obtained at each momentum, and all integrals are carried out numerically.
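As a concrete illustration of the numerical procedure just described (discretize momentum, diagonalize the 4x4 BdG Hamiltonian at each k, and evaluate a Lehmann-type sum for the susceptibility with a small broadening delta), here is a minimal Python sketch for the 1D model. The Nambu convention, the overall sign and prefactor of the Kubo formula, and the parameter values are assumptions made for illustration; the sketch is meant only to show how the sign of Im chi can be compared on the two sides of the topological transition, not to reproduce the paper's figures.

```python
import numpy as np

# Pauli matrices; 4x4 operators are built as kron(tau, sigma), i.e. tau_i sigma_j.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_1d(k, EZ, alpha, mu, Delta, m_eff=0.5):
    """1D BdG Hamiltonian (hbar = 1, energies in units of Delta); convention assumed:
    xi*tau_z + alpha*k*tau_z*sigma_y + EZ*sigma_x + Delta*tau_x."""
    xi = k**2 / (2.0 * m_eff) - mu
    return (xi * np.kron(sz, s0) + alpha * k * np.kron(sz, sy)
            + EZ * np.kron(s0, sx) + Delta * np.kron(sx, s0))

def im_chi(omega, EZ, alpha, mu, Delta, kmax=2.0, nk=2001, delta=0.004):
    """Im chi_xR(omega) at T = 0 from a Lehmann-type sum on a momentum grid.
    Overall sign and prefactor are conventions; only the relative sign between
    parameter sets (trivial vs nontrivial) is meaningful in this sketch."""
    ks = np.linspace(-kmax, kmax, nk)
    Sx = np.kron(s0, sx)              # spin operator along the Zeeman field
    total = 0.0
    for k in ks:
        E, U = np.linalg.eigh(bdg_1d(k, EZ, alpha, mu, Delta))
        dH = k * np.kron(sz, sy)      # dH/d(alpha): the modulated Rashba vertex
        occ = (E < 0).astype(float)   # zero-temperature Fermi factors
        for m in range(4):
            for n in range(4):
                weight = occ[m] - occ[n]
                if weight == 0.0:
                    continue
                amp = (weight
                       * (U[:, m].conj() @ Sx @ U[:, n])
                       * (U[:, n].conj() @ dH @ U[:, m]))
                total += np.imag(amp / (omega + E[m] - E[n] + 1j * delta))
    return total * (ks[1] - ks[0])

if __name__ == "__main__":
    Delta, alpha, mu = 1.0, 0.8, 0.0
    for EZ in (0.8, 1.2):             # below / above the critical field sqrt(Delta^2 + mu^2)
        d_top = EZ - np.sqrt(Delta**2 + mu**2)
        w = 2.2 * abs(d_top)          # probe just above the 2|Delta_top| threshold
        print(f"EZ={EZ:.2f}  Delta_top={d_top:+.2f}  Im chi({w:.2f}) = {im_chi(w, EZ, alpha, mu, Delta):+.5f}")
```

Scanning omega and E_Z on a grid with a routine of this kind gives the sort of parameter-space map described for Fig. 5, at the cost of a slow pure-Python loop.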
2020-02-17T02:00:25.439Z
2020-02-14T00:00:00.000
{ "year": 2020, "sha1": "2d4e032c24ca1d5b6ef06f5c34bcb3e3a2c3c695", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2002.06092", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2d4e032c24ca1d5b6ef06f5c34bcb3e3a2c3c695", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55776353
pes2o/s2orc
v3-fos-license
Comparative Insilico Analysis of Ascorbate Peroxidase Protein Sequences Aerobic life has developed by exploiting the abundance of environmental oxygen (O2) in the atmosphere to oxidize organic compounds, thus obtaining chemical energy in a highly efficient manner. Paradoxically, the univalent reduction of molecular oxygen in metabolic reactions produces a plethora of partially reduced intermediates, commonly known as reactive oxygen species (ROS). If their levels are not tightly controlled, these chemical species can react with the majority of biological molecules and cause serious cellular damages [1-2]. ROS are byproducts of aerobic metabolism and are produced in excess within plant cells under abiotic and biotic stresses [3-4]. However, ROS are also important in many physiological processes and their balance is of the utmost importance. As a result, a complex system, comprising enzymatic and nonenzymatic mechanisms, maintains the delicate balance between oxidant and antioxidant compounds in the cell [5]. Ascorbate peroxidase (APX) is known play the most essential role in scavenging ROS and protecting cells against these toxic effects in higher plants, algae, euglena and other organisms [6,9]. In plants, ascorbate peroxidases (EC, 1.11.1.11) catalyze the conversion of H2O2 to H2O2 using ascorbate as the specific electron donor in this enzymatic reaction [9]. APX is the largest class of the nonanimal peroxidase superfamily, and its members are found in all living organisms except Diplomonads, Parabasalids, Apicomplexa, Amoebozoa, and animals [10]. Increased activity of different APX isoforms in response to environmental stresses such as salinity and drought has been reported in different plant species, indicating possible functional specialization of the respective isoenzymes in eliminating H2O2 in cells [11-12]. APX in higher plants are encoded by small multigene families and different isoforms are classified according to their subcellular localization. Soluble isoforms are found in cytosol and chloroplast stroma, while membrane-bound isoforms are found in peroxisomes and chloroplast thylakoids. The final subcellular localization of the isozyme is determined by the presence of organelle specific targeting peptides and transmembrane domains that are found in the protein N-terminal and C-terminal [13], APXs purified from different plant species and tissues, such as tea leaves, maize (Zea mays) seedlings and leaves, and potato (Solanum tuberosum) tubers, have been isolated in both monomeric and dimeric forms [14]. Expression of this gene has been reported to be enhanced in plants by drought and salt [15-16]. Introduction Aerobic life has developed by exploiting the abundance of environmental oxygen (O 2 ) in the atmosphere to oxidize organic compounds, thus obtaining chemical energy in a highly efficient manner. Paradoxically, the univalent reduction of molecular oxygen in metabolic reactions produces a plethora of partially reduced intermediates, commonly known as reactive oxygen species (ROS). If their levels are not tightly controlled, these chemical species can react with the majority of biological molecules and cause serious cellular damages [1][2]. ROS are byproducts of aerobic metabolism and are produced in excess within plant cells under abiotic and biotic stresses [3][4]. However, ROS are also important in many physiological processes and their balance is of the utmost importance. 
As a result, a complex system, comprising enzymatic and nonenzymatic mechanisms, maintains the delicate balance between oxidant and antioxidant compounds in the cell [5]. Ascorbate peroxidase (APX) is known to play an essential role in scavenging ROS and protecting cells against these toxic effects in higher plants, algae, Euglena and other organisms [6,9]. In plants, ascorbate peroxidases (EC 1.11.1.11) catalyze the reduction of H2O2 to H2O using ascorbate as the specific electron donor in this enzymatic reaction [9]. APX is the largest class of the non-animal peroxidase superfamily, and its members are found in all living organisms except Diplomonads, Parabasalids, Apicomplexa, Amoebozoa, and animals [10]. Increased activity of different APX isoforms in response to environmental stresses such as salinity and drought has been reported in different plant species, indicating possible functional specialization of the respective isoenzymes in eliminating H2O2 in cells [11][12]. APX in higher plants is encoded by small multigene families, and the different isoforms are classified according to their subcellular localization. Soluble isoforms are found in the cytosol and chloroplast stroma, while membrane-bound isoforms are found in peroxisomes and chloroplast thylakoids. The final subcellular localization of an isozyme is determined by organelle-specific targeting peptides and transmembrane domains located at the protein N-terminus and C-terminus [13]. APXs purified from different plant species and tissues, such as tea leaves, maize (Zea mays) seedlings and leaves, and potato (Solanum tuberosum) tubers, have been isolated in both monomeric and dimeric forms [14]. Expression of this gene has been reported to be enhanced in plants by drought and salt [15][16]. This paper reports the in silico characterization of amino acid sequences of heme-binding peroxidases from different plants through homology search, multiple sequence alignment, phylogenetic tree construction, and motif analysis using various bioinformatics tools, proposing new strategies for plant and crop improvement to combat stressful conditions.
*Corresponding author: Yogesh Kumar Negi, SBS P.G. Institute of Biomedical Science and Research, Balawala-248161 Dehradun, India; E-mail: yknegi@rediffmail.com, plantstress@gmail.com
Retrieval of ascorbate peroxidase protein sequences
For the identification of APX in various plants, a homology search for APX proteins was performed with the BLAST tool of NCBI (http://www.ncbi.nlm.nih.gov/BLAST/) using the Blastp and tblastn algorithms, and the amino acid sequences of the different source organisms available in GenBank were downloaded from NCBI (http://www.ncbi.nlm.nih.gov/). Only reference sequences were retrieved; non-reference sequences were removed.
Multiple sequence alignment
All APX sequences were aligned using ClustalW [18] to determine the similarity among sequences of the same family.
Phylogenetic analysis
Phylogenetic analysis of the sequences was performed with the Molecular Evolutionary Genetics Analysis (MEGA) software (version 4.0.02) [19] using the UPGMA method. Each node was tested by bootstrap analysis with 1,000 replications and a random seed of 64,238 to ascertain the reliability of nodes; bootstrap values are indicated as percentages at each node. Branch lengths were drawn to the scale indicated.
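The tree construction above was done in MEGA 4 with the UPGMA method and bootstrap resampling. As a rough illustration of the same clustering principle, the sketch below builds a UPGMA (average-linkage) tree from a simple p-distance matrix computed on an aligned FASTA file with SciPy; the input file name is a placeholder, and MEGA's distance model and bootstrap procedure are not reproduced here.

```python
from scipy.cluster.hierarchy import linkage, fcluster

def read_fasta(path):
    """Minimal FASTA reader: returns (names, sequences) from an aligned file."""
    names, seqs, cur = [], [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if cur:
                    seqs.append("".join(cur)); cur = []
                names.append(line[1:].split()[0])
            elif line:
                cur.append(line)
    if cur:
        seqs.append("".join(cur))
    return names, seqs

def p_distance(a, b):
    """Proportion of differing sites, ignoring positions gapped in either sequence."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs) if pairs else 1.0

# 'apx_aligned.fasta' is a placeholder for the ClustalW alignment of the 64 APX sequences.
names, seqs = read_fasta("apx_aligned.fasta")
n = len(seqs)
# Condensed (upper-triangle) distance vector in the order expected by linkage().
dvec = [p_distance(seqs[i], seqs[j]) for i in range(n) for j in range(i + 1, n)]
tree = linkage(dvec, method="average")              # average linkage on distances = UPGMA
groups = fcluster(tree, t=4, criterion="maxclust")  # cut the tree into four clusters
for name, group in sorted(zip(names, groups), key=lambda x: x[1]):
    print(f"cluster {group}: {name}")
```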
Motif analysis
Analysis of conserved motifs was performed with the online MEME (Multiple Expectation Maximization for Motif Elicitation) tool, version 3.5.7 [20], using a minimum and maximum motif width of 20 and 50 residues, respectively, and a maximum of 10 motifs, keeping the remaining parameters at their defaults.
Multiple sequence alignment
A total of 64 full-length amino acid sequences of the ascorbate peroxidase (APX) enzyme from different plants were considered for comparative in silico analysis (Table 1). To investigate APX sequence features among the various plants, we performed multiple sequence alignment of the 64 APX amino acid sequences. Conserved regions of all proteins are shown in Figure 1 (supplementary). The alignment highlighted the conservation of amino acid residues among different members of the APX families in these species. This conservation is, however, accompanied by differences sufficient to support variation that is subsequently reflected at the structural and functional levels.
Phylogenetic analysis
To examine the phylogenetic relationship among APX proteins from different plants, a rooted tree was constructed from alignments of their amino acid sequences (Figure 2). The phylogenetic analysis of APX across all plant species clearly reveals four clusters (clusters A, B, C, and D).
Motif analysis
An extensive search for motifs and their positions was performed with the MEME software, which identified several conserved motifs in the APX protein sequences (Table 2 and Figure 3). The motif analysis points to the same fundamental constraints on the evolution of this gene family: motifs that contain the signature sequences are either well conserved or carry substitutions that do not alter their activity, whereas motifs without a direct impact on the active site contain altered residues and are clearly the outcome of accumulated mutations or rearrangements. A total of ten motifs, labelled 1 to 10, were observed across all 64 sequences when subjected to MEME [21][22][23]. Among all plant heme peroxidases, motif 1 was the most commonly observed; it is functionally related to the detoxification of H2O2 and other reactive oxygen species in both the cytosolic and chloroplast compartments and carries heme-binding peroxidase properties. Motif 4, which has a similar function to motif 1, was likewise present in all APX isoforms. Motif 2 contains a casein kinase II phosphorylation site and the signature of chloroplastic and cytosolic ascorbate peroxidases [24]. Besides these, motifs 3, 5, 7, and 9 are also frequently present in APX isoforms and are functionally related to chloroplastic, cytosolic, and non-animal peroxidases [25]. Motif 6 is present in all APX isoforms (Table 3).
Conclusion
In silico analysis of ascorbate peroxidase protein sequences and their comparison with other APXs revealed the sequence-based similarity among different APX isoforms and their clustering into distinct groups according to the source plant and the nature of the enzymatic activity within the plant antioxidant defense mechanism. In silico domain analysis confirms the existence of different groups of ascorbate peroxidase based on the presence of unique domains, with a heme-binding domain found in all APX isoforms.
The presence or absence of specific domains relates directly to the structural and functional organization of the different ascorbate peroxidase isoforms. Group-specific amino acid sequence similarity could be used to design cloning strategies for the putative APX genes by PCR amplification with degenerate primers, and may prove useful for developing transgenic crop plants tolerant to abiotic stresses.
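As a small companion to the degenerate-primer strategy suggested in the conclusion above, the sketch below back-translates a conserved peptide block into a degenerate oligonucleotide using standard IUPAC codes. The peptide shown is a hypothetical example (not an APX signature), and the codon table is deliberately partial: residues encoded by two codon families (Leu, Arg, Ser) are omitted from this sketch.

```python
# Partial IUPAC degenerate-codon table (assumption: only amino acids with a single
# unambiguous degenerate codon are included; Leu/Arg/Ser are omitted).
DEGENERATE_CODON = {
    "A": "GCN", "C": "TGY", "D": "GAY", "E": "GAR", "F": "TTY", "G": "GGN",
    "H": "CAY", "I": "ATH", "K": "AAR", "M": "ATG", "N": "AAY", "P": "CCN",
    "Q": "CAR", "T": "ACN", "V": "GTN", "W": "TGG", "Y": "TAY",
}
IUPAC_SIZE = {"N": 4, "R": 2, "Y": 2, "H": 3}  # bases encoded by each ambiguity code

def degenerate_primer(peptide):
    """Back-translate a conserved peptide block into a degenerate forward primer."""
    try:
        return "".join(DEGENERATE_CODON[aa] for aa in peptide.upper())
    except KeyError as missing:
        raise ValueError(f"no single degenerate codon for residue {missing} in this sketch")

peptide = "GHTIGAAH"                     # hypothetical conserved block, for illustration only
primer = degenerate_primer(peptide)
fold = 1
for base in primer:
    fold *= IUPAC_SIZE.get(base, 1)      # how many distinct oligos the primer encodes
print(peptide, "->", primer, f"(degeneracy {fold})")
```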
2019-03-30T13:02:48.182Z
2011-10-19T00:00:00.000
{ "year": 2011, "sha1": "5e3743ac9c8952057e4634fee306fc546dfb26c2", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/comparative-insilico-analysis-of-ascorbate-peroxidase-protein-sequences-from-different-plant-species-2155-9538.1000103.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "42697cf881155ad35450ae3f8ca9b102bbeba540", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270281521
pes2o/s2orc
v3-fos-license
Mechanistic prediction and validation of Brevilin A Therapeutic effects in Lung Cancer Background Traditional Chinese medicine (TCM) has been found widespread application in neoplasm treatment, yielding promising therapeutic candidates. Previous studies have revealed the anti-cancer properties of Brevilin A, a naturally occurring sesquiterpene lactone derived from Centipeda minima (L.) A.Br. (C. minima), a TCM herb, specifically against lung cancer. However, the underlying mechanisms of its effects remain elusive. This study employs network pharmacology and experimental analyses to unravel the molecular mechanisms of Brevilin A in lung cancer. Methods The Batman-TCM, Swiss Target Prediction, Pharmmapper, SuperPred, and BindingDB databases were screened to identify Brevilin A targets. Lung cancer-related targets were sourced from GEO, Genecards, OMIM, TTD, and Drugbank databases. Utilizing Cytoscape software, a protein-protein interaction (PPI) network was established. Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), Gene set enrichment analysis (GSEA), and gene-pathway correlation analysis were conducted using R software. To validate network pharmacology results, molecular docking, molecular dynamics simulations, and in vitro experiments were performed. Results We identified 599 Brevilin A-associated targets and 3864 lung cancer-related targets, with 155 overlapping genes considered as candidate targets for Brevilin A against lung cancer. The PPI network highlighted STAT3, TNF, HIF1A, PTEN, ESR1, and MTOR as potential therapeutic targets. GO and KEGG analyses revealed 2893 enriched GO terms and 157 enriched KEGG pathways, including the PI3K-Akt signaling pathway, FoxO signaling pathway, and HIF-1 signaling pathway. GSEA demonstrated a close association between hub genes and lung cancer. Gene-pathway correlation analysis indicated significant associations between hub genes and the cellular response to hypoxia pathway. Molecular docking and dynamics simulations confirmed Brevilin A’s interaction with PTEN and HIF1A, respectively. In vitro experiments demonstrated Brevilin A-induced dose- and time-dependent cell death in A549 cells. Notably, Brevilin A treatment significantly reduced HIF-1α mRNA expression while increasing PTEN mRNA levels. Conclusions This study demonstrates that Brevilin A exerts anti-cancer effects in treating lung cancer through a multi-target and multi-pathway manner, with the HIF pathway potentially being involved. These results lay a theoretical foundation for the prospective clinical application of Brevilin A. Introduction Lung cancer is a common malignancy with high incidence and mortality worldwide [1].Non-small cell lung cancer (NSCLC) accounts for approximately 80-85% of all lung cancer cases [2].Despite considerable advancements in the prevention, early diagnosis, and treatment of NSCLC, the clinical outcomes for advanced NSCLC remain suboptimal [3,4].Therefore, there exists an imperative requirement to innovate novel therapeutic strategies for the effective treatment of NSCLC [5]. Traditional Chinese medicine (TCM) has garnered attention as a promising avenue in cancer treatment, attributed to its commendable therapeutic efficacy and minimal side effects [6].TCM products have demonstrated anticancer effects through diverse pathways and mechanisms [7].Brevilin A, a sesquiterpene lactone derived from C. 
minima, exhibits a spectrum of pharmacological activities encompassing anti-cancer [8,9], anti-oxidative [10], anti-inflammatory [11,12], and immune-enhancing effects [13].Previous studies have shown the potential of Brevilin A in combating various human malignancies, including lung cancer [14], nasopharyngeal carcinoma [15], multiple myeloma [16], gastric cancer [17], breast cancer [18], and prostate cancer [19].A published study delineated that Brevilin A induces apoptosis in lung cancer cells by promoting reactive oxygen species (ROS) generation and inhibiting STAT3 activation [20].However, the precise molecular mechanisms underlying Brevilin A's action against lung cancer remain elusive.Moreover, Brevilin A exhibits a favorable pharmacokinetic profile and remarkable bioavailability, with no discernible acute toxicity observed in mice administered a substantial dosage of Brevilin A [16].This implies the safety of Brevilin A, thereby encouraging further exploration of its therapeutic potential in the context of lung cancer. Network pharmacology is a robust bioinformatics tool for comprehensively identifying candidate targets, functions, and mechanisms of TCM in disease treatment [21].In this study, we employed network pharmacology to predict the potential mechanisms of Brevilin A in lung cancer.Molecular docking and in vitro experiments were conducted to validate the obtained results.The workflow of this study is elucidated in Fig. 1. Retrieval of potential targets of Brevilin A The identification of targets associated with the compound denoted as "Brevilin A" was accomplished through the application of the Bioinformatics Analysis Tool for the Molecular Mechanism of Traditional Chinese Medicine database, herein referred to as Batman-TCM (http:// bionet.ncpsb.org.cn/batman-tcm/index.php), whereby targets achieving a score exceeding 5 were considered.The 3D structures and Isomeric SMILES of the aforementioned compound, Brevilin A, were procured from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/).Utilizing the structural information of Brevilin A, prospective targets were elucidated through computational analyses conducted on multiple platforms, namely Swiss Target Prediction (http://swisstargetprediction.ch/), Pharmmapper (http://lilab-ecust.cn/pharmmapper/ index.html),SuperPred (https://prediction.charite.de/),and BindingDB (https://www.bindingdb.org/bind/index.jsp) databases.The resultant target predictions underwent standardization procedures utilizing the UniProt database. 
Screening for lung cancer-related targets The keyword "Lung cancer" guided our exploration in the Gene Expression Database (GEO, https://www.ncbi.nlm.nih.gov/geo/), resulting in the retrieval of the GSE136043 dataset which contains mRNA lung tissue microarray data from five patients with lung cancer and five healthy volunteers.Gene differential expressions (DEGs) in the GSE136043 dataset were then analyzed using the R package limma, wherein genes exhibiting log2 (fold change) > 1 or < -1, coupled with a p-value < 0.05, were designated as differentially expressed.Concurrently, targets associated with lung cancer were ascertained through a comprehensive inquiry encompassing the Omim (https://www.omim.org/),GeneCards (https:// www.genecards.org/),TTD (https://db.idrblab.net/ttd/),and Drugbank (https://go.drugbank.com/)databases.The outcomes from these diverse databases were amalgamated, and duplicates were expunged, and the resulting targets underwent standardization employing the UniProt database.The identification of candidate targets linked to both the pharmaceutical agent denoted as "Brevilin A" and lung cancer was achieved through exploration of the Xiantao Academic website (https://www.xiantaozi.com). Construction of protein-protein interaction (PPI) network To ascertain information regarding interactions among proteins, data pertaining to candidate target genes were submitted to the esteemed STRING database (https:// string-db.org/)[22].The designated species for analysis was "Homo sapiens, " and a minimum interaction score threshold of 0.400, denoting medium confidence, was applied.The outcomes were obtained in TSV format and imported into Cytoscape 3.8.0 for the purpose of visualizing the PPI network interactions.The utilization of the exceptional CytoHubba Cytoscape plugin facilitated the identification of pivotal genes within the PPI network. The Maximum Correlation Clique (MCC) for each node in the PPI network was calculated using the CytoHubba plugin, wherein larger and darker nodes indicated higherscoring genes. GO and KEGG pathway enrichment analysis The "clusterProfiler" package in R 4.3.1 software was employed to perform Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses, utilizing a P-value<0.05and Q-value<1 as the criteria for selection.The GO enrichment analysis encompassed cellular components (CC), biological processes (BP), and molecular functions (MF).The eight most notable terms for each category have been delineated, and bubble diagrams were generated using the R 4.3.1 software.The KEGG enrichment analysis aimed to elucidate the potential mechanisms by which Brevilin A engages with lung cancer.Subsequently, bubble charts were generated to visually represent the top 20 significant pathways, employing the R 4.3.1 software. GSEA enrichment analysis To ascertain the association between key targets and potential mechanisms in lung cancer, Gene Set Enrichment Analysis (GSEA) was performed on the GSE136043 dataset using the R package "clusterProfiler, " with a P-value < 0.05 employed as the filtering criterion. Component-target molecular docking and molecular dynamics simulation Molecular docking and molecular dynamics simulation represent computational techniques frequently utilized for the preliminary investigation of mechanisms and drug discovery.Their efficacy lies in their capacity to predict potential binding orientations and affinities of proteinligand complexes [23,24]. 
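Before the PPI construction described above comes the target screening: limma-style differential expression filtering (|log2 fold change| > 1, p < 0.05) and the intersection of the compound-target and disease-target lists. A minimal pandas sketch of those two steps is given below; the file and column names are placeholders, and the study itself performed the DEG analysis with the R package limma and the listed web databases.

```python
import pandas as pd

# Placeholder inputs: a limma-style results table and two gene lists (names assumed).
deg_table = pd.read_csv("GSE136043_limma_results.csv")        # columns: gene, logFC, P.Value
brevilin_targets = set(pd.read_csv("brevilin_a_targets.csv")["gene"])
lung_cancer_targets = set(pd.read_csv("lung_cancer_targets.csv")["gene"])

# Differentially expressed genes: |log2 fold change| > 1 and p < 0.05.
degs = deg_table[(deg_table["logFC"].abs() > 1) & (deg_table["P.Value"] < 0.05)]
print(f"{len(degs)} DEGs ({(degs['logFC'] > 0).sum()} up, {(degs['logFC'] < 0).sum()} down)")

# Candidate targets of Brevilin A against lung cancer: the overlap of the two lists.
candidates = sorted(brevilin_targets & lung_cancer_targets)
print(f"{len(candidates)} overlapping candidate targets")
pd.Series(candidates, name="gene").to_csv("candidate_targets.csv", index=False)
```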
Molecular docking In the preparatory phase, protein crystal structures were initially acquired from the Protein Data Bank (PDB, https://www.rcsb.org/(accessed on 12 September 2023)).Subsequently, homology modeling was employed for the reconstruction of missing residue structures, utilizing a previously established template with reference to the SWISS-MODEL website [25].The molecular structure of Brevilin A was obtained from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/(accessed on 12 September 2023)).Subsequent to acquisition, the molecular structure underwent geometry optimization employing the B3LYP approach and the 6-311 + + G (d, p) basis set, utilizing Gaussian 09 W and GaussView 5.0 software.The standard restrained electrostatic potential (RESP) of Brevilin A was calculated and applied using the Multiwfn program [26].Molecular docking was performed on the SwissDock website, employing default settings [27].The SwissDock website generated output clusters from each docking run.These clusters were then prioritized based on the FullFitness (FF) scoring function, a specific algorithm integrated into SwissDock.Subsequently, the individual conformers within each cluster were ranked by their FF scores, enabling us to select the conformer with the most favorable FF score for further assessment.The resultant docking sites between the ligand and the protein were visualized using PyMOL and Discovery Studio 2019 software [28]. Molecular dynamics simulation The optimal conformations derived from molecular docking underwent comprehensive evaluation of binding stability through molecular dynamics simulation, utilizing Gromacs 2020.06 software [29].The simulations employed the AMBER99SB-ILDN/GAFF force field, and the initial systems were established in a cubic box featuring a 1.0 nm layer, populated with the TIP3P water model.Energy minimizations were performed using the steepest descent algorithm.Subsequently, the systems were equilibrated with the canonical (NVT) and isothermal-isobaric (NPT) ensembles for 100 ps prior to the commencement of the molecular dynamics simulation.The equilibrium system was configured to maintain a temperature of 310 K and a standard pressure of 1.0 bar.The ensuing molecular dynamics simulations spanned a duration of 50 ns to evaluate the stability of the complex.Trajectory files were employed to calculate the root mean square deviation (RMSD), root mean square fluctuation (RMSF), Radius of gyration (Rg) value, and solvent accessible surface area (SASA).These parameters were selected for their capacity to offer insights into the structural states of the complex.To ascertain the binding free energies (BFE) of the complex, the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) approach was applied.The BFE in an aqueous solvent (ΔGbind) is typically expressed as the sum of three components: (1) ΔE MM , signifying the change in gas-phase molecular mechanics energy; (2) ΔG PB , indicating the change in polar solvation energy; and (3) ΔG SA , denoting the change in non-polar solvation energy.Additionally, the alteration in conformational entropy (-TΔS) was estimated using the interaction entropy (IE) method [30].These calculations were performed using trajectory files at 1 ns intervals for the final 20 ns, during which the RMSD remained stable. 
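Docked poses from different SwissDock clusters, and later the frames of the molecular dynamics trajectory, are routinely compared through the root mean square deviation (RMSD) of atomic positions after optimal superposition. The NumPy sketch below illustrates that calculation with the Kabsch rotation; it is not the Gromacs or Discovery Studio implementation, and the coordinates are random placeholders standing in for two poses of the same ligand.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations (N x 3 arrays, same atom order) after
    optimal superposition with the Kabsch algorithm."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    A = P.T @ Q                             # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(A)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_rot = P @ R.T                         # apply the optimal rotation
    return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

# Placeholder coordinates: pose_b is a rotated, translated, lightly perturbed copy of pose_a.
rng = np.random.default_rng(0)
pose_a = rng.normal(size=(30, 3))
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
pose_b = pose_a @ rot.T + rng.normal(scale=0.05, size=(30, 3)) + 2.0
print(f"RMSD after superposition: {kabsch_rmsd(pose_a, pose_b):.3f} (input length units)")
```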
Gene-pathway correlation analysis RNA-sequencing expression profiles (level 3) and pertinent clinical data for lung cancer were acquired from the TCGA dataset (https://portal.gdc.com).Analysis was conducted utilizing the GSVA package in R software, with the parameter method='ssgsea' being chosen.The examination of the relationship between genes and pathway scores was carried out using Spearman correlation.All analytical procedures and R packages were implemented using R version 4.3.1.A p-value less than 0.05 was considered statistically significant. Cell culture A549 cells were procured from the American Type Culture Collection (ATCC, Manassas, VA, USA) and subsequently cultivated in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum (FBS).All aforementioned reagents were acquired from Gibco (CA, USA).The cells were maintained in a humidified incubator with 5% CO2 at a temperature of 37℃. Cell viability assay A549 cells were cultured in 96-well plates with a density of 4000 cells per well.After 24 h of incubation, the cells were exposed to Brevilin A at specified concentrations for 24 and 48 h.Subsequently, the cells were treated with CCK-8 solution for an additional hour at 37℃, and the absorbance was quantified at 450 nm using a microplate reader (ThermoFisher, Waltham, MA). Quantitative RT-PCR (qRT-PCR) RNA was isolated from cells using the NucleoSpin RNA isolation Kit (Macherey-Nagel, Düren, Germany) and TRIzol™ reagent, respectively.Subsequently, reverse transcription-PCR was conducted using the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific, Waltham, USA) following the manufacturer's instructions.Quantitative PCR (qPCR) analysis was performed on the ABI 7500 Fast Real-time PCR System using the Taq pro Universal SYBR qPCR Master Mix (Vazyme Biotech, Nanjing, China).Relative gene expression was determined using the ΔΔCt method, and the primer sequences are available upon request. Statistical analysis Statistical analyses were conducted using GraphPad Prism 8.0.The data presented in this study are derived from a minimum of three independent experiments and are expressed as the mean ± standard error of the mean (SEM).To assess differences, unpaired t-tests and oneway analysis of variance (ANOVA) were employed.P-values less than 0.05 were deemed statistically significant. PPI network analyses A total of 2554 DEGs from GSE136043 dataset, comprising 1214 upregulated and 1340 downregulated genes, were identified and visually represented using a volcano plot (Fig. 2A) and a heatmap of the top 10 up-and down-regulated genes expression(Fig.2B).Subsequently, we gathered 599 Brevilin A-related targets and 3864 lung cancer-related targets, resulting in 155 overlapping genes selected as Brevilin A candidate targets against lung cancer (Fig. 2C).A PPI network of the 155 overlapping targets was constructed, consisting of 151 nodes and 2134 edges (Fig. 3A, B).Nodes with higher degrees were considered more pivotal in the network.The top 30 genes, exhibiting the highest degree of connectivity, are presented in Table 1.Identified through MCC scores, STAT3, TNF, HIF1A, PTEN, ESR1, and MTOR were identified as potential hub genes (Fig. 3C). GO and KEGG function enrichment analysis We conducted GO and KEGG function enrichment analyses on the 155 overlapping targets, resulting in the identification of 2893 GO terms and 157 KEGG pathways.The top 8 significant GO terms from each category are depicted in Fig. 
4A.In the CC category, enrichments were observed in transferase complexes facilitating the transfer of phosphorus-containing groups, secretory granule lumen, and cytoplasmic vesicle lumen.In the BP category, enrichments included epithelial cell proliferation, positive regulation of kinase activity, and gland development.The MF category exhibited enrichments in DNA-binding transcription factor binding, RNA polymerase II-specific DNA-binding transcription factor binding, and protein tyrosine kinase activity (Fig. 4A).The top 20 significant pathways, such as the PI3K-Akt signaling pathway, FoxO signaling pathway, and HIF-1 signaling pathway, are presented in Fig. 4B; Table 2.A chord diagram was employed to visually depict the relationship between enriched KEGG pathways and genes (Fig. 5A), while Fig. 5B illustrates the distribution of key targets in the HIF-1 signaling pathway. GSEA enrichment analysis To further elucidate the pathway analysis of DEGs, GSEA analysis was performed on both low and high expression of hub genes.The GSEA results are shown in Fig. 6, revealing that signaling pathways associated with the high expression phenotype of STAT3 encompass phagosome, protein processing in the endoplasmic reticulum, and viral carcinogenesis.Conversely, pathways linked to the low expression of STAT3 include neuroactive ligand − receptor interaction and olfactory transduction (Fig. 6A).For TNF, the high expression phenotype is correlated with pathways such as phagosome, protein processing in the endoplasmic reticulum, and lysosome, while the low expression is associated with neuroactive ligand − receptor interaction and olfactory transduction (Fig. 6B).HIF1A's high expression phenotype is linked to Epstein − Barr virus infection, lysosome, phagosome, and protein processing in the endoplasmic reticulum, while its low expression is tied to olfactory transduction (Fig. 6C).The pathways associated with high PTEN expression include focal adhesion, neuroactive ligand − receptor interaction, olfactory transduction, and the rap1 signaling pathway, while low PTEN expression is connected to the biosynthesis of amino acids (Fig. 6D).ESR1's high expression is associated with neuroactive ligand − receptor interaction, olfactory transduction, and the rap1 signaling pathway, while low ESR1 expression is linked to the biosynthesis of amino acids and protein processing in the endoplasmic reticulum (Fig. 6E).MTOR's high expression is connected to human T − cell leukemia virus 1 infection, protein processing in the endoplasmic reticulum, and viral carcinogenesis, whereas low MTOR expression is associated with neuroactive ligand − receptor interaction and olfactory transduction (Fig. 6F).These results underscore that the significantly enriched pathways associated with core targets align closely with those implicated in lung cancer.Notably, biosynthesis of amino acids, neuroactive ligand − receptor interaction, olfactory transduction, and protein processing in the endoplasmic reticulum emerge as pathways intricately associated with lung cancer (Fig. 7). 
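The over-representation statistics behind the GO and KEGG results above (run with clusterProfiler in the Methods) reduce, for each term, to a hypergeometric test followed by multiple-testing adjustment, which is what the Q-value threshold refers to. A minimal SciPy illustration with invented counts follows; the numbers are placeholders, not values from this study.

```python
from scipy.stats import hypergeom

# Hypothetical counts for one pathway term (placeholders):
M = 20000   # background genes
n = 350     # background genes annotated to the term (e.g. a signaling pathway)
N = 155     # candidate target genes submitted for enrichment
k = 12      # candidate targets annotated to the term

# P(X >= k) under the hypergeometric null, i.e. the over-representation p-value.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"term hit {k}/{N} vs {n}/{M} background, p = {p_value:.3g}")
```

In practice this test is repeated for every GO term and KEGG pathway, and the resulting p-values are adjusted (for example by the Benjamini-Hochberg procedure) before applying the significance thresholds quoted in the Methods.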
Gene-pathway correlation analysis
The network pharmacology results presented above indicate that HIF1A stands out as one of the top three hub genes, and the HIF-1 signaling pathway emerges prominently as one of the most enriched pathways. The HIF-1 pathway is widely acknowledged as central to orchestrating cellular responses under conditions of hypoxia [31]. In light of this, we postulated that the hypoxic response and the hypoxia-related HIF-1 signaling pathway constitute crucial molecular mechanisms through which Brevilin A exerts its anti-lung cancer effects. We therefore investigated the correlations between the hub genes and the cellular response to hypoxia pathway. Our findings were consistent with this hypothesis: all of the top six hub genes except MTOR (STAT3, TNF, HIF1A, PTEN, and ESR1) demonstrated significant associations with the cellular response to hypoxia pathway (Fig. 8).

Validation of molecular docking and molecular dynamics simulation
Building upon insights gleaned from the prior analyses, the present study employed molecular docking and molecular dynamics simulation techniques to assess the binding mode and affinity of Brevilin A with the HIF1A and PTEN targets. The outcomes of molecular docking are visually depicted in Fig. 9, encompassing both 2D and 3D representations. Additionally, Table 3 provides a succinct summary of the binding mode and affinity. The results indicate robust binding between Brevilin A and the respective targets, with binding affinities of -8.08 (HIF1A) and -7.46 (PTEN). Following this, the optimal conformations underwent a 50 ns molecular dynamics simulation to evaluate the stability of the binding. As illustrated in Fig. 10A, the RMSD of the complexes achieved stability at 20 ns, displaying only limited fluctuations, indicative of secure ligand binding to the target pockets. Subsequently, RMSF was computed to assess atom deviations within the proteins (Fig. 10B). The findings indicate that fluctuations primarily occurred at the terminals of the proteins, without compromising the integrity of the binding pocket. Furthermore, the Rg value was employed to elucidate the conformational state of the proteins. As shown in Fig. 10C, the proteins underwent a sequence of swelling and recovery prior to 20 ns, maintaining stability during the subsequent simulation period. An analysis of the SASA was conducted to assess the proteins' capacity to interact with surrounding solvents throughout the simulations (Fig. 10D). The findings demonstrated a reduction in SASA, suggesting a gradual enhancement in the binding affinity between the ligand and proteins [32]. The MM/PBSA approach, a widely used method for re-evaluating binding affinity, facilitated the calculation of the BFE between the ligand and protein [33]. Analysis of the results revealed that the Brevilin A-HIF1A and Brevilin A-PTEN complexes exhibited BFEs of -40.431 kJ/mol and -80.088 kJ/mol, respectively (Table 4), indicative of a robust binding affinity.
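As a point of reference, the RMSD reported in Fig. 10A reduces, per frame, to a simple coordinate deviation. A minimal sketch, assuming the frames have already been least-squares superimposed (which MD packages do before computing RMSD); the coordinates below are toy values:

    import numpy as np

    def rmsd(coords_t, coords_ref):
        # Root-mean-square deviation between two (N_atoms, 3) coordinate
        # frames that are already fitted onto each other.
        diff = np.asarray(coords_t) - np.asarray(coords_ref)
        return np.sqrt((diff ** 2).sum(axis=1).mean())

    # Toy trajectory: a reference frame plus a slightly displaced frame
    ref   = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    frame = ref + 0.05
    print(f"RMSD = {rmsd(frame, ref):.3f} nm")   # ~0.087 nm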
Brevilin A-induced NSCLC cell death via targeting the HIF-1α pathway
To assess the impact of Brevilin A (Fig. 11A) on lung carcinoma cells, we conducted a CCK-8 assay. Our results revealed that the administration of Brevilin A resulted in a dose- and time-dependent reduction in the viability of A549 cells (Fig. 11B, C). Following exposure to Brevilin A, there was a noticeable decrease in the quantity of adherent cells, accompanied by a morphological transformation characterized by a rounded appearance (Fig. 11D). These results demonstrate pronounced cytotoxicity of Brevilin A towards NSCLC cells, consistent with the outcomes derived from network pharmacology. To further validate the outcomes obtained through network pharmacology, we examined the mRNA expression level of HIF-1α in A549 cells. Remarkably, Brevilin A treatment significantly reduced HIF-1α expression in a dose-dependent manner (p < 0.05) (Fig. 11F). Additionally, exposure to Brevilin A led to a significant increase in PTEN mRNA levels (p < 0.05) (Fig. 11E). These findings indicate that Brevilin A potentially induced NSCLC cell death by targeting the HIF-1α pathway, aligning with the network pharmacology results.
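The dose-dependent mRNA changes above come from the ΔΔCt method mentioned in the Methods. A minimal sketch with hypothetical Ct values (the reference gene and the numbers are invented for illustration):

    def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        # 2^-ΔΔCt fold change of a target gene, normalized to a reference
        # gene and expressed relative to the vehicle control.
        delta_treated = ct_target - ct_ref
        delta_control = ct_target_ctrl - ct_ref_ctrl
        return 2 ** -(delta_treated - delta_control)

    # Hypothetical Ct values (target = HIF-1α, reference gene assumed, e.g. GAPDH):
    print(relative_expression(26.0, 18.0, 24.5, 18.0))   # ≈ 0.35, i.e. reduced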
Discussion
Sesquiterpene lactones, derived from plants, are widely employed in TCM for their anti-inflammatory and anticancer properties [34]. These compounds exhibit reactivity with functional groups, notably the thiol group on proteins and enzymes. They demonstrate selectivity towards tumor and cancer stem cells by targeting specific signaling pathways, making them noteworthy agents in cancer clinical trials [35,36]. Previous studies from our research group have established the neuroprotective effects of Brevilin A against lipopolysaccharide-induced neuroinflammation both in vitro and in vivo. In the present study, we elucidate the therapeutic efficacy of Brevilin A in the context of lung cancer, operating through multi-target, multi-biological-process, and multi-pathway mechanisms.

Our results highlight STAT3, TNF, HIF1A, PTEN, ESR1, and MTOR as potential therapeutic targets for Brevilin A's anticancer activity. STAT3, a transcription factor integral to diverse biological processes, including cell proliferation, survival, differentiation, and angiogenesis [37], has been implicated in various human cancers, such as head and neck tumors, cervical cancer, gastric carcinoma, and colon cancer [38][39][40][41]. Notably, exosome-mediated transfer of specific microRNAs has been associated with the activation of STAT3 signaling-induced epithelial-mesenchymal transition in lung cancer cells [42]. TNF-α, a member of the tumor necrosis factor superfamily, exhibits a spectrum of biological activities [43] and has been implicated in numerous human cancers, influencing processes such as growth, invasion, and metastasis [44,45]. In NSCLC patients, elevated levels of IL-1, IL-6, and TNF-α have been linked to cancer pain and prognosis [46]. HIF (hypoxia-inducible factor) [47], a transcription factor crucial for tumor angiogenesis, cell survival, proliferation, apoptosis, metastasis, infiltration, and metabolism [48], plays a pivotal role in promoting lung cancer cell proliferation under conditions of chronic intermittent hypoxia [49]. Studies also demonstrate that certain formulations in TCM can inhibit NSCLC cell proliferation by downregulating HIF-1α expression [50]. Phosphatase and tensin homolog deleted on chromosome ten (PTEN), encoding the classical PTEN protein with phosphatase activity, acts as a tumor suppressor by antagonizing the activity of tyrosine kinases and other phosphorylases. Meta-analyses indicate a correlation between PTEN and poor prognosis in lung cancer [51], and clinical studies confirm abnormal expression of EGFR, TGF-α, P-AKT, and PTEN in NSCLC patients [52], potentially contributing to NSCLC pathogenesis. In the present study, we utilized the SwissDock website to investigate Brevilin A's potential binding sites with these proteins, revealing robust binding activity. GSEA analysis further validated the strong association of these targets with lung cancer.

The GO and KEGG analyses revealed 2893 enriched GO terms and 157 enriched KEGG pathways, encompassing notable pathways such as the PI3K-Akt signaling pathway, FoxO signaling pathway, and HIF-1 signaling pathway. The PI3K-Akt signaling pathway, governing various cellular functions including growth, differentiation, proliferation, survival, motility, invasion, and intracellular trafficking, plays a pivotal role in tumorigenesis [53]. Studies have reported the induction of apoptosis and inhibition of invasion in NSCLC through the PI3K/Akt/mTOR signaling pathway by compounds like Aloperine [54]. Additionally, CAF-derived exosomes have been identified to promote NSCLC cellular proliferation and chemoresistance through regulation of the PTEN/PI3K-AKT signaling axis [55]. The FOXO signaling pathway, triggered by the PI3K/AKT pathway, is instrumental in mediating cell proliferation, differentiation, and tumorigenesis [56,57]. Inhibition of CCCTC-binding factor (CTCF) has been shown to regulate the FoxO signaling pathway, impeding tumor growth in vivo [58]. Notably, our study is the first to unveil that Brevilin A exerts anti-lung cancer effects by targeting the HIF-1 signaling pathway. Gene-pathway correlation analysis further revealed significant associations between most hub genes and the cellular response to hypoxia pathway. Hypoxia, influencing tumor signaling pathways through hypoxia-inducible factors (HIFs) and reducing free radical production, holds significance in tumor progression. Studies have demonstrated the role of hypoxia in activating EGFR and inducing resistance to gefitinib in EGFR-mutant non-small cell lung cancer [59]. Silencing HIF-1α expression has been shown to significantly reduce the invasive ability of lung cancer cells under hypoxic conditions [60]. Molecular docking analysis and molecular dynamics simulation affirmed the robust interaction of Brevilin A with HIF1A and PTEN, respectively. In vitro experiments demonstrated that Brevilin A induces dose- and time-dependent cell death in A549 cells, concomitant with decreased HIF-1α mRNA expression and increased PTEN mRNA levels. These results suggest the potential of the HIF-1 signaling pathway as a therapeutic target for Brevilin A in lung cancer treatment. In summary, our study delineated the core targets and key pathways of Brevilin A in lung cancer through an integrated approach involving network pharmacology, molecular docking analysis, and experimental validation. The therapeutic effects of Brevilin A in lung cancer were demonstrated to involve a multi-target, multi-biological-process, and multi-pathway mechanism, with noteworthy inhibition of the HIF-1 signaling pathway. These results lay a theoretical foundation for the prospective clinical application of Brevilin A. Nevertheless, it is imperative to acknowledge certain limitations in this study. Firstly, the utilization of more comprehensive databases would enhance the reliability of the results. Secondly, further experimental validations are imperative to consolidate the present findings.
Fig. 1 Mechanistic insights into Brevilin A action against lung cancer. Schematic diagram summarizing the mechanisms underlying Brevilin A action against lung cancer using network pharmacology, molecular docking, and experimental validation.
Fig. 2 Differential gene expression analysis. (A) GEO volcano map and (B) GEO heatmap of the top 10 up- and down-regulated genes. Red and blue dots indicate up-regulated and down-regulated genes, respectively. (C) Venn diagram showing the overlap of Brevilin A-associated targets and lung cancer-related genes.
Fig. 3 Protein-protein interaction (PPI) network analysis. (A) The PPI network. (B) Interaction between these genes. (C) Hub genes identified using the MCC method.
Fig. 4 Functional enrichment analysis. Bubble chart of GO (A) and KEGG (B) function enrichment analysis.
Fig. 5 Brevilin A target pathway in lung cancer. (A) Brevilin A target-major pathway-lung cancer. (B) Distribution of key targets in the HIF-1 signaling pathway.
Fig. 9 Molecular docking results. (A) The binding mode of the Brevilin A-HIF1A complex. (B) The binding mode of the Brevilin A-PTEN complex. The 3D visualization is on the left, and the 2D visualization is on the right.
Fig. 10 MD simulation analysis. (A) RMSD quantifying the deviation of complex coordinates from the initial frame. (B) RMSF of individual protein atoms. (C) Rg for visualization of protein compactness. (D) SASA analysis of protein contact area with surrounding solvents.
Fig. 11 Brevilin A-induced A549 cell death via targeting the HIF-1α pathway. (A) The chemical structure of Brevilin A. CCK8 assay in A549 cells treated with Brevilin A for 24 h (B) and 48 h (C). (D) Photographs of A549 cells after treatment for 24 h and 48 h. mRNA expression levels of PTEN (E) and HIF-1α (F) in A549 cells treated with Brevilin A for 24 h. * p < 0.05, ** p < 0.01, **** p < 0.0001 versus vehicle control group.
Table 1 Degree of hub regulatory genes analyzed by Cytoscape.
Table 2 KEGG pathway enrichment analysis.
Table 3 The binding pose and energy between Brevilin A and the targets.
2024-06-06T20:20:50.042Z
2024-06-05T00:00:00.000
{ "year": 2024, "sha1": "99330ad5c396f8d414fa6dc6bda68c92e19aee2d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12906-024-04516-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "99330ad5c396f8d414fa6dc6bda68c92e19aee2d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
61888791
pes2o/s2orc
v3-fos-license
Intensive Reading and Necessity to Integrate Learning Strategies Instruction
For years, great emphasis has been placed on the Intensive Reading (IR) course. IR has dominated the English language curriculum; in the teacher-dominated class, students do little but read, listen, write, translate, imitate, and memorize. IR has incurred criticisms which point out the disadvantages stemming from the IR approach. A number of learning strategies, presumably relevant to the IR course in the Chinese context, are suggested for Chinese teachers of English. They may integrate instruction on the use of the suggested learning strategies with regular classroom activities.

Intensive Reading (IR) is a hair-splitting analysis of vocabulary and sentence structures, which dominates the ELT course throughout the three stages (elementary, secondary, and tertiary) of learning in China. Its dominance manifests itself in both the contact class hours it takes (2/3 of the total) and the time and effort it draws from teachers and students alike. This domination has been enhanced by the introduction of Teachers' Books, of which the dominating feature is the detailed explanation of the text. This gives learners a false impression, as if the only way to learn English were to analyze the language in a hair-splitting manner (Shu, 2010).

The term Intensive Reading per se may seem misleading to the ELT world outside China; it is not an intensive course in its usual sense, such as a summer course which is relatively short with emphasis on reinforcement and high frequency of lessons. The Intensive Reading course at tertiary level in China drags on for four semesters (two semesters a year) for non-English-major students. IR is not a reading course designed to improve reading comprehension and speed, but is the core course in ELT, which covers such contents as pronunciation, spelling, grammar, vocabulary, reading, composition, and translation, and it is taught through a textbook.

In the IR course the teacher goes through each text in a linear fashion, word by word and sentence by sentence, in order that students may understand everything about the text in terms of grammatical structure, usage, vocabulary, and sentence understanding. The IR class relies heavily on strong teacher control and apportions a major part of total talking to the teacher. The text is removed from its total context of meaning and examined as an object for analysis. Emphasis is placed on rote memorization of items chosen from the text. Besides, a lot of words elicited from each lesson have to be committed to memory on the part of students.
People usually read for information; it is therefore important to teach the student to read faster and read extensively, so that he/she can enhance his/her feel for the target language and thereby develop unconscious language habits. But it is equally important for a Chinese student of English to make a conscious and intensive study of the language and its fundamentals so as to overcome the vast gulf between his/her own language and the target language. When a student is still in the dark about the fundamentals of the English language, to let him/her indulge in extensive reading without proper guidance is to leave him/her groping about blindly in a maze without getting anywhere. Therefore, at the initial stage and intermediate stage a guided intensive course has proved indispensable to systematic learning of basic grammar, structure, phonetics, basic vocabulary and useful idiomatic expressions. It has proved essential, especially when it is combined with writing, speaking and even listening, thus becoming a comprehensive language course. At these two stages, IR has proved effective in enabling the student of English to acquire a sound knowledge of the language, to develop his/her five basic skills, that is, listening, speaking, reading, writing, and translating, to develop correct methods of study and ultimately to cultivate a capacity to work on his/her own. On such a solid foundation, a magnificent edifice of language proficiency can rise to a considerable height.

However, the importance of laying a solid foundation in language learning should by no means be overemphasized once the student has passed through the primary and secondary stages and successfully entered the advanced stage of his/her language learning. There should accordingly be a shift of stress as the learner moves from the initial stage through the intermediate stage to the advanced stage of learning. So far as we know, at the stage of post-graduate teaching, most universities and colleges continue to give top priority to intensive reading at the expense of other practical courses. Graduate students have to devote more than half their time and effort to intensive reading courses.

Four Centers in IR Courses
With regard to the scope and content IR covers, it is a multi-purpose comprehensive course in the Chinese context featured by four-centredness.

Teacher-centredness
The teacher is an authority, an intellectual, and a model, and is regarded as a source of knowledge.

Textbook-centredness
Each lesson consists of a model text supplemented by exercises covering phonetics, grammar, vocabulary, composition, translation, and reading comprehension. In fact the text itself serves as the basis for oral English practice (pronunciation, dialogues, questions, composition, and translation).

Grammar-centredness
Texts are syntactically concerned with sentence patterns and grammatical analysis of the structures. Grammar-translation approaches are commonplace, and the teacher and textbook transmit this subject knowledge to students.

Vocabulary-centredness
Vocabulary is a focus of instruction. On average 50-60 new words appear in each new lesson (16-18 texts per semester). Science students (not English majors) at tertiary level are required by the National Foreign Language Syllabus and exam system to learn 4000 words in two years' formal language learning.
Criticisms Leveled at the IR Approach
IR approaches, characterized by a hair-splitting analysis of the text, emphasis on mimicry-memorization drills, overdone mechanical exercises, and spoon-feeding, have incurred criticisms since the early 1980s from some leading ELT professionals and researchers.

An American professor, who has achieved outstanding results in teaching English at Beijing University, stated in a lecture: "The greatest harm done to students of English in China is IR. IR is not reading at all; it is deciphering, analyzing. It teaches students not to read but to use dictionaries and grammar books. It teaches very bad study habits, which are very hard to break. It does more harm than good." (Dai, 2009: 22)

Other criticisms can be summarized as follows:
1. "The habitual way of advanced IR teaching hardly focuses on reading skills but, trying to cover everything, has to a large extent failed to achieve its wishful desire. The translation method and lecturing method, often adopted in advanced IR teaching, are rather ineffective and, in the final analysis, a way of cramming" (Xiao, 2004).
2. It encourages slow reading. Stop-and-go reading destroys continuity, the continuous pattern of thought. "It is a bar to the comprehension of a particular text. It takes a student so long to reach the meanings and connotations of each phrase that he often can not understand or appreciate the meaning and significance of the text as a whole" (Short, 1994). The student may have forgotten what the beginning is about by the time he has reached the end of a passage or a text. It has resulted in what a Chinese saying calls "failure to see the wood for the trees". It is interesting to note that there are many students who even incorporate some intensive reading into their extensive reading, pausing after covering several sentences or paragraphs to go back and re-read a selected sentence or paragraph with great care.
3. "The IR approach tends to increase student dependency on the teacher and the dictionary" (ibid). It discourages students from reading in an adequate, enquiring, active, hypothesizing manner. It cannot train the students to learn how to stand on their own "thinking feet" as much as possible.
4. It is a bar to processing lots of different texts. "Since the emphasis of IR is on intensiveness and since whatever is taught is supposed to be mastered, a great restriction is placed on the quantity of language materials to be taught. Even if they have mastered everything in the text, what they have learned is limited in quantity. This greatly restricts their ability to understand and use the language" (Yue, 2005).

Master Teacher and Apprentice Student
China cherishes its education tradition. China's Confucian education system (Confucius, one of the greatest thinkers in ancient China, laid the philosophical foundation for education in China) emphasizes teaching by strict model. Teachers are expected to be models for people to follow. Teaching is viewed in China as a "sacred" occupation. Throughout the country, teachers are respected and regarded as authorities in the classroom.

Teachers are believed to be the authoritative source of knowledge. They are, therefore, obliged to impart knowledge to their students. They are expected to provide background knowledge, elaborate the text, lecture on the subject they teach, and give answers to controversial questions. As an authority in the classroom, the teacher tells students what he thinks they ought to learn.
Students, on the other hand, see themselves as apprentices: their study is strongly based on the imitation of the teacher as "master" or "model". From childhood they have got used to learning things from the teacher and expecting him to do his job of clearing up all the perplexities in addition to passing on information. Because a long-term tradition in Chinese pedagogy requires students to commit large amounts of information and "text" to memory, they have to internalize knowledge through close attention and mimicry-memorization.

Since students have got accustomed to the spoon-feeding method, even at tertiary level they still "expect the teacher to structure the learning situation for them, telling them what to learn and how to learn" (Nunan, 1996). Educated by the traditional education system, the majority of students lack the initiative to seek their own learning strategies, as they believe that they can learn everything they want from the teacher. Another belief that restricts their range of learning strategies is that proficiency can be attained solely through such traditional means as grammar translation and rote memorization.

Importance of Learning Strategy Instruction
There should not be an excessive focus on IR and a concomitant lack of emphasis on communicative activities and use of the target language. The ultimate goal for our students is to be able to use the language they are learning for their own purposes, to express their own meanings, that is, to create their own formulations to express their intentions. A learner at the advanced stage (many students at this stage have passed the Band 4 Examination administered by the National Education Ministry to non-English-major students nationwide) is faced with a world of ideas. At this stage he or she needs something like a compass to guide him/her through this confusing world of ideas. Therefore we ought to train and prepare the students
1. to willingly practice English of their own accord
2. to constantly attach importance to their own use of the target language and to their performance
3. to have a positive attitude towards the language in question and towards its speakers
4. to learn to infer and make guesses about language data and how language works
5. to involve themselves as real people in the activities they are asked to undertake both inside and outside the classroom and to be independent as learners
6. to learn more effectively in a continuing way

In order to help students become efficient and independent learners, able eventually to manage their own learning, Chinese teachers need to equip students through instruction with appropriate learning strategies which will allow students to take more responsibility for their own learning by enhancing their autonomy, independence, and self-direction.

Rather than assuming that students will develop appropriate learning strategies on their own, instruction on the use of learning strategies should become part of the language teaching process in the foreign language classroom. The ESL training study conducted by O'Malley & Chamot (1990) demonstrated that learning strategies instruction can be effectively implemented in real classroom settings.
The findings reported by Hashim & Sahil (1994) also suggest that it is very important to incorporate language learning strategies into the language course in order to provide learners with greater opportunity to make language learning an autonomous process. The focus of learning strategies woven into the regular teaching process should be on helping students learn how to learn by equipping them with tools they can use on their own.

Suggested Learning Strategies
Based on the research findings on learning strategies reported by O'Malley & Chamot (1990), a number of learning strategies, presumably relevant to the IR course in the Chinese context, are suggested for Chinese teachers of English. They may integrate instruction on the use of these suggested learning strategies with regular classroom activities (Xue, 2011).

1. Advance Organization
Jones (1957) recommends that the reader should not punctuate his reading with excursions to the dictionary, but suggests that he should read the whole text first. In this view students should be taught to overcome the habit of going directly to the dictionary as soon as they identify an unknown item; they should be encouraged to identify the main ideas and concepts of the text by skim-reading first.

2. Selective Attention
Teach students how to locate in advance key words, concepts, and linguistic points in a new text that are to be the focus of a forthcoming language task.

3. Self-monitoring
Teach students to foster a habit of checking, verifying, or correcting their understanding of the ongoing language task.

4. Problem Identification
Assign students after class to identify the prior language task-related problems that hinder understanding and need resolution in the next classroom activities.

5. Resourcing
Rather than resorting to a pocket English-Chinese dictionary and Chinese versions of the text, students should be taught to use target language reference materials such as monolingual and bilingual dictionaries, encyclopedias, and related prior work.

6. Grouping
First teach students how to classify words, terminology, or concepts taught in the previous texts according to their attributes or meaning; then encourage students to do the classifying for the new text on their own.

7. Deduction/Induction
The teacher elicits from students the application of grammatical rules to identify the forms of unknown words in the text, which leads to guesses about the type of word each would be (e.g., verb, noun, etc.).

8. Elaboration
Elaboration refers to the mental process of relating new knowledge to existing information in long-term memory. It has also been described as a process of making meaningful connections between different parts of new textual information. In light of it, the teacher may point out what students have already learned and suggest how they can use this linguistic or world knowledge to make an intelligent inference about the meaning of an unknown item. In reading comprehension, for instance, the teacher may encourage students' use of prior knowledge, both academic and real-world, to make decisions about probable meaning.

9. Transfer
Teach students to learn to use previously acquired linguistic knowledge or prior skills to facilitate the understanding or production of the present language task.
10. Inferencing
"Helping students develop strategies and knowledge to use internal and external contexts to infer meaning is a major step towards helping them become independent learners" (Kang & Golden, 1994). In this view the teacher should teach students how to use the internal context of words, such as root stems and affixes, to infer their meaning. In reading comprehension, teach students to use the immediate and extended context to guess new words.

11. Summarizing
Teach students to foster a healthy habit of summarizing the previously learned paragraph or text in their own words.

Discussion and Conclusion
Developed since the early 1950s, IR has been the dominant ELT course in most institutions of higher learning in China. Looking back in historical perspective, it was the product of a particular social, economic, and linguistic situation in China. Even at present, exposure to English in the second- and third-tier cities of China is still limited and the range of English-medium activities is very narrow. Original English teaching and learning materials are relatively scarce in most libraries at tertiary level in the second- and third-tier cities of China, and opportunities for interaction with native speakers even more so.

Given this situation it is predicted that, with its comprehensive training and rich exercises, IR will continue to be an indispensable ELT course in China, despite its problems in methodology and with textbooks.

In order to enable students to cultivate a capacity to be ultimately autonomous readers and work independently, Chinese teachers in the IR course should go beyond their traditional role of knowledge provider. They need to weave learning strategies instruction into regular language task-related activities.

Class instruction on learning strategies can help students gain awareness of learning strategies. "The greater the strategy awareness of learners, the more likely they will be to use task-appropriate learning strategies that help them overcome their general learning style limitations, and the more likely that these strategies will assist in processing, retrieving, and using new language information" (Nyikos & Oxford, 1993).

In this view teachers in the IR course should create circumstances in which students can be informed of and apply strategies that are appropriate for the type of language task-related activities being presented. Further, teachers should encourage and help students apply the strategies to an expanded range of language activities and materials so that the strategies transfer to new activities and are used by students independently of the teachers' support (Hu, 2011).
2017-09-08T06:21:42.739Z
2012-03-01T00:00:00.000
{ "year": 2012, "sha1": "4a6d1f1b1db8388ed496725f7dd082dc7de3a9d2", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ells/article/download/15235/10304", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4a6d1f1b1db8388ed496725f7dd082dc7de3a9d2", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
30451180
pes2o/s2orc
v3-fos-license
Diversity and microevolution of CRISPR loci in Helicobacter cinaedi
Helicobacter cinaedi is associated with nosocomial infections. The CRISPR-Cas system provides adaptive immunity against foreign genetic elements. We investigated the CRISPR-Cas system in H. cinaedi to assess the potential of CRISPR-based microevolution of H. cinaedi strains. A genotyping method based on CRISPR spacer organization was carried out using 42 H. cinaedi strains. Sequence analysis showed that the H. cinaedi strains used in this study had two CRISPR loci (CRISPR1 and CRISPR2). The consensus direct repeat sequences in CRISPR1 and CRISPR2 were both 36 bp long, and 224 spacers were found in the 42 H. cinaedi strains. Analysis of the organization and sequence similarity of the spacers of the H. cinaedi strains showed that the CRISPR arrays could be divided into 7 different genotypes. Each genotype had a different ancestral spacer, and spacer acquisition/deletion events occurred while isolates were spreading. Spacer polymorphisms of conserved arrays across the strains were instrumental for differentiating closely-related strains collected from the same hospital. MLST had little variability, while the CRISPR sequences showed remarkable diversity. Our data revealed the structural features of H. cinaedi CRISPR loci for the first time. CRISPR sequences constitute a valuable basis for genotyping, provide insights into the divergence and relatedness between closely-related strains, and reflect the microevolutionary process of H. cinaedi.

Introduction
Helicobacter cinaedi is a gram-negative, motile, spiral, and microaerophilic bacterium belonging to the family Helicobacteraceae. It was first isolated from rectal swabs obtained from homosexual men in the 1980s [1]. Since 2000, the number of reports of H. cinaedi infections has been increasing. Examples of the diverse range of infections caused by H. cinaedi include proctocolitis, gastroenteritis, neonatal meningitis, localized pain, rash, and bacteremia [2]. This organism is difficult to culture and therefore difficult to isolate compared with other Helicobacter spp., and as a result its biological and clinical characteristics are less well understood [3]; moreover, CRISPR prevalence and diversity has not been explored in this species. The CRISPR-Cas system should provide useful information about strain characterization, lineage identification, and epidemiology. Multilocus sequence typing (MLST) is a genotyping method based on the nucleotide sequences of seven housekeeping genes, which are used to assign different alleles to sequence types (STs) and clonal complexes. MLST has been widely used in molecular epidemiology and population biology in Helicobacter species [29,30,31], and has been proven useful for typing H. cinaedi strains [32]. Genotyping analysis is crucial in terms of understanding the epidemiology of transmission; thus, the aim of the present work was to systematically investigate the prevalence and diversity of CRISPR loci in H. cinaedi. In this study, we developed a CRISPR sequence analysis for H. cinaedi and compared the results with MLST analysis.

CRISPR loci analysis
Primer pairs were designed to amplify the full CRISPR loci: CRISPR1_Forward (5'-CAATTTAGAAAACGCAGAGCC-3') and CRISPR1_Reverse (5'-GATATGATTTACCCTGCGGAAG-3'), and CRISPR2_Forward (5'-TGTCATACTGAGACTTTTGCC-3') and CRISPR2_Reverse (5'-GCTACCCAAAGTCGCCAAAAC-3').
Other primers used for sequencing are listed in S2 Table. Amplification parameters consisted of 35 cycles of denaturation at 94°C for 15 s, annealing at 55°C for 15 s, and extension at 72°C for 2 min. PCR products were sequenced using the PCR primers and sequencing primers designed based on the spacer sequences. Sequence assembly and editing were performed with DNASIS Pro Version 3.02 (Hitachi Solutions) and MEGA 6. Information pertaining to the CRISPR loci, including position, length, and content, was acquired from the CRISPR web server (http://crispr.i2bc.paris-saclay.fr/) [35]. Clustal X software was used to investigate the homology of the sequences of the CRISPR region possessed by each strain. The aligned sequences were compared by detecting identical spacers. Visual representation of the CRISPR arrays was performed as previously described [21,36]. The repeat sequences were removed for each array and the list of spacers was anchored on the ancestral spacer on the left-hand side. Each spacer within the array was visually represented by a box. This allowed a comparison of conserved arrays by aligning spacers from the ancestral end. Spacer genotyping was based on common ancestral spacers. A matrix of zeros and ones was calculated, depending on the presence or absence of spacers for every strain. The dendrogram was derived from the matrix of correlation distances using the Jaccard similarity coefficient with the Dendro-UPGMA dendrogram construction utility (DendroUPGMA, http://genomes.urv.cat/UPGMA/index.php) [37]. CRISPRTarget (http://bioanalysis.otago.ac.nz/CRISPRTarget/crispr_analysis.html) [38] was utilized to predict the presence of possible protospacers. All spacer sequences were used for homology searching to find potential protospacers with >90% sequence identity [21].
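The presence/absence clustering described above (Jaccard coefficient followed by UPGMA) can be reproduced with standard libraries; a minimal Python sketch with an invented three-strain spacer matrix (UPGMA corresponds to average-linkage hierarchical clustering):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Illustrative presence/absence matrix: rows = strains, columns = spacers
    # (1 = spacer present, 0 = absent); real data come from the CRISPR arrays.
    spacers = np.array([
        [1, 1, 0, 1, 0],   # strain A
        [1, 1, 0, 0, 0],   # strain B
        [0, 0, 1, 1, 1],   # strain C
    ], dtype=bool)

    # Jaccard distance = 1 - Jaccard similarity coefficient
    dist = pdist(spacers, metric="jaccard")

    # UPGMA = average-linkage clustering on the distance matrix
    tree = linkage(dist, method="average")
    dendrogram(tree, labels=["A", "B", "C"], no_plot=True)  # layout only; plot as needed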
Multilocus sequence typing
Primers and PCR conditions for the seven housekeeping genes were as described in a previous report [32]. After confirming the single amplification products on 1% agarose gels, sequences were determined using a BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and an automatic DNA sequencer (3130 Genetic Analyzer, Applied Biosystems). Allelic MLST sequences were analyzed using the PubMLST website (http://pubmlst.org/). Different STs and CCs were assigned using the H. cinaedi MLST database (http://pubmlst.org/hcinaedi/). The phylogeny for the 42 isolates was estimated from concatenated sequences using the neighbor-joining method [39]. Clustal X software was used to align the sequences [40] and calculate the genetic distances. The dendrogram was constructed using the NJplot program [41] and MEGA 6 [42].

The CRISPR loci structure in H. cinaedi
Based on genomic analysis [27,28], the CRISPR loci are flanked by cas genes encoding Cas proteins (Fig 1). Three cas genes (cas2, cas1 and cas9, in this order) were located upstream of CRISPR1, which is consistent with a type II system [43]. Cas1 and Cas2 are the core proteins of the CRISPR-Cas system [15]. Cas9 protein sequence analysis is consistent with the classification of type II systems characterized to date [44]. To determine the type of the H. cinaedi CRISPR-Cas system, we obtained Cas9 amino acid sequences from Gram-negative type II system-containing bacteria, as previously described [17], and compared them with the Cas9 sequences of H. cinaedi strains PAGU597T and PAGU611. We constructed a multiple sequence alignment and phylogenetic tree for Cas9 (S1 Fig). The phylogenetic tree showed that the Cas9 sequences from the two H. cinaedi strains were closely related to those of Campylobacter jejuni subsp. jejuni NCTC11168, and formed part of the subtype II-C subcluster. A RAMP gene was located downstream of the CRISPR2 locus. Cas1 and cas2 genes were not found in CRISPR2, and the predicted length of the ORF for RAMP was 1782 bp; RAMP is a signature gene of the type III system [15]. The two CRISPR loci, CRISPR1 and CRISPR2, were identified in all H. cinaedi strains by CRISPR PCR and sequencing analysis. An average of 32 spacers (ranging from 4 to 63) were identified in the CRISPR1 loci, while the CRISPR2 loci had an average of 6 spacers (ranging from 2 to 10). It has been reported that CRISPR repeats are composed of exact repeat sequences ranging from 24 to 48 bases long [45]. These sequences have also been shown to contain palindromes. The 5' terminal portion of a repeat is normally composed of the sequence GTTT(G) and the 3' terminus contains GAAA(C/G) [17,46]. Generally, repeats associated with the type II system are weakly palindromic, and typically 36 bp in length [43]. CRISPR1 and CRISPR2 in H. cinaedi strains retained a 36-bp long repeat sequence. The consensus direct repeat associated with CRISPR1 was the conserved 5'-GTTTTAGTCCCTTCTTAAACTTCTATATGCTAGAAT-3'. A conserved 5'-GTTTTAGTGGGACCCGATTTAAGGGGATTTGTATCA-3' was present in CRISPR2.
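Given the consensus repeats above, spacers can be read off an assembled locus by splitting on the repeat. The sketch below is a toy illustration with an invented locus; real tools (such as the CRISPR web server used here) allow for degenerate repeats rather than exact matches.

    # Consensus direct repeat reported above (CRISPR1 shown)
    REPEAT1 = "GTTTTAGTCCCTTCTTAAACTTCTATATGCTAGAAT"

    def extract_spacers(locus_seq: str, repeat: str) -> list[str]:
        # Return the sequences found between consecutive exact copies of the
        # repeat; the first and last fragments are flanking sequence, not spacers.
        parts = locus_seq.split(repeat)
        return [p for p in parts[1:-1] if p]

    # Hypothetical toy locus: leader + repeat + spacer1 + repeat + spacer2 + repeat + trailer
    toy = ("AAAC" + REPEAT1 + "ACGTACGTACGTACGTACGTACGTACGTAC"
           + REPEAT1 + "TTGACCTTGACCTTGACCTTGACCTTGACC" + REPEAT1 + "GGGT")
    print(extract_spacers(toy, REPEAT1))   # -> the two 30-bp spacers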
Among the STs, 11 were assigned into 6 known CCs while ST-18 was unassigned. Based on the phylogenetic tree of MLST, the 36 clinical isolates from Japan were classified into 6 clusters, CC1 (4 isolates), CC4 (14 isolates), CC8 (4 isolates), CC9 (8 isolates), CC16 (7 isolates), and unassigned CC (ST18, 3 isolates) (Fig 5). These H. cinaedi isolates were collected from 5 hospitals in Japan, and the distributions within each hospital were compared. Hospital A obtained 24 isolates over 11 years, which were subsequently divided into 5 clusters (CCs 1, 4, 8, 16, and ST-18). The reference strains (PAGU 597 T and 1744) revealed slightly different sequences compared to the Japanese isolates, while PAGU 640, 1749, 1752, and 1753, which were classified as ST-4, had the same ST as the isolates from hospital A. Comparison of CRISPR analysis and MLST Twelve MLST STs were identified among the 42 isolates, whereas there was a greater number of CRISPR patterns (20 CRISPR1 patterns, 16 CRISPR2 patterns, Table 1 and Fig 4), which indicated that CRISPR analysis has greater discriminatory power than MLST. Isolates assigned to ST-4 diversified into six distribution patterns (B, C, D, E, F and G) of CRISPR1 and six patterns of the CRISPR2 (b, c, d, e, f and g). Similarly, the strains assigned to ST-3, ST-8, and ST-16 differentiated into separate CRISPR1 patterns (ST-3, H and I; ST-8, L and M; ST16, P, Q, R, and S). Each sequence of seven housekeeping genes among the PAGU611 and PAGU1496 strains belonging to ST-8 demonstrated identical sequences at all seven loci. However, these strains were isolated at different times and, according to the distribution of CRISPR1 loci, it appeared that the spacer defect of PAGU1496 occurred between 2004 and 2010 in hospital A (spacer 6K, Fig 2). The diversity was revealed by determining the CRISPR sequences for strains assigned to the same ST in MLST analysis. Discussion Reports on the number of H. cinaedi infections have been steadily growing, and the association of this bacterium with a variety of human infections and atherosclerotic diseases has received increasing attention in recent years [3,10]. H. cinaedi is currently the most commonly reported enterohepatic Helicobacter isolated in humans. Kitamura et al. previously documented an outbreak of nosocomial H. cinaedi infections caused by direct person-to-person spread [6]. We have also received reports of a growing number of cases of nosocomial H. cinaedi infections in Japan. Indeed, this microorganism is recognized as a causative agent of nosocomial infections [4]. H. cinaedi strains were isolated from men and women of a broad age-range (from neonates to the elderly). Some patients had immunocompromised conditions, while others had not been in apparently immunocompetent [47]. H. cinaedi infections have been detected in hospitals throughout Japan, and we hypothesize that they are more common in Japanese hospitals than is currently recognized. This study attempted to compare CRISPR arrays to gain an understanding of the diversity of H. cinaedi. The CRISPR1-cas locus possessed the minimum number of cas genes required to formulate the cas operon-a characteristic of subtype II-C [15]. The repeats of H. cinaedi CRISPR1 were 36 bp in length, which corresponded with type II systems. The cas components suggested that CRISPR2 of H. cinaedi strains resembles type III systems. Cas1 and cas2 genes were not found in the CRISPR2 loci, but in many organisms, the type III CRISPR-cas operons lack the cas1-cas2 gene pair [15]. 
Discussion
Reports of H. cinaedi infections have been steadily growing, and the association of this bacterium with a variety of human infections and atherosclerotic diseases has received increasing attention in recent years [3,10]. H. cinaedi is currently the most commonly reported enterohepatic Helicobacter isolated in humans. Kitamura et al. previously documented an outbreak of nosocomial H. cinaedi infections caused by direct person-to-person spread [6]. We have also received reports of a growing number of cases of nosocomial H. cinaedi infections in Japan. Indeed, this microorganism is recognized as a causative agent of nosocomial infections [4]. H. cinaedi strains were isolated from men and women of a broad age range (from neonates to the elderly). Some patients had immunocompromised conditions, while others were apparently immunocompetent [47]. H. cinaedi infections have been detected in hospitals throughout Japan, and we hypothesize that they are more common in Japanese hospitals than is currently recognized. This study attempted to compare CRISPR arrays to gain an understanding of the diversity of H. cinaedi. The CRISPR1-cas locus possessed the minimum number of cas genes required to formulate the cas operon, a characteristic of subtype II-C [15]. The repeats of H. cinaedi CRISPR1 were 36 bp in length, which corresponds with type II systems. The cas components suggested that CRISPR2 of H. cinaedi strains resembles type III systems. Cas1 and cas2 genes were not found in the CRISPR2 loci, but in many organisms the type III CRISPR-cas operons lack the cas1-cas2 gene pair [15].

Hospital A has been isolating H. cinaedi strains since 2004. Two genotypes of H. cinaedi (genotypes G1 and G3) were found in 2004 in a comparison of the evolution of spacer organization over time. Genotypes G1 and G3 were distinguished by the presence of different ancestral spacers. Genotype G1 strains shared the ancestral spacers 1A and 1a. The ancestral spacers 6A and 1p were present in genotype G3. In a previous analysis using pulsed-field gel electrophoresis typing [6], the strains isolated from 2004 to 2005 in hospital A could be divided into two clusters (initial outbreak strain, subsequent outbreak strain). This clustering pattern was also supported by the phylogenetic tree of the hsp gene, as well as by the RAPD pattern. These findings are consistent with our results showing genotypes G1 and G3 by CRISPR analysis (Fig 4). Our results not only provide information about the homology of the sequences in the CRISPR region, but also enable the process of spread to be traced via CRISPR arrays by showing the acquisition and deletion of spacers. Although genotype G1 strains have been circulating in hospital A since 2004, the arrangement of the spacers has frequently changed. These strains were subsequently isolated in the same hospital in 2008, 2009, and 2010. Based on CRISPR distribution, the genotype G1 isolates obtained in hospital A were further divided into three subtypes (genotype G1-I: PAGU 617 and 627; genotype G1-II: PAGU 1024, 1123, 1124, and 1125; genotype G1-III: PAGU 1411, 1459, 1500, and 1513). The predecessor of these subtypes was not identified in this study, and spacer deletions occurred while the genotype G1 isolates were spreading. These data show that the CRISPR pattern can systematically distinguish closely-related strains and reflect the microevolution of strains, which is particularly relevant among the same genotypes. Strains classified as genotype G3 were isolated for the first time in 2004 (PAGU611, PAGU612, and PAGU614), circulated without elimination for several years at the same hospital, and were again detected in patients in 2010 (PAGU1496). In addition to the two major genotypes G1 and G3, genotypes G2, G5, and G6 have also circulated since 2011 at hospital A. Based on CRISPR analysis, all strains from hospital B were classified as genotype G4 except one (PAGU 1294). However, the six isolated strains were grouped into two STs (ST-10 and ST-11) via MLST. The division of the strains into ST-10 and ST-11 was due to differences at two bases of the 23S rRNA sequence. Alignments of the 23S rRNA gene sequences showed that the nucleotides at positions 547659, 547760, and 548262 (following the base order of the genomic sequence of H. cinaedi PAGU597T, AP012492) were G-T-T in ST-10 and G-C-C in ST-11, respectively. The nucleotide sequence at the above-mentioned sites of the strain classified as ST-9 from hospital C is G-T-C. Thus, the distinction between the three STs classified as CC-9 derives from only two base differences in the nucleotide sequence of the 23S rRNA gene. In the 8 strains classified as CC-9, the nucleotide sequences of the other 6 genes were identical by MLST. A comparison of the 23S rRNA gene sequences has previously been reported for the strains isolated in hospital B [48]. These ST-10 and ST-11 strains were isolated from female and male patients, respectively, and it was reported that nosocomial infections could have occurred in these cases via the female or male toilets, respectively.
Although the efficacy of sequencing analysis of the 23S rRNA gene has been described [49], the sequences of the 23S rRNA gene of the 4 strains assigned to ST-1 and ST-3 appeared identical, as did those of the 20 ST-4, ST-5, ST-8, and ST-9 strains in this study (Table 1). Therefore, the discriminatory value of 23S rRNA gene sequencing analysis for H. cinaedi strains is low. The gyrA sequence is an appropriate marker with a high discrimination rate for the phylogenetic analysis of the Helicobacter genus [50]. We evaluated the genetic relationships of our isolates using gyrA and 16S rRNA gene sequences, which are the gold standard for phylogenetic analysis. The gyrA sequences of the H. cinaedi isolates showed low diversity (S2 Fig), which led us to conclude that these sequences were useful for analysis within the genus, but not within the species. The 16S rRNA gene was further investigated for the analysis of H. cinaedi isolates within the species (S3 Fig). It is generally thought that the 16S rRNA gene is insufficient for identification at the species level as a stand-alone technique in phylogenetic analysis, but the 16S rRNA phylogenetic tree yielded the same topology as MLST in the H. cinaedi species. Thus, phylogenetic analysis of the 16S rRNA gene was considered reliable for H. cinaedi, contrary to its use for other bacterial species. In MLST analysis, the strains assigned to ST-3, ST-4, ST-8, and ST-16 had seven genes showing identical nucleotide sequences within each ST group, and no diversity was observed. Meanwhile, in CRISPR analysis, these strains had different spacer distributions, even within the same MLST ST, and the strains belonging to one ST were divided into two or more CRISPR patterns (Table 1). The 12 STs were divided into 20 CRISPR patterns, and CRISPR typing is considered to have higher discriminatory power than MLST. In addition, the spacer array of CRISPR does not only distinguish between strains, but also provides useful background information about the evolution of the strains. We can predict the relatedness of isolates depending on whether they have a common ancestral spacer. For these reasons, CRISPR analysis is thought to be efficient and to provide more information than other genotyping methods. We have described the epidemiological analysis of H. cinaedi isolates using CRISPR arrays. The polymorphisms among the organization of spacers reflect the adaptation process of H. cinaedi. Thus, the distribution of CRISPR spacers may assist in the study of nosocomial H. cinaedi infections, and may be useful for typing H. cinaedi isolates and elucidating how they spread. CRISPR-Cas system data will contribute to a better understanding of the origins and microevolution of this microorganism.

Supporting information
S1 Fig. Cas9 proteins from Gram-negative type II system-containing bacteria are referenced [17]. A phylogenetic tree based on Cas9 proteins was constructed by the neighbor-joining method. H. cinaedi strains PAGU597T and PAGU611 are shown in red. The two Cas9 protein sequences were obtained from the DDBJ (Accession Nos. AP012492 and AP012344, respectively). (PDF)
2018-04-03T05:14:17.785Z
2017-10-13T00:00:00.000
{ "year": 2017, "sha1": "0719c89b28945077779808f17d46f515515ccba4", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0186241&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0719c89b28945077779808f17d46f515515ccba4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16857539
pes2o/s2orc
v3-fos-license
Radio continuum emission and water masers towards CB 54
We present high angular resolution observations of water masers at 1.3 cm and radio continuum emission at 1.3, 3.6 and 6 cm towards the Bok globule CB 54 using the Very Large Array. At 1.3 cm, with subarcsecond angular resolution, we detect a compact radio continuum source located to the south-west of the globule and spatially coincident with a mid-infrared embedded object (MIR-b). The spectral index derived between 6 and 1.3 cm (alpha = 0.3 +/- 0.4) is flat, consistent with optically thin free-free emission from ionized gas. We propose the shock-ionization scenario as a viable mechanism to produce the radio continuum emission observed at cm frequencies. Water masers are detected at two different positions separated by 2.3'', and coincide spatially with two mid-infrared sources: MIR-b and MIR-c. The association of these mid-IR sources with water masers confirms that they are likely protostars undergoing mass loss, and they are the best candidates as driving sources of the molecular outflows in the region.

Introduction
The Bok globule CB 54 is an active star forming region associated with the Vela OB1 cloud complex and located at 1.5 kpc (Launhardt & Henning 1997). Due to their simplicity and relatively small size (<10'), Bok globules offer a unique environment to study the outcome of the star formation processes in a relatively idealized and isolated way (Bok & Reilly 1947; Clemens & Barvainis 1988). CB 54 shows different signposts of multiple star formation. The region contains several molecular outflows. There is a main bipolar CO outflow oriented in the northeast-southwest direction and centered near the IRAS point-like source IRAS 07020-1618 (also named CB 54 YC1; Yun & Clemens 1994b). In addition, several H2 [v = 1-0 S(1)] line emission knots detected by Khanzadyan (2003) suggest the presence of a second outflow in the east-west direction, which indicates the existence of different driving protostars. In fact, this globule harbors at its center a multiple protostellar system of young stellar objects (YSOs) at different stages of evolution. Near-IR observations towards the central IRAS source revealed the presence of two bright near-IR (K band, 2.2 µm) objects classified as Class I protostellar candidates, CB 54 YC1-I (a confirmed Class I source; Ciardi & Gómez-Martín 2007) and CB 54 YC1-II, plus a bright elongated feature (CB 54 YC1-SW) mainly seen in the H2 [v = 1-0 S(1)] 2.121 µm line (Yun & Clemens 1994a; Yun et al. 1996; Khanzadyan 2003). Water masers were detected by Gómez et al. (2006) and de Gregorio-Monsalvo et al. (2006) inside this southern elongated feature, suggesting the presence of an embedded protostar that pumps the maser emission. This prediction was recently confirmed by the discovery of three faint mid-IR sources clustered near the position of the IRAS source and within the near-IR elongated feature (Ciardi & Gómez-Martín 2007). They were named MIR-a, MIR-b, and MIR-c and interpreted as very cool (≃100 K) Class 0 protostellar candidates of masses ∼1.5 M⊙, ∼4 M⊙, and ∼0.2 M⊙, respectively. Water maser emission at 22 GHz is a good tracer of the mass-loss phenomena observed at the earliest stages of the formation of stars of all masses (Rodríguez et al. 1980; Felli, Palagi, & Tofani 1992; Xiang & Turner 1995; De Buizer et al. 2005).
In the case of low-mass objects, these water masers are usually associated with the youngest Class 0 protostars, produced by the interaction of powerful jets with a large amount of circumstellar material (Furuya et al. 2001), and they tend to be located close to their powering source (within several hundred AU; Chernin 1995; Claussen et al. 1998; Furuya et al. 2000, 2003). At those earliest stages of evolution young protostars show the most powerful molecular outflows (Bontemps et al. 1996), which are believed to be driven by collimated jets (Raga et al. 1993). The central objects that power the outflows are frequently associated with weak and compact centimeter free-free continuum emission from thermal radio jets (Anglada 1995, 1996; Beltrán et al. 2001). These radio jets trace the part of the outflow closest to the exciting source. These properties make the combination of water masers and radio continuum emission well suited for pinpointing the location of Class 0 protostars. In this work we present sensitive interferometric observations of water masers and radio continuum at 1.3 cm, using the Very Large Array (VLA). We also show radio continuum data at 3.6 and 6 cm from the VLA archive. The main goals of these observations were to derive accurately the position of the water maser emission, to pinpoint the location of the exciting sources of the maser phenomenon, and to derive information about the driving engine of the molecular outflows that exist in the region. This paper is structured as follows: in §2 we describe the observations and data processing; in §3 we present and discuss the results derived from the radio continuum and water maser observations; finally, we present the conclusions of this work in §4.

Observations and data processing
Observations towards CB 54 were performed on 2005 January 22 and 31, and February 4, using the VLA of the National Radio Astronomy Observatory (NRAO) in the BnA configuration (project AG684). We observed simultaneously the 6(16)-5(23) transition of H2O (rest frequency = 22235.080 MHz) and continuum at 22285.080 MHz (≃1.3 cm), using the four-IF spectral line mode and processing both right and left circular polarizations. For the H2O observations we sampled 64 channels over a bandwidth of 3.125 MHz, centered at V_LSR = 15 km s^-1, with 0.66 km s^-1 velocity resolution. For the continuum observations we used a bandwidth of 25 MHz that comprised 8 channels of 3.125 MHz. The total observing time including calibration was 4.5 hours per day. The splitting of the observations into three different days was required to reach the necessary sensitivity for the continuum data. Our flux calibrator was 3C48, for which we adopted a flux density of 1.1 Jy using the latest VLA values (1999.2). The source J0609-157 was used as phase and bandpass calibrator (bootstrapped flux density = 3.90 ± 0.08 Jy). The phase center of the observations was R.A.(J2000) = 07h 04m 21.4s, Dec(J2000) = -16° 23' 15". The Astronomical Image Processing System (AIPS), developed by NRAO, was used to calibrate and process our data. We produced H2O line maps setting the "robust" weight parameter to 0, as a compromise between angular resolution and sensitivity. The size of the synthesized beam was ≃0.25'' × 0.14'' in the maps of each individual observing day. The water maser emission was strong enough to enable self-calibration. Spectral Hanning smoothing was applied to mitigate the Gibbs ringing, which provided a final velocity resolution of 1.3 km s^-1. The 1.3 cm continuum data were cross-calibrated using the self-calibration solutions obtained from the line data for each individual day and were combined.
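The quoted 0.66 km s^-1 resolution follows directly from the channel width via Δv = c Δν/ν. A quick check in Python:

    C_KM_S = 2.99792458e5          # speed of light [km/s]

    def channel_velocity_width(bandwidth_mhz, n_channels, rest_freq_mhz):
        # Velocity width of one spectral channel: Δv = c Δν / ν
        dnu = bandwidth_mhz / n_channels
        return C_KM_S * dnu / rest_freq_mhz

    # The setup described above: 3.125 MHz over 64 channels at 22235.080 MHz
    print(f"{channel_velocity_width(3.125, 64, 22235.080):.2f} km/s")   # 0.66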
Continuum maps were obtained using natural weighting to improve the signal-to-noise ratio, providing a synthesized beam of ≃0.30″ × 0.19″ (P.A. = 65°). We have also reanalyzed radio continuum data at 8.44 GHz (≃3.6 cm) and 4.86 GHz (≃6 cm) from the VLA archive (these data were included in the papers by Yun et al. 1996 and Moreira et al. 1997). Both observations were performed with the array in the D configuration, for projects AY071 and AY073. A total bandwidth of 100 MHz was selected in the two sets of observations, and both right and left circular polarizations were processed. The time on source was ≃20 minutes for the 8.44 GHz data and ≃1 hour for the 4.86 GHz data. The source 3C48 was selected as the flux calibrator in both cases (adopted flux densities of 3.2 Jy and 5.4 Jy at 8.44 and 4.86 GHz, respectively). We have summarized the setup of these archival observations in Table 1.

Radio continuum emission
We have detected a compact (≤0.2″) continuum source at 1.3 cm (Table 2) at a position coinciding with the near-infrared elongated feature CB 54 YC1-SW (see Fig. 1). This feature had been proposed to trace an embedded YSO on the basis of its water maser emission, a result recently confirmed by the detection of three mid-infrared sources within this feature by Ciardi & Gómez-Martín (2007), who classified these objects as Class 0 protostellar candidates. Our 1.3 cm source is spatially coincident with MIR-b (see Fig. 1), one of the mid-infrared protostars detected by Ciardi & Gómez-Martín (2007). The radio continuum emission at 3.6 and 6 cm is unresolved at both frequencies (see contour maps in Fig. 2). In Table 2 we give detailed information about the positions, flux densities and uncertainties of the continuum emission presented in this section. From this analysis, we find that the position of the radio continuum emission at the three frequencies is the same within the absolute positional errors, and we conclude that it comes from the same source, named CB 54 VLA1 by Yun et al. (1996) and Moreira et al. (1997).

Origin of the radio continuum emission
In order to study the nature of the radio continuum emission associated with CB 54 VLA1, we compare the centimeter continuum luminosity inferred from the radio observations with the centimeter continuum luminosity expected from the Lyman-continuum radiation of a ZAMS star with the luminosity of the source. The bolometric luminosity derived from the flux densities of the source IRAS 07020−1618, close to our radio continuum source, is ∼344 L⊙ for an adopted distance of 1.5 kpc (Wang et al. 1995), which corresponds to a B5.5 ZAMS star (Thompson 1984). We caution that the cluster of three YSOs detected by Ciardi & Gómez-Martín (2007) falls within the positional error ellipse of IRAS 07020−1618 and could contribute to the total luminosity of the IRAS source; we therefore consider this value an upper limit. The observed radio continuum luminosity at 1.3 cm (i.e., Sν d²) is ∼7 × 10⁻¹ mJy kpc². On the other hand, assuming optically thin free-free emission from ionized hydrogen with an electron temperature of 10⁴ K, we derive an expected upper limit of Sν d² ≲ 7 × 10⁻³ mJy kpc² from a Lyman-continuum flux of ∼7 × 10⁴¹ s⁻¹ (obtained from Thompson 1984 for a B5.5 ZAMS star). Thus, ionization by stellar photons falls short by two orders of magnitude of explaining the observed radio emission, and another ionizing mechanism is required.
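The two quantities being compared here are the observed and expected centimeter continuum luminosities, Sν d². The sketch below reproduces this order-of-magnitude comparison and shows how a two-point spectral index and its uncertainty are obtained from a pair of flux densities; the flux-density and error values are placeholders (Table 2 is not reproduced here), not the measured ones.

```python
import numpy as np

# Two-point spectral index alpha (S_nu ∝ nu^alpha) with simple error propagation.
def spectral_index(s1, e1, nu1, s2, e2, nu2):
    alpha = np.log(s1 / s2) / np.log(nu1 / nu2)
    err = np.hypot(e1 / s1, e2 / s2) / abs(np.log(nu1 / nu2))
    return alpha, err

# Placeholder flux densities in mJy at 22.3 and 4.86 GHz (illustrative only).
alpha, err = spectral_index(0.33, 0.06, 22.3, 0.22, 0.05, 4.86)
print(f"alpha(6-1.3 cm) = {alpha:.1f} +/- {err:.1f}")

# Order-of-magnitude comparison quoted in the text (units: mJy kpc^2):
observed_Lcm = 7e-1   # S_nu * d^2 measured at 1.3 cm
expected_Lcm = 7e-3   # upper limit from photoionization by a B5.5 ZAMS star
print(f"shortfall: x{observed_Lcm / expected_Lcm:.0f}")  # two orders of magnitude
```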
This behavior has been observed before, for instance by Torrelles et al. (1985), Rodríguez et al. (1989) and Anglada (1995) for a large set of low-mass YSOs. A plausible mechanism for explaining the observed centimeter continuum emission is the shock-ionization scenario proposed by Torrelles et al. (1985). In this scenario, the stellar wind responsible for a molecular outflow generates shocks in the dense gas surrounding the central protostar and induces its ionization. Curiel et al. (1987, 1989) modeled the shock-ionization scenario and derived the radio continuum emission under optically thin conditions. The spectral index we measure in the 6−1.3 cm wavelength range is α(6−1.3 cm) = 0.3 ± 0.4 (where Sν ∝ ν^α), consistent within the errors with optically thin free-free emission. The formulation of Curiel et al. (1987, 1989) predicts a correlation between Sν d² and the momentum rate of the outflow Ṗ. Assuming that the momentum rate in the outflow equals that in the stellar wind, Ṗ = Ṁ v (where Ṁ is the mass-loss rate of the wind and v its terminal velocity, for which we adopt a typical value of 200 km s⁻¹), and a typical electron temperature in the ionized wind of 10⁴ K, the prediction of the model can be written as

(Sν d² / mJy kpc²) ≃ 10^3.5 η (Ṗ / M⊙ yr⁻¹ km s⁻¹),

where Sν is the flux density at 6 cm and η = Ω/4π is an efficiency factor representing the fraction of the stellar wind that is shocked and produces the observed radio continuum emission. Scaling the outflow force derived by Yun & Clemens (1994b) from CO observations to the adopted distance of 1.5 kpc, we obtain Ṗ = 4 × 10⁻⁴ M⊙ km s⁻¹ yr⁻¹. Considering a radio continuum luminosity at 6 cm of 0.5 mJy kpc², we derive an efficiency factor η ≃ 0.4, which indicates that the shock-ionization mechanism could explain the observed radio continuum emission. We note that our estimate of the efficiency factor η can be affected by large errors, mainly due to the uncertainty in the value of the momentum rate of the outflow derived from molecular line observations (see Anglada et al. 1992 and Anglada 1995 for a detailed discussion of the dependence of the error in η on the observational parameters). The efficiency factor derived in this work is somewhat higher than the average value of η ≃ 0.1 derived by Anglada (1995) for a large set of low-luminosity objects. Nevertheless, the dispersion of the efficiency values is relatively large, and the best fit to that set of data provides a value of η = 10^(−1±0.6) (adopting an uncertainty of 2σ), i.e., 0.025 < η < 0.4. The observations presented here make CB 54 VLA1 a very good candidate for driving a molecular outflow. Nevertheless, higher angular resolution observations at centimeter and millimeter wavelengths are needed to study the presence of a jet-disk system, since these structures are typically observed with sizes of ≃100 AU (≃0.07″ at a distance of 1.5 kpc; see Anglada 1996). In particular, high angular resolution observations of the 3.6 and 6 cm emission, which is optically thicker than that at 1.3 cm, could better trace low-brightness structures and would therefore be useful to probe the presence of a thermal radio jet elongated in the same direction as the large-scale molecular outflow. On the other hand, millimeter continuum data would be useful to reveal the presence of heated dust associated with a possible protoplanetary disk. Table 3 contains the results of our water maser observations with the VLA.
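The efficiency-factor estimate above can be checked numerically. The relation is used here in its standard optically thin form, Sν d² ≃ 10^3.5 η Ṗ (with Sν d² in mJy kpc² and Ṗ in M⊙ yr⁻¹ km s⁻¹); the 10^3.5 coefficient is an assumption consistent with the numbers quoted in the text rather than a value taken from it.

```python
# Shock-ionization efficiency factor eta, assuming the relation
#   S_nu * d^2 = 10**3.5 * eta * Pdot
# with S_nu*d^2 in mJy kpc^2 and Pdot in Msun yr^-1 km s^-1.
S_d2_6cm = 0.5    # observed 6 cm continuum luminosity, mJy kpc^2
Pdot = 4e-4       # outflow momentum rate scaled to 1.5 kpc, Msun km s^-1 yr^-1

eta = S_d2_6cm / (10**3.5 * Pdot)
print(f"eta = {eta:.2f}")   # ~0.4, as quoted in the text
```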
We observe three independent spectral features (see Fig. 3, left panel). One of the features is approximately at the velocity of the cloud (VLSR = 19.5 km s⁻¹; Clemens & Barvainis 1988) and the rest are blueshifted by up to 10 km s⁻¹ from the cloud velocity. All of them were detected on the three different days of observation. The masers are found at two different positions separated by 2.286″ ± 0.004″ (the distance from the southern component to the northern reference feature, ≃3400 AU at a distance of 1.5 kpc; see Fig. 1). The northern group of masers is spatially associated with the mid-infrared object MIR-b and shows three spots at velocities of 10.4, 13.7 and 19.6 km s⁻¹, separated by a few centiarcseconds (see Fig. 3, right panel). The southern group is composed of a single spot at a velocity of 9.7 km s⁻¹, spatially associated with the mid-infrared object MIR-c.

Water maser emission
The water maser emission in the region shows high variability, as is typically observed in both low- and high-mass young stellar objects (Reid & Moran 1981; Wilking et al. 1994; Claussen et al. 1996). Gómez et al. (2006), using the Robledo 70 m antenna, detected a water maser spectrum composed of a single spectral feature observed at VLSR = 13.7 km s⁻¹ in 2002, at 7.9 km s⁻¹ in 2003 and at 8.7 km s⁻¹ in 2005. In addition, earlier VLA observations detected two different features, at 15.8 and 17.8 km s⁻¹, in February 2004. In the observations reported here, we do not detect any of the maser spectral features observed in these previous works except the feature at 13.7 km s⁻¹. This component shows a variation in its intensity by a factor of ∼2 between 2005 January 22 and 2005 January 31 (see Fig. 3, left panel). On the other hand, the features at 10.4 and 19.6 km s⁻¹ have not been reported before. The water masers associated with the northern YSO MIR-b are located at a distance ≤100 AU (assuming a distance of 1.5 kpc to the Bok globule) from the compact radio continuum source CB 54 VLA1 that we detect at 1.3 cm, which suggests that this object is the exciting source of the northern group of water masers. Such short separations (≤100 AU) between water masers and their exciting source are typically observed in a large set of low-mass star-forming regions (Chernin 1995; Claussen et al. 1998; Furuya et al. 2000). The right panel of Fig. 3 shows the spatial distribution of the three northern spots obtained on the second day of observation, which corresponds to the data with the best signal-to-noise ratio. All the spots show similar positions on the three days of observation. They delineate a spatial structure of ≃0.06″ (90 AU), elongated in the north-south direction. To ascertain whether the masers are associated with the molecular outflow or with disk material (i.e., whether they trace unbound or bound motions), for each maser component we estimate its velocity (V) with respect to the mean velocity of the maser structure, and we calculate the mass (M) necessary to bind the gas that shows the maser emission as M = V² R G⁻¹, where R is the distance of the maser structure to the central YSO, conservatively taken to be R ≃ 100 AU (this distance to the center is an upper limit). A mass of M ≃ 2.8 M⊙ is enough to bind the gas responsible for the maser features. Since the mass of this source is estimated to be ∼4 M⊙ (Ciardi & Gómez-Martín 2007), we cannot rule out bound motions.
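A minimal sketch of the two small calculations above: the projected separation of the two maser groups and the mass needed to bind the northern spots. The velocity spread V ≈ 5 km s⁻¹ is an assumption (roughly the spread of the 10.4, 13.7 and 19.6 km s⁻¹ spots about their mean), since the text does not quote V explicitly.

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11     # astronomical unit, m
MSUN = 1.989e30   # solar mass, kg

# Projected separation between the two maser groups (arcsec * distance in pc = AU).
theta_arcsec, d_pc = 2.286, 1500.0
print(f"separation: {theta_arcsec * d_pc:.0f} AU")   # ~3400 AU

# Mass required to bind the northern maser spots, M = V^2 R / G,
# with R ~ 100 AU (upper limit used in the text) and an assumed spread V ~ 5 km/s.
v = 5.0e3        # m/s
R = 100 * AU     # m
M_bind = v**2 * R / G / MSUN
print(f"binding mass: {M_bind:.1f} Msun")            # ~2.8 Msun
```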
On the other hand, the spatial association of the southernmost water maser emission with the mid-infrared YSO MIR-c suggests that this object is the pumping source of the southern maser emission. In this case, we do not detect 1.3 cm continuum emission towards this object, with a 3σ upper limit of 0.17 mJy. Furuya et al. (2001) found that water masers in low-mass YSOs are usually excited by Class 0 sources, due to the interaction of powerful jets with a large amount of circumstellar material. Therefore, since sources that host water maser emission are good candidates for being at a very early stage of their evolution, as well as for being the exciting sources of the mass-loss phenomenon, we propose the sources MIR-b (CB 54 VLA1) and MIR-c as the best candidates for the driving engines of the molecular outflows that exist in this Bok globule.

Conclusions
We have presented high angular resolution VLA observations of water masers and continuum emission at 1.3 cm towards the Bok globule CB 54, complemented with VLA archive data in the radio continuum at 3.6 and 6 cm. The main conclusions are the following:
• Our subarcsecond angular resolution observations at 1.3 cm allow us to establish that the radio continuum emission detected to the south-west of the Bok globule is associated with the mid-infrared source MIR-b. The spectral index of the emission between 6 and 1.3 cm is flat, consistent with optically thin free-free emission from ionized gas. A shock-ionization mechanism is needed to account for the radio continuum luminosity observed at centimeter wavelengths.
• Water masers are found in two different regions. The northern group of masers coincides within <100 AU with the source CB 54 VLA1, which is associated with the mid-IR protostar MIR-b and whose position at 1.3 cm is reported in this paper. The southern region of water maser emission is located ∼2.3″ to the SW, towards the position of the faint mid-IR object MIR-c, without detectable radio continuum emission.
• The association of the mid-IR sources MIR-b and MIR-c with water masers confirms the embedded protostellar nature of both objects and suggests that these protostars are the best candidates for driving the molecular outflows observed in the region.

We thank the referee for providing constructive comments that helped improve the contents of this paper. We are thankful to Per Bergman for his suggestions. GA, IdG, JFG, JMT and OS are partially supported by Ministerio de Ciencia e Innovación (Spain), grant AYA 2008-06189-C03 (including FEDER funds), and by Consejería de Innovación, Ciencia y Empresa of Junta de Andalucía (Spain). The work by TBHK was performed in part at the Jet Propulsion Laboratory, under contract between the National Aeronautics and Space Administration and the California Institute of Technology.

Figure caption (3.6 and 6 cm continuum maps): Map of the radio continuum emission observed at 3.6 and 6 cm. Contour levels are −3, 3, 5 and 7 times the rms of the map (0.025 mJy beam⁻¹) for the 3.6 cm data, and −3, 3 and 5 times the rms of the map (0.045 mJy beam⁻¹) for the 6 cm data. Crosses mark the positions of the two groups of water masers detected in this work. The HPBW of the synthesized beam is shown in the lower-right corner of each plot.
Figure caption (water maser spot map, fragment): The (0,0) position is that of the reference feature used for self-calibration, and the size of the crosses represents the relative positional uncertainty with respect to the reference feature (see Table 1). The southern group of water masers shows a single spot at 9.7 km s⁻¹.
Notes to Table 3: (a) Right Ascension and Declination offsets of the peak of each distinct water maser spectral feature with respect to the reference feature used for self-calibration. (c) LSR velocity of the spectral features; the velocity resolution is ∼1.3 km s⁻¹. (d) Reference feature.
Diverse silent chromatin states modulate genome compartmentalization and loop extrusion barriers

The relationships between chromosomal compartmentalization, chromatin state and function are poorly understood. Here by profiling long-range contact frequencies in HCT116 colon cancer cells, we distinguish three silent chromatin states, comprising two types of heterochromatin and a state enriched for H3K9me2 and H2A.Z that exhibits neutral three-dimensional interaction preferences and which, to our knowledge, has not previously been characterized. We find that heterochromatin marked by H3K9me3, HP1α and HP1β correlates with strong compartmentalization. We demonstrate that disruption of DNA methyltransferase activity greatly remodels genome compartmentalization whereby domains lose H3K9me3-HP1α/β binding and acquire the neutrally interacting state while retaining late replication timing. Furthermore, we show that H3K9me3-HP1α/β heterochromatin is permissive to loop extrusion by cohesin but refractory to CTCF binding. Together, our work reveals a dynamic structural and organizational diversity of the silent portion of the genome and establishes connections between the regulation of chromatin state and chromosome organization, including an interplay between DNA methylation, compartmentalization and loop extrusion.

Chromosome organization within the nucleus is associated with vital cellular processes [1][2][3] . The best characterized chromosome-organizing process is loop extrusion. During interphase, cohesin complexes act as motors to extrude progressively growing chromatin loops. In vertebrates, the insulator protein CTCF serves as a directional barrier that halts loop-extruding cohesin [4][5][6][7][8][9] . Independent of loop extrusion, chromosomes are also spatially compartmentalized, with transcriptionally active chromatin located centrally and inactive chromatin more peripherally in the nucleus. As independent organizing processes, perturbing loop extrusion and its barriers does not eliminate compartmentalization [10][11][12][13][14] ; however, the two processes act simultaneously and therefore can interfere with each other 12,15 .
Simulations of chromosome compartmentalization in inverted nuclei have suggested that attraction between heterochromatic loci is a major force driving compartmentalization 16 . Heterochromatin is usually categorized into two types. Facultative heterochromatin, which is considered to be developmentally regulated, is enriched in H3K27me3 (ref. 17), while constitutive heterochromatin is viewed Article https://doi.org/10.1038/s41594-022-00892-7 approximation and projection (UMAP) embedding of the leading eigenvectors ( Fig. 1b and Methods). Furthermore, projecting loci onto the first two eigenvectors (E1 and E2), we notice that GC content and genomic distance from centromere of individual loci vary along almost perpendicular components in the projection (Fig. 1c). A similar pattern is observed in other cell types, suggesting that these two roughly independent gradients are conserved features (Extended Data Fig. 1a). The alignment of GC content to E1 is well known, but the exact relationship differs across cell types 35 . The positional component correlating strongly with E2 reflects the observation that pairs of centromere-proximal and centromere-distal regions show mildly elevated contact frequency throughout the genome (Fig. 1c) 35 . This may be due to known enrichment of interactions between telomeres and/or between centromeres (for example, Rabl configuration), or a relationship between chromosomal and nuclear landmarks during interphase. As a result, we expected that the clustering of interaction profiles using trans Hi-C data would be influenced by chromosomal position independently of chromatin state. To test this idea, we examined subcompartment calls from GM12878 (ref. 4). Indeed, the loci from inactive subcompartments B2 and B3 in GM12878 appear to differ positionally along the E2 axis (Extended Data Fig. 2a-e). Similarly, in HCT116 cells we observe that several pairs of clusters with similar E1 ranges separate along the E2 axis (Fig. 1b). We found that the data can be sensibly partitioned into eight clusters (Methods and Extended Data Fig. 1d,e). To exclude the influence of genomic position, we next examined data obtained with functional genomic assays including publicly available data (Supplementary Table 1) 40,41 . Indeed, several centromere-proximal and distal pairs of clusters showed similar functional profiles, so we consolidated the clusters into a total of five groups, described in detail below. Importantly, since not all interaction profiles imply the existence of spatially or phase-separated subnuclear compartments (see below), we will refer to our consolidated classification as interaction profile groups (IPGs) rather than (sub)-compartments. For simplicity, we have chosen a naming system similar to the one used for GM12878 trans interaction profile clusters (subcompartments), but below we discuss what correspondences can be made. We identified two transcriptionally active IPGs, consistent with previous reports 4 . The first IPG, corresponding to cluster I, has the strongest self-interaction preference in trans, is enriched for the nuclear speckle marker SON and displays the greatest amount of transcriptional activity ( Fig. 1d-f). Its loci have a high degree of overlap with the A1 subcompartment identified in GM12878 cells and thus we termed this IPG A 1 (Extended Data Fig. 1b). In GM12878, subcompartment A2 has been described in more generic terms as domains with weak transcriptional activity. 
Thus, clusters II and III which display weak transcriptional activity and separate along the E2 axis were grouped and classified as A 2 (Fig. 1b,e). Interestingly, the A 2 IPG interacts with the A 1 IPG (heterotypic) at least as strongly as it does with itself (homotypic) (Fig. 1f). The five remaining clusters all display low transcriptional activity and gene density and thus likely constitute inactive chromatin domains (Fig. 1e). Clusters V and VI are both enriched in LaminB1, are late replicating and have intermediate CpG methylation, consistent with the B1 subcompartment label, so we combined them to form an IPG termed B 1 (Fig. 1d). Clusters VII and VIII are both enriched in Protect-seq signal, are late replicating, display the lowest CpG methylation frequency (~50% on average, corresponding to partially methylated domains [42][43][44] ) and have the strongest preference for homotypic contacts in cis (Fig. 1d,f). The majority of loci in these clusters are assigned subcompartment labels B2 and B3 in GM12878 cells and are consistently assigned labels B2/B3 across different cell types based on SNIPER (subcompartment inference using imputed probabilistic expressions) 36 , a supervised model that generalizes the GM12878 labels to other cell types ( Fig. 1g and Extended Data Fig. 1b). However, as more static, is primarily associated with H3K9me3 and forms at centromeres, pericentromeric regions and at telomeres 18 . However, H3K9me3-associated heterochromatin is also found to form large contiguous domains genome-wide that expand in number and size during differentiation from pluripotency 19 . HP1 proteins bind H3K9me3 (reviewed in ref. 20) and can self-oligomerize and recruit H3K9 methyltransferases potentially contributing to heterochromatin compaction 21,22 , spread 23,24 and phase separation [25][26][27] . DNA methylation is associated with both heterochromatin and extrusion barriers. In humans, the DNA methyltransferase DNMT1 physically associates with HP1 proteins suggesting an interplay between DNA and histone methylation 28,29 . CTCF-DNA binding also depends on CpG methylation of the core binding motif [30][31][32][33] . Overall, the regulatory relationships between DNA methylation, CTCF binding and heterochromatin formation are likely critical for cell-type specification but are still poorly understood. Early studies subdivided mammalian genomes on the basis of long-range contact frequencies into two groups or 'compartments', broadly correlating with active and inactive chromatin 34,35 . Higher resolution Hi-C data have shown that this binary classification is too simplistic. Until recently, most of these studies have largely focused on a single deeply sequenced immortalized lymphoid cell line, GM12878 (ref. 4). However, since the Hi-C profile of a single locus depends on the chromatin state of the remainder of the genome, long-range patterns can be difficult to generalize and compare across cell types. Conversely, even when congruences are found where a group of loci share similar interaction profiles in each of two different cell types, there is no guarantee that the underlying chromatin states are identical. Here, we report a detailed investigation of nuclear compartmentalization motivated by the prominent compartmentalization of heterochromatin in HCT116 colon cancer cells. 
We identify three inactive chromatin states having coherent long-range contact profiles, including a state marked by H3K9me2 and the histone variant H2A.Z, which, to our knowledge, has not previously been characterized. We find a strong compartmentalization signature for heterochromatin marked by H3K9me3, HP1α and HP1β and demonstrate that this heterochromatin is lost upon DNA methylation inhibition to yield the H3K9me2-enriched state, dramatically altering genome compartmentalization but not replication timing. Finally, we reveal an interplay between heterochromatin and loop extrusion. Together, our results demonstrate diversity and plasticity in silent chromatin, and their influence on the two major chromosome-organizing processes in interphase. Identifying interaction profiles by spectral decomposition Evidence exists that some cell lines or cell types may have unique nuclear compartmentalization and that this may be linked to the structural differences of distinct states of chromatin [36][37][38] . To this end, we sought to identify groups of loci with similar long-range three-dimensional (3D) interaction profiles in HCT116 cells and to understand their relationship to the chromatin landscape (Fig. 1a). Our method for characterizing interaction profiles leverages the information from trans (interchromosomal) interactions as in ref. 4 but introduces an initial dimensionality reduction step similar to ref. 39. Rather than clustering columns of Hi-C contact matrices directly, we replace the contact frequency data of individual loci with their dimensionally reduced representation (that is, leading eigenvectors; Methods). This representation also facilitates the projection and embedding of genomic loci to allow investigation of the structure of the interaction profile manifold, in which each point corresponds to a 50-kilobase (kb) genomic bin (Fig. 1b). In contrast to the discrete compartment model, we observe that the manifold does not form dense, strongly separated clusters as evidenced by the relatively continuous uniform manifold Fig. 1b). Interestingly, we identified an IPG (cluster IV) with no equivalent in GM12878, whose loci share hallmarks of inactive chromatin (Fig. 1d). Despite low GC content, it exhibits high CpG methylation frequencies and no Protect-seq enrichment (Fig. 1d). This IPG has a distinct 3D interaction profile, showing only modest preference for homotypic contacts (Fig. 1f), suggesting these do not form well-defined spatial subnuclear compartments. However, the regions of this IPG do form large continuous domains, present on many chromosomes (Extended Data Fig. 1c). When these loci are compared with subcompartment labels in other cell types they appear to be either weakly transcriptionally active (A2) or silent (B3) (Extended Data Fig. 1b), suggesting that this IPG could represent a 'poised heterochromatin' that transitions between active and inactive chromatin in different cell types. We termed this IPG B 0 . Epigenomic data support three inactive IPGs in HCT116 To understand the chromatin composition of the IPGs, we examined histone modifications, histone variants and related factors (Fig. 2a). Consistent with B 1 being facultative heterochromatin, these loci are predominantly enriched for H3K27me3, with a mild enrichment in H3K9me2 (Fig. 2b,d). B 0 also displays a subtle enrichment in H3K9me2 and a Hidden Markov Model (HMM) (ChromHMM; Methods) showed that B 0 is almost entirely composed of H3K9me2 without H3K27me3 (Fig. 2b,d and Extended Data Fig. 3a,d). 
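As a concrete illustration of the interaction-profile analysis introduced above (dimensionality reduction of trans contact profiles followed by k-means on the rescaled leading eigenvectors), here is a minimal, schematic sketch. It assumes the input is already a balanced contact matrix with cis pixels masked/replaced as described in the Methods, and it simplifies the row/column rescaling; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def interaction_profile_clusters(trans_matrix, n_eigs=10, n_clusters=8, seed=0):
    """Cluster genomic bins by their long-range (trans) interaction profiles.

    trans_matrix : (n_bins, n_bins) balanced contact matrix in which cis pixels
                   have already been masked/replaced (see Methods).
    Returns the rescaled leading eigenvectors and a cluster label per bin.
    """
    # Simplified normalization: make each row a probability-like profile
    # (zero rows correspond to filtered bins and are left as zeros).
    rowsum = trans_matrix.sum(axis=1, keepdims=True)
    rowsum[rowsum == 0] = 1.0
    M = trans_matrix / rowsum
    M = (M + M.T) / 2                               # symmetrize before decomposition
    w, v = np.linalg.eigh(M)
    order = np.argsort(np.abs(w))[::-1][:n_eigs]    # rank by eigenvalue modulus
    E = v[:, order] * np.abs(w[order])              # rescale eigenvectors
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(E)
    return E, labels
```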
Loci in IPG B 4 are marked with H3K9me3, HP1α and HP1β, consistent with these loci being in a constitutive heterochromatic state (Fig. 2b,d and Extended Data Fig. 4b). Finally, when the E1-E2 projection of loci is colored by H3K27me3 or H3K9me3 an enrichment pattern spans the entire E2 axis, further validating the consolidation of centromere/telomere-proximal cluster pairs into functionally consistent IPGs ( Fig. 2c and Extended Data Fig. 3c). Curiously, in addition to H3K9me2, B 0 also has a mild enrichment for the histone variant H2A.Z (Fig. 2b,d). In humans, hypoacetylated H2A.Z has been reported to coexist with H3K9me2 in broad lamina-associated chromatin domains, suggesting that the B 0 IPG could correspond to a similar type of chromatin [45][46][47] . Moreover, B 0 -like domains that display neutral interaction profiles in Hi-C, late replication timing and broad H2A.Z chromatin modifications can be observed in other cell types including primary cells (Extended Data Fig. 4a). Our A 1 and B 4 IPG assignments (7.5% and 15.9% of the genome, respectively) exhibit the closest correspondence to known euchromatic and heterochromatic chromatin states, respectively. This can be observed using receiver operating characteristic (ROC) curves generated by using thresholded 50-kb binned signal tracks as binary classifiers for individual IPG assignments ( Fig. 2e and Extended Data Fig. 3b). The A 1 label is predicted by the nuclear speckle marker SON with an area under the curve of 0.986, and the B 4 label is predicted by each of H3K9me3, HP1α, HP1β and Protect-seq with area under the curve > 0.992. These close correspondences, coupled with A 1 and B 4 being the most self-interacting IPGs, suggest that homotypic affinity between those marks or associated factors could be drivers of A 1 and B 4 compartmentalization. Other IPGs are less well predicted by any single chromatin modification, even though a particular histone modification may be globally enriched. The lack of contact enrichment between the different inactive IPGs (B 0 , B 1 , B 4 ) suggests that the homotypic interactions are specific to each type (for example, specific bridging proteins) rather than a generic form of interaction common to all inactive chromatin. In summary, we discern three types of inactive chromatin by long-range contact frequencies in HCT116. Notably, none of these types appears to share an epigenetic similarity with the B2/B3 subcompartments described in GM12878 (Extended Data Fig. 2a,b). These results therefore hint at a greater diversity of inactive chromatin types, within and between cell types, than broadly attested. B 4 's chromatin state has varying cell-type abundance Our data show that B 4 domains are enriched for H3K9me3, HP1α and HP1β and have strong homotypic interaction preferences. We next asked whether these properties are conserved in other cell lines. First, we examined enrichments of H3K9me2/3, HP1α/β/γ, H3K27me3 and H2A.Z and binned them into quantiles according to E1 value (Fig. 3a). K562 cells, similar to HCT116 cells, are enriched for H3K9me3, albeit more weakly ( Fig. 3a and Extended Data Fig. 5a,b). In GM12878 cells we observed lower abundance of H3K9me3, and H3K9me3 was also found in active regions. Human embryonic stem cells (H1) have an even lower abundance of H3K9me3 (Fig. 3a), consistent with microscopy data suggesting H1 lacks punctate constitutive heterochromatin 48,49 . 
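The ROC analysis used above treats a single binned signal track as a threshold classifier for membership in one IPG. A minimal sketch with scikit-learn, assuming `signal` and `ipg_labels` are arrays aligned over the same 50-kb bins:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def track_as_classifier(signal, ipg_labels, target="B4"):
    """Use a mean-aggregated signal track (e.g. H3K9me3) as a simple
    threshold classifier for a single IPG label (e.g. B4)."""
    signal = np.asarray(signal, float)
    y_true = (np.asarray(ipg_labels) == target).astype(int)
    keep = np.isfinite(signal)
    fpr, tpr, _ = roc_curve(y_true[keep], signal[keep])
    auc = roc_auc_score(y_true[keep], signal[keep])
    return fpr, tpr, auc

# e.g. fpr, tpr, auc = track_as_classifier(h3k9me3_50kb, ipg_50kb, target="B4")
```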
To understand whether the presence of H3K9me3, HP1α and HP1β was correlated with preferential homotypic interactions, we profiled cis contact frequency between pairs of loci ranked by their E1 eigenvector status and compared this with a ranking by H3K9me3 enrichment. Loci with similar E1 status tend to interact with each other, as expected ( Fig. 3b), and loci that display high levels of H3K9me3 also show particularly high contact frequencies with each other (Fig. 3c and Extended Data Fig. 5c,d). This phenomenon is observed in all cell types even though GM12878 and H1 have a much lower abundance of H3K9me3 loci than HCT116. Loci in the highest H3K9me3 quantiles also show elevated HP1α in all cell types as well as HP1β where data were available (Fig. 3d). We conclude that the presence of H3K9me3 along with HP1α and HP1β is correlated with elevated homotypic contact frequency across cell types regardless of genomic abundance. Additionally, in GM12878 and K562 we also observe a coenrichment of HP1γ with H3K9me3, while HP1γ is anticorrelated with H3K9me3/HP1α in HCT116 (data for H1 were unavailable). HCT116 cells have large ungapped H3K9me3 (B 4 ) domains up to several megabases in length ( Fig. 3e and Extended Data Fig. 6a,b). Taking the largest domains ranked by size for each of the other cell types, we observe that K562 and fibroblasts (HFFc6, IMR90) also exhibit large domains. In GM12878 and H1 cells we observed shorter domains compared with HCT116 and K562. Yet even among the few domains in H1 cells displaying H3K9me3 and HP1α, we observe a tendency to self-interact (Extended Data Fig. 6c). It is noteworthy that, in contrast to cis contact frequency, trans contact frequency between H3K9me3-containing loci is not generally elevated across cell types (Extended Data Fig. 5c,d). These data argue that chromosomal territoriality and/or association with nuclear landmarks (for example, lamina) can limit the extent of interchromosomal contacts between H3K9me3 loci. Finally, the fact that loci with similar E1 values show preferred interactions with each other, across the full range of E1 values, indicates that other factors besides H3K9me3-HP1 can also mediate such interactions (Fig. 3b). Taken together, these data suggest that the constitutive heterochromatin marks, H3K9me3 and HP1, define a homotypically interacting chromatin state, but that the prevalence and distribution of this chromatin state varies substantially across cell types. The exact combination of HP1 homologs and/or posttranslational modifications may govern the abundance and strength of the interactions 50 . H3K9me3-HP1α/β chromatin is depleted for extrusion barriers Besides compartmentalization, another major organizing mechanism in the nucleus is loop extrusion. The signature patterns of loop extrusion are fewer in number and less evident in B 4 domains in HCT116 cells. We therefore wanted to understand why these features are depleted and asked whether it is due to a lack of extrusion by cohesin, a lack of CTCF barriers or both. First, we examined B 4 domains in cells with normal CTCF barriers but without cohesin-extruded loops (that is, cells depleted for Rad21 using an auxin-inducible degron approach) 10 . We looked at the decay of contact probability with genomic separation, P(s), which is indicative of the underlying polymeric folding of the region 51 . 
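In the paper, contact-frequency-versus-distance curves are computed with cooltools; the self-contained sketch below produces the same quantity from a single balanced cis matrix by averaging diagonals and grouping separations into geometrically spaced strata (a simplification of the per-IPG calculation described in the Methods).

```python
import numpy as np

def contact_vs_distance(cis_matrix, bin_size=10_000, n_strata=30):
    """Average contact frequency P(s) versus genomic separation s from one
    balanced cis contact matrix (NaNs mark filtered bins)."""
    n = cis_matrix.shape[0]
    # Mean over each diagonal (separation in bins).
    diag_means = np.array([np.nanmean(np.diagonal(cis_matrix, k)) for k in range(1, n)])
    seps = np.arange(1, n) * bin_size
    # Aggregate diagonals into geometrically spaced strata of separation.
    edges = np.geomspace(bin_size, seps[-1] * 1.01, n_strata + 1)
    idx = np.digitize(seps, edges)
    s_mid, p_s = [], []
    for i in range(1, n_strata + 1):
        sel = idx == i
        if sel.any():
            s_mid.append(np.exp(np.mean(np.log(seps[sel]))))
            p_s.append(np.nanmean(diag_means[sel]))
    return np.array(s_mid), np.array(p_s)
```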
We found that P(s) was affected by depletion of cohesin in all IPGs, including B 4 domains, leading to the disappearance of the characteristic extrusion 'shoulder' in P(s) (Fig. 4a) 52 . Moreover, we found that the shapes of the P(s) derivatives suggest that A 1 and A 2 domains have more loops per kilobase than B 4 and that B 4 has a larger average loop size (Fig. 4a). Second, despite B 4 domains appearing relatively featureless in Hi-C maps, we find that extrusion-related stripes and dots (which disappear upon cohesin depletion) originating outside a domain can sometimes propagate through it, appearing along the periphery of the square (Extended Data Fig. 7a). In the loop extrusion model, this would require the passage of extruded loops through the heterochromatic region, suggesting that heterochromatic regions are traversable by cohesin. To test whether the loop extrusion machinery can traverse B 4 domains, we turned to polymer simulations of loop extrusion in a heterochromatic domain surrounded by tandem CTCF clusters. Stripes extending along the periphery of the B 4 domains failed to appear when translocation of loop extrusion factors into such domains was blocked (Extended Data Fig. 7b). Third, we find that the number and strength of CTCF peaks is depleted in B 4 domains compared with other IPG domains ( Fig. 4b and Extended Data Fig. 7d,e). Concomitantly, we see fewer and weaker insulating loci in Hi-C at B 4 domains (Fig. 4c). Likewise, when we aggregate Hi-C data at CTCF-bound sites we find these sites form stripe-like features and local insulation (Fig. 4d). For CTCF-bound sites in B 4 domains these features are weak compared with those in other IPGs (Fig. 4a). In contrast, when we examine HCT116 B 4 regions in H1 human embryonic stem cells (H1-hESC), where H3K9me3-HP1α/β chromatin is lacking, we do not observe a similar reduction in number, occupancy or insulation of CTCF sites (Extended Data Fig. 7c-e). Altogether, our analysis argues that the low CTCF occupancy of B 4 domains in HCT116 is not intrinsic to the DNA sequence, but rather that B 4 domains in HCT116 are refractory to CTCF occupancy. Finally, we also asked whether the depletions of extrusion features in H3K9me3-HP1α/β regions are conserved across cell types. While we find it generally to be the case, we do find a subset of heterochromatic domains that have both broad H3K9me3 enrichment and late replication timing, but also include extrusion-associated patterns in Hi-C (for example, normal human epidermal keratinocyte (NHEK) cells) (Fig. 4e). We predicted that this subset of domains should have occupied CTCF binding sites at regions of low H3K9me3 saturation. Indeed, the visible TAD boundary loci have lower H3K9me3, are enriched for H2A.Z and display narrow peaks for CTCF as well as marks such as H3K27ac and H3K27me3, suggesting that chromatin tends to be locally decompacted at these sites (Fig. 4e). These data are reminiscent of 'euchromatin islands' previously described as small regions of CTCF occupancy embedded within large heterochromatin domains 53 . The fact that dots and stripes can be detected in NHEK cells that cross domains enriched in H3K9me3 again shows that loop extrusion can traverse heterochromatin. Altogether, these data suggest that the depletion of dots and stripes in B 4 /H3K9me3-HP1α/β is the result of low CTCF occupancy, and not because of an absence of extrusion. The density of extrusion barriers differs across IPG domains, resulting in different average extruded loop sizes (Fig. 4f). 
DNMT perturbation selectively disrupts B 4 compartmentalization Thus far we have defined the properties of H3K9me3-HP1α/β heterochromatin domains. We next wanted to understand how these features contribute to compartmentalization and chromatin state by disrupting these regions. To this end we chose to interrogate a double-knockout DNA-methylation-deficient HCT116 cell line (DNMT3b −/− ;DNMT1 −/− , hereafter referred to as DKO) 54 which has been shown to have defects in H3K9me3 (ref. 55) and HP1α/β deposition 37 , in addition to perturbing DNA methylation in HCT116 cells by treatment with 5-Azacytidine for 48 h (5Aza) (Fig. 5a). In our hands, both conditions reduced DNA methylation compared with HCT116 cells as measured by LC-MS (Fig. 5b). As we have previously shown, in DKO cells only a subset of domains are no longer detected by Protect-seq and no longer display HP1α and H3K9me3 binding, indicating that these domains are no longer in a closed heterochromatic state ( Fig. 5c and Extended Data Fig. 8a) 37 . This shows that not all B 4 domains are equally sensitive to DNMT1/ DNMT3b loss. Interestingly, in the 5Aza-treated cells we find that all H3K9me3-HP1α/β domains show mild but uniform depletion of both Protect-seq signal, and HP1α and H3K9me3 levels (Fig. 5c,d and Extended Data Fig. 8a). To determine if loss of H3K9me3 affected self-affinity, we performed Hi-C on HCT116, DKO and 5Aza-treated cells. We ranked HCT116 B 4 domains by H3K9me3 loss in DKO and split them into those that lose H3K9me3-HP1α/β status in DKO cells (disrupted domains) and those that retain it (persistent domains) (Fig. 5e,f). Hi-C analysis shows striking local defects in B 4 compartmentalization (loss of checkering on the Hi-C map) and a global weakening of B 4 compartmentalization in 5Aza-treated cells (Fig. 5g,h and Extended Data Fig. 8d,f,g). Next, we aimed to investigate the interaction profile acquired by disrupted domains in DKO. Aggregate analysis of contact frequency shows that disrupted domains change to a more neutral interaction profile (Fig. 5h), reminiscent of the interaction profile of B 0 domains. We also examined the chromatin state at disrupted domains in DKO cells using available data for histone modifications and H2A.Z in DKO cells 37,55 . In contrast to persistent domains which maintain an H3K9me3-HP1α/β chromatin state, we find that disrupted domains transition to a chromatin state enriched for H3K9me2 and H2A.Z (Fig. 5i and Extended Data Fig. 8a-c,e), which is characteristic of B 0 domains. Late replication timing persists without H3K9me3-HP1α/β Our data suggest that upon loss of DNA methylation, B 4 domains can lose H3K9me3, HP1 and self-affinity. Replication timing has been proposed to maintain the global epigenetic state in human cells 56 . In turn, histone deposition, HP1 proteins and DNMT1 are associated with chromatin restoration at the replication fork 57,58 . Therefore, we hypothesized that the loss of H3K9me3-HP1α/β heterochromatin in DKO cells would be accompanied by a change in the timing of DNA replication at disrupted domains. To address whether replication timing is altered by the disruption of heterochromatin, we performed two-stage Repli-seq in HCT116 and DKO cells. Surprisingly, we observe similar replication timing profiles between HCT116 and DKO cells ( Fig. 6a and Extended Data Fig. 9a,b), consistent with recent findings using single-cell Repli-seq 59 . 
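Two-stage Repli-seq profiles of the kind compared above are summarized as a z-scored log2 ratio of early over late coverage per bin. A minimal sketch, assuming `early_counts` and `late_counts` are read-count vectors over the same 50-kb bins:

```python
import numpy as np

def replication_timing_track(early_counts, late_counts, pseudocount=1.0):
    """log2(early/late) per bin, depth-normalized and z-scored, in the spirit
    of two-stage Repli-seq processing."""
    early = np.asarray(early_counts, float)
    late = np.asarray(late_counts, float)
    # Normalize for sequencing depth (counts per million).
    early = early / early.sum() * 1e6
    late = late / late.sum() * 1e6
    rt = np.log2((early + pseudocount) / (late + pseudocount))
    return (rt - np.nanmean(rt)) / np.nanstd(rt)
```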
A fine-scale analysis of individual loci further shows that changes in replication timing and changes in the Hi-C E1 eigenvector are uncoupled (Fig. 6b,c). Both persistent and disrupted B 4 domains, which are late replicating in HCT116 cells, remain late replicating in DKO cells (Fig. 6b,e). Importantly, we do not see major early/late replication timing differences within disrupted B 4 regions (that is, that lose H3K9me3 and HP1 and cease to compartmentalize in DKO cells) or within regions where H3K9me3 and HP1 were gained in DKO (Fig. 6a,d). We further identified regions of differential replication timing and we find that those regions which transition to early replication timing in DKO correlate with loss in H3K27me3, but not H3K9me3 (Extended Data Fig. 9c,d). We find that replication timing in regions labeled B 4 in HCT116 is surprisingly insensitive to the presence or absence of H3K9me3-HP1α/β, despite the necessity of the H3K9me3-HP1α/β chromatin state for B 4 compartmentalization integrity. The fact that late replication is maintained in the absence of epigenetic and 3D signatures of heterochromatin implies that H3K9me3 and HP1 are not uniquely required to suppress the early onset of DNA replication and suggests alternative or compensatory mechanisms for maintaining late replication timing at disrupted domains. Motivated by this possibility, we investigated Hi-C and multistage (16-fraction) Repli-seq data from a recent study on the replication timing regulatory factor RIF1 (refs. 56,60). We found that while replication timing globally loses precision in the absence of RIF1, B 4 domains preserve very late replication timing (S12-S16 fractions) while B 0 domains shift from being moderately late in the wild type to predominantly early (Extended Data Fig. 9e,f). This suggests that the B 0 -associated chromatin state depends on RIF1 for its late replication timing. Overall, these results support that disrupted B 4 domains in DKO cells transition to the late replicating silent chromatin state associated with the B 0 IPG. H3K9me3-HP1α/β heterochromatin suppresses CTCF binding sites Our work thus far suggests that H3K9me3-HP1α/β domains cosegregate in the nucleus and permit loop extrusion, but are depleted in extrusion barriers. One striking observation in Hi-C data obtained with DKO and 5Aza-treated cells is the emergence of loop extrusion features (that is, extrusion barriers) in H3K9me3-HP1α/β domains, compared with HCT116 (Fig. 7a). Moreover, we observe an increase in insulating loci in all IPGs, suggesting that this is not limited to H3K9me3-HP1α/β domains but rather is a global phenotype (Extended Data Fig. 10b,c). Next, we aimed to understand the mechanism behind the gain of extrusion barriers. It has been shown that CTCF binding to DNA can be blocked by DNA methylation 30,31 , and genome-wide loss of DNA methylation has been shown to increase CTCF occupancy at CpG-containing motifs (termed reactivated CTCF sites) 61 . Hence, we hypothesized that new loop extrusion features seen in DKO and 5Aza-treated cells are due to reactivated CTCF sites. To confirm that loss of DNA methylation reactivates cryptic CTCF sites, we performed chromatin immunoprecipitation (ChIP) followed by sequencing (ChIP-seq) in HCT116, DKO and 5Aza-treated cells. To identify high-confidence reactivated CTCF peaks, we chose overlapping reactivated CTCF peaks from DKO (this study), DKO (ref. 61) and 5Aza (this study) not present in HCT116 (n = 1,050) (Extended Data Fig. 10a,d). 
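The high-confidence reactivated CTCF set described above is an intersection of peaks gained in the demethylation conditions and absent from parental HCT116. A minimal sketch with bioframe, assuming each peak set is a pandas DataFrame with chrom/start/end columns and that `count_overlaps` adds a `count` column (the exact column name is an assumption about the bioframe version):

```python
import bioframe

def reactivated_ctcf_peaks(dko_peaks, aza_peaks, dko_published, hct116_peaks):
    """Reactivated CTCF sites: present in both demethylation conditions
    (and in the published DKO set) but absent from untreated HCT116."""
    # Keep DKO peaks that overlap a 5Aza peak and a published DKO peak.
    shared = dko_peaks.copy()
    shared = shared[bioframe.count_overlaps(shared, aza_peaks)["count"] > 0]
    shared = shared[bioframe.count_overlaps(shared, dko_published)["count"] > 0]
    # Drop anything already occupied in untreated HCT116.
    reactivated = shared[bioframe.count_overlaps(shared, hct116_peaks)["count"] == 0]
    return reactivated.reset_index(drop=True)
```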
Reactivated CTCF sites are present in all IPGs, consistent with our observation that the increase in extrusion barriers occurs globally (Fig. 7b). In accordance with the role of CTCF as a barrier to loop extrusion, we also see an enrichment of cohesin complex factors RAD21 and SMC3 at reactivated CTCF sites only in DKO and 5Aza-treated cells (Fig. 7b and Extended Data Fig. 10d,e). To further demonstrate that reactivated CTCF sites are functional as extrusion barriers, we generated aggregate heatmaps of Hi-C contact frequency centered at reactivated CTCF sites for each IPG (Extended Data Fig. 10b). As expected, we observe an increase in insulation in DKO and 5Aza compared with HCT116. In sum, these data support that loss of DNA methylation leads to the emergence of functional CTCF sites which can act as barriers to stall loop-extruding cohesin complexes. To further investigate the genome-wide patterns of CTCF reactivation, we profiled DNA methylation, chromatin inaccessibility and histone modifications in relation to IPGs. To our surprise, reactivated CTCF motifs within B 4 regions lack CpG methylation in normal untreated HCT116 cells, in contrast to motifs in all other IPGs ( Fig. 7c and Extended Data Fig. 10f). These data suggest that DNA methylation could regulate CTCF via two mechanisms: direct and indirect. The direct mechanism relies on canonical CpG methylation within the core motif 30,31,33,61-63 , while the indirect mode of regulation within B 4 is likely independent of motif methylation. Consistent with this observation, CTCF motifs within B 4 contain lower CpG dinucleotide frequencies than the consensus core motif (Extended Data Fig. 10g). We speculate that this mechanism acts through nucleosome occlusion, which is consistent with the strong H3K9me3, Protect-seq and HP1α/ HP1β signal directly over the CTCF motif ( Fig. 7d and Extended Data Fig. 10e). In agreement with our results, increased CTCF occupancy was observed in Setdb1-deficient mouse neurons 64 , and a similar 5-methylcytosine (5mC)/nucleosome occlusion model has been proposed to regulate CTCF binding in mouse embryonic stem cells 65,66 . Discussion Our study demonstrates a remarkable cell-type-related diversity in inactive chromatin and its relationship to 3D genome organization. In HCT116, each of the three inactive IPGs exhibits a distinct chromatin state, Protect-seq signal and DNA methylation status, and displays differences in homotypic affinity and the regulation of loop extrusion barriers ( Table 1). The existence of cell-type-specific chromatin and contact frequency profiles highlights the need for de novo assessment of any given cell type. Our approach identified the B 0 IPG in HCT116 cells which is not observed in GM12878 cells, forming large domains that do not display strong homotypic interactions. Yet another inactive chromatin state appears to underlie the B2/B3 subcompartments in GM12878 and remains poorly characterized. Notably, the features originally reported as enriched in B2 and B3 came from dissimilar cell types: HeLa 67 , HT1080 fibrosarcoma 68 and skin fibroblasts 69 . Elucidating the molecular intermediates determining the behavior of known and novel IPGs will require a combination of unsupervised techniques and deep chromatin profiling [70][71][72] . Our results reveal striking connections between DNA methylation, H3K9me3 and HP1 deposition, and 3D chromosome organization at the level of chromosome compartmentalization and loop extrusion. influenced by the epigenome is not well understood. 
As loop extrusion has been shown to reduce the strength of compartmentalization and interfere with the segregation of short compartmental domains 10,12,14,15 , our results represent a complementary phenomenon: strongly compartmentalizing heterochromatin suppressing the imposition of extrusion barriers (CTCF-bound sites) while remaining permissive to extrusion. These results highlight the two-way interplay between compartmentalization and extrusion. The classic definition of heterochromatin originated from staining mitotic chromosomes 74 and later came to be associated with histone modifications 75 . We now have a more nuanced understanding of the molecular details, including several types of repressive histone modifications and associated proteins and their genomic distributions across cell types. Our work begins to unravel the diversity and plasticity in silent chromatin and its influence on genome compartmentalization, nuclear architecture and other chromosome-organizing processes. Online content Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/ s41594-022-00892-7. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Cell culture HCT116 and DKO cells were cultured in McCoy5A medium. DKO cells were grown in the presence of G418, geneticin. All media were supplemented with 10% FBS at 37 °C and 5% CO 2 . For drug treatment, HCT116 cells were treated with 5 µM 5Aza for 48 h, then washed with 1 × PBS before collection. Crosslinking and nuclei preparation Cells were grown to ∼75% confluency, collected with trypsin, washed in 1× PBS and frozen/stored at −80 °C. Thawed cells were fixed in 1% formaldehyde and quenched in 0.125 M glycine, then washed twice in 1 × PBS. Fixed cells were then resuspended in 500 µl of lysis buffer (50 mM Tris-HCl pH 8.0, 10 mM NaCl, 0.2% NP40, 1 × protease inhibitor cocktail (PIC)) for 30 min on ice with periodic resuspension. Lysed cells were spun at 3,500 r.p.m. for 3 min and resuspended in 300 µl of 1 × NEB buffer 2, spun and resuspended in 198 µl of 1 × NEB buffer 2. Next, 2 µl of 10% SDS was added and incubated at 65 °C for 10 min. Afterwards, 400 µl of 1 × NEB buffer 2 and 60 µl of 10% Triton X-100 were added to quench the SDS. Samples were incubated at 37 °C for 15 min. Nuclei were spun at 3,500 r.p.m. for 3 min and resuspended in 300 µl of 1 × NEB buffer 2, and the wash step repeated. Protect-seq protocol The Protect-seq protocol was performed as described in ref. 37. Pelleted nuclei were resuspended in 183 µl of DNaseI Buffer, then 2 µl of 100 mM Ca 2+ (1 mM final), 5 µl of DNaseI (10 U), 5 µl of MNase (10,000 U) and 5 µl of RNase A (20 mg ml −1 ) were added (200-µl final volume). Cells plus the enzyme cocktail were incubated at room temperature (also works at 37 °C) for 30 min. Digested cells were spun at 3,500 r.p.m. for 3 min and resuspended in 400 µl of 1 × NEB buffer 2, then rotated at room temperature for 15 min. Digested/wash no. 1 cells were spun at 5,000 r.p.m. for 3 min and resuspended in the same 200 µl of cocktail mix and incubated again at room temperature (or 37 °C) for 30 min. Digested cells no. 2 were spun at 10,000 r.p.m. 
for 3 min and resuspended in 400 µl of 1 × NEB buffer 2, then rotated at room temperature for 15 min (save aliquot for microscopy). Then we spun digested cells no. 2 at 10,000 r.p.m. for 3 min and resuspended in 200 µl of 1 × NEB buffer 2, 20 µl of Proteinase K (SDS optional). They were digested overnight at 65 °C then purified using phenol/chloroform and ethanol precipitation (compatible with silica-bead purification). Illumina library preparation DNA was quantified with Qubit (high-sensitivity) and sonicated using Covaris 50-µl, 300-bp protocol. Illumina libraries were prepared using the NEB Ultra II DNA library kit using the manufacturer's protocol. We used 4-5 PCR cycles to amplify next-generation sequencing (NGS) libraries and index samples. In situ Hi-C The Hi-C protocol was performed similarly to ref. 4. In brief, fixed nuclei were isolated and digested with MboI (NEB no. R0147M), 5′ overhangs were filled-in with a biotinylated nucleotide, blunt-ends were ligated, followed by reverse crosslinking overnight. The purified DNA (2 µg) was sonicated using Covaris 50-µl, 400-bp protocol. The sonicated DNA was brought to a volume of 400 µl in binding buffer (5 mM Tris-HCl pH 7.5; 0.5 mM EDTA; 1 M NaCl) and mixed with 20 µl of streptavidin magnetic beads (NEB no. S1421) and rotated for 1 h at room temperature. The bead-bound DNA was washed twice with 400 µl of low-TE (10 mM Tris-HCl (pH 8.0) + 0.1 mM EDTA) and resuspended in 50 µl of low-TE. Next-generation sequencing (NGS) libraries were prepared using NEB DNA Ultra II kit (NEB no. E7645). End prep: mixed 50 µl of sample with 7 µl of End prep buffer and 3 µl of End prep enzyme, incubated for 30 min at room temperature then 30 min at 65 °C, washed twice with 400 µl of low-TE and resuspended in 60 µl of low-TE. Adapter ligation: 2.5 µl of adapter and 30 µl of ligation mix were incubated at room temperature for 1-3 h, washed twice with low-TE and resuspended in 90 µl of low-TE; following ligation, 3 µl of USER was added for 30 min at 37 °C, washed twice with 400 µl and resuspended in 15 µl. PCR: added 5 µl of universal F and index R primer, 25 µl of Q5 mix, 15 µl of sample for 5 PCR cycles. Libraries were purified with SPRI beads (0.9×) and quantified on a bioanalyzer and with NEB Illumina Quant kit (NEB no. E7630). Hi-C libraries were sequenced on a NextSeq500, either 150-bp or 75-bp paired-end reads. ChIP experiments SimpleChIP Plus Enzymatic Chromatin IP Kit (Magnetic Beads) no. 9005 from Cell Signaling Technologies was used for all ChIP-seq experiments, using the manufacturer's recommended protocol. We used 4 million cells per immunoprecipitation. Digested chromatin was pooled into a single tube for brief sonication to lyse nuclei. Supernatant was then split evenly between immunoprecipitations (minus 2% input). Antibodies and chromatin were incubated overnight at 4 °C, rotating. DNA was purified using spin columns and prepared using NEB Ultra II DNA Library Kit. Repli-seq Repli-seq was performed and analyzed as described in ref. 78. In brief, cells were pulsed with 100 µM BrdU for 2 h, trypsinized, ethanol fixed, stained with propidium iodide and FACS sorted (SONY SH-800) based on DNA content (early S versus late S). Genomic DNA was purified using Zymo DNA Clean & Concentrator and sonicated on a Covaris (S2) using the 300-bp, 50-µl protocol. Libraries were made with Ultra II DNA kits from NEB and sequenced on an Illumina miSeq and/or nextSeq. Computational analysis Hi-C data processing. 
Hi-C libraries were trimmed with the fastp package 79 to remove low-quality reads and sequencing adapters. Hi-C datasets were processed using the distiller pipeline (https://github. com/open2c/distiller-nf) written for nextflow 80 . Briefly, we mapped Hi-C sequencing reads to the human reference assembly hg38 using bwa mem (ref. 81) with flags -SP. Alignments were parsed, filtered for duplicates and pairs were classified using the pairtools package (https://github.com/open2c/pairtools). Hi-C pairs were aggregated into contact matrices in the cooler format using the cooler package at multiple resolutions 82 . All contact matrices were normalized using the iterative correction procedure 35 after bin-level filtering. ChIP-seq and Protect-seq data processing. All ChIP-seq data, including data from ref. 55 and ref. 61 but excluding those obtained from the ENCODE portal, were processed following the steps of the ENCODE ChIP-seq pipeline (https://github.com/ENCODE-DCC/ chip-seq-pipeline2) with slight modifications using a simplified custom snakemake workflow. Briefly, reads were mapped to hg38 using bwa mem (ref. 81). Alignment files (BAM format) were filtered for quality and duplicates using the samtools and Picard packages 83 . Cross-correlation analysis and fragment length estimation for single-ended datasets were performed using the phantompeakqualtools package 84 . Signal track (target over input) generation was performed using MACS2 (ref. 85). For CTCF, a motif instance was assigned to each ChIP-seq peak by scanning the core motif PWM ( JASPAR MA0139.1) using gimmemotifs (ref. 86). Protect-seq data were mapped following the same procedure to produce signal tracks (treatment over input). Repli-seq data processing. Two-stage Repli-seq reads were processed following the protocol described in ref. 78. Replicates were merged to produce signal tracks of log 2 count-normalized ratios of early divided by late fractions binned at 50-kb resolution. Tracks were then normalized by z-score transformation. Article https://doi.org/10.1038/s41594-022-00892-7 Spectral analysis. To characterize long-range interaction profiles, 50-kb resolution Hi-C maps were dimensionally reduced by applying global eigendecomposition on trans contact frequencies. First, we manually identified and excluded three large translocated segments in HCT116 based on published karyotype analysis 87 narrowed down by visual inspection of Hi-C data in HiGlass 88 . Structural variations in DKO, on the other hand, were too widespread to systematically exclude so DKO clustering results were omitted from this study. Next, to mask the influence of cis data, we followed the same procedure described in ref. 35, where cis pixels in the contact matrix are replaced with randomly sampled pixels from the same row or column. The resulting matrix was then re-balanced and scaled such that rows and columns summed to 1. Finally, the leading eigenvalues and associated eigenvectors of this matrix were then calculated using the eigsh routine from numpy, in descending order of eigenvalue modulus (that is, not respecting algebraic sign). We describe our clustering method in more detail in the Supplementary Note. In summary, m leading eigenvectors were rescaled and concatenated as columns, and k-means clustering was applied to the rows using scikit-learn. We produced cluster assignments for a range of k for Hi-C maps of GM12878 (ref. 4), and both unsynchronized untreated and unsynchronized 6-h Auxin-treated Rad21-AID HCT116 (ref. 
10), calculated silhouette scores (Extended Data Fig. 1) and visually compared cluster profiles with a large number of independent genomic tracks. The final number of clusters was chosen based on a balance of clustering metrics and interpretability. For visualization of the approximate manifold structure, further dimensionality reduction on the m leading eigenvectors was performed using UMAP 89 . Additionally, direct visual inspection of the unreduced eigenvector subspaces (pairwise) and related genomic and functional data proved to be indispensable for interpretability of clusters (see below). Rasterized scatter plots. The new matplotlib (ref. 90) extension for the data graphics pipeline datashader (ref. 91) (dsshow function) (https:// datashader.org) was used to generate scatter plot visualizations of points representing 50-kb genomic bins. The datashader pipeline is used to prevent overplotting dense point clouds by aggregating points onto a regular two-dimensional grid and either (1) color-mapping the resulting raster to associated quantitative values (for example, point count, mean value) or (2) displaying associated color-coded categorical values (cluster labels, chromosome and so on) via image compositing. ChromHMM state assignment. We ran ChromHMM (ref. 92) to create epigenomic segmentations for HCT116 and DKO using bam files for ChIP-seq of broad marks/factors HP1a, HP1b, H3K9me3 and H3K27me3. For HCT116, we also included data for SON tyramide signal amplification sequencing (TSA-seq) 93 . Tracks were binarized at 50 kb using BinarizeBam and were modified to ignore bins filtered in Hi-C data. Models were trained using 50-kb bins (LearnModel -b 50000) for a range of state numbers. A seven-state model was chosen for HCT116. For DKO, a six-state model was able to qualitatively capture the same repressive states based on emission parameters (with only a single active state, since TSA-seq was not available to discriminate between two active states). 77 were also lifted over to hg38 using Crossmap. All data were filtered for CpG context to exclude liftover base changes. A custom script was used to aggregate records into 50-kb bins and calculate the cumulative methylation fraction from CpGs divided by total number of CpGs per bin. Chromatin state analysis. A gene quantification Functional profiles for spectral clusters (as in Fig. 1d, and averages in Fig. 2b) were derived from categorical or mean-aggregated quantitative signal tracks (distance from centromere, LaminB1 DNA adenine methyltransferase identification and sequencing (DamID-seq), SON TSA-seq, Protect-seq, Repli-seq, whole genome bisulfite sequencing (WGBS), ChIP-seq) at 50-kb resolution to match the resolution of IPG analysis. IPG domain metaplots and stacked signal heatmaps were generated from BigWig files using the pybbi package (https://github. com/nvictus/pybbi). Unscaled stacked heatmaps were defined using the domain midpoints as a reference point flanked by a fixed genomic distance left and right, while rescaled stacked heatmaps were generated by independently partitioning the intradomain signal and flanking regions into a fixed number of bins. Metaplots were generated by averaging rescaled heatmaps vertically. Sankey plots were generated by using ChromHMM segmentation maps from DKO cells. Chromatin states were intersected against disrupted domains using bioframe. Next, total base pairs overlapped for each chromatin state were counted. Sankey plots were generated using plotly. ROC curves. 
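Returning to the spectral clustering just described, a minimal sketch of the k-means step with silhouette scoring is given below. The eigenvectors are simulated stand-ins rather than real Hi-C output, and weighting each eigenvector by its eigenvalue is only one plausible rescaling; the paper's exact rescaling is described in its Supplementary Note.

```python
# Minimal sketch of the clustering step: k-means on rescaled leading eigenvectors
# of the trans contact matrix, scored with silhouette for a range of k.
# The eigenvectors and eigenvalues below are simulated stand-ins, not real Hi-C output.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Stand-in for m leading eigenvectors over n 50-kb bins (rows = bins).
n_bins, m = 2000, 10
eigvecs = rng.normal(size=(n_bins, m))
eigvals = np.linspace(1.0, 0.1, m)          # assumed descending eigenvalue moduli

# Rescale each eigenvector (one simple choice: weight by its eigenvalue),
# then use the rows as feature vectors for clustering.
X = eigvecs * eigvals

scores = {}
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    scores[k] = silhouette_score(X, km.labels_)

best_k = max(scores, key=scores.get)
print("silhouette by k:", {k: round(v, 3) for k, v in scores.items()})
print("k with highest silhouette:", best_k)
```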
To assess the correspondence of individual signal tracks to IPG assignments derived from Hi-C data, we treated each mean-aggregated 50-kb resolution track as a binary classifier to predict a given IPG label (one of A 1 , A 2 , B 0 , B 1 , B 4 ) by applying a simple value-based discrimination threshold on the signal track. ROC curves and area under ROC for these classifiers were calculated using scikit-learn. Curves that dip below the diagonal indicate thres holds with predictive power for the complement of the target label (for example, 'not A 1 '). Quantile-based ChIP-seq histograms and Hi-C summary maps. The 50-kb-resolution ChIP-seq tracks were grouped into percentiles of either E1 signal or H3K9me3 signal to generate histograms and standard deviation envelopes. Expected contact frequency versus distance profiles were generated using cooltools (ref. 97) (https://github.com/open2c/ cooltools) and bivariate summary maps of observed/expected contact frequency (also known as saddle plots) using percentiles of either E1 or H3K9me3 signal as bins were also generated using cooltools. H3K9me3 domain calling. Domains defined by broad H3K9me3 ChIPseq enrichment across six cell types (HCT116, HFFc6, IMR90, K562, GM12878, H1-hESC) were called using an HMM procedure. H3K9me3 ChIP-seq bigwigs were mean-aggregated at 25 kb, log-transformed and z-scored, and binarized with a threshold of 1, and were used to train a two-state Bernoulli HMM using Pomegranate. Smoothed runs of 1 s from the Viterbi parses were used to define domains. P(s) curves per IPG. Scaling curves of contact frequency P as a function of genomic separation s were generated using cooltools by aggregating normalized contact frequency over valid pixels along diagonals of 10-kb-resolution cis contact maps limited to IPG domains, with diagonals grouped into geometrically increasing strata of genomic separation. Average contact frequency P(s) curves are displayed using log-log axes. 98 were calculated on 25-kb-resolution Hi-C maps with a 100-kb sliding window using the cooltools package. Additionally, an insulation minimum calling Hi-C pileup maps. The cooltools package was used to calculate aggregate observed-over-expected contact frequency maps (pileup maps) centered at CTCF sites and bounded by a fixed flanking genomic distance. Pileup maps are centered on the main diagonal at each feature's midpoint. Replication timing domain analysis. To identify early and late replicating domains, a 25-kb binned pandas dataframe was generated using bioframe. HCT116 and DKO replication timing signal tracks were imported into the binned dataframe using pybbi. Missing values were represented as Not a Number (NaN). Domains were identified with a two-state Gaussian HMM using Pomegranate 99 . Viterbi state calls were made on a per bin basis and used for downstream analysis. Neighboring states were merged to create domains then converted to bed files (https://github.com/gspracklin/hmm_bigwigs). Differential replication timing loci were identified by applying a cutoff of 0.75 on the difference between HCT116 and DKO 50-kb z-score tracks. Differentially timed loci separated by up to 250 kb were then merged into larger intervals using bioframe.cluster to produce 199 differentially timed regions. Polymer simulations. Simulations were created using the Polychrom library 100 . The polymer simulations ran using the OpenMM engine for GPU-assisted molecular dynamics simulations 101 . 
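Stepping back to the ROC analysis described above, a minimal sketch of treating one mean-aggregated 50-kb track as a score for a single binary IPG label is shown below. Both the track and the label assignments are simulated stand-ins, and the 20% label frequency is arbitrary.

```python
# Minimal sketch of the ROC evaluation: one 50-kb signal track is treated as a
# score for a binary "is this bin in IPG label X?" classifier.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

n_bins = 5000
is_target_ipg = rng.random(n_bins) < 0.2             # hypothetical binary IPG assignment
# Simulated track: higher signal on average inside the target IPG.
track = rng.normal(loc=is_target_ipg.astype(float), scale=1.0)

fpr, tpr, thresholds = roc_curve(is_target_ipg, track)
auc = roc_auc_score(is_target_ipg, track)
print(f"area under ROC: {auc:.3f}")

# An AUC well below 0.5 would indicate the track is predictive of the *complement*
# of the label (e.g. "not A1"), matching the dips below the diagonal noted in the text.
```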
Each simulation modeled 8-11 megabases (Mb) of chromatin fiber as a chain of 1-kb monomers, and included five copies of the system inside the same container. Each simulation was run for 500,000,000 molecular dynamics steps. Periodic boundary conditions were used to maintain a density of 0.2 monomers per cubic nanometer. The following energies are in terms of kT (the Boltzmann constant times absolute temperature), and distances are measured in terms of the diameter of the monomers, which is 20 nm. Adjacent monomers on the chain are connected by a harmonic bond with potential U = 100(r − 1) 2 , where r is the distance between the centers of the monomers. Polymer stiffness is modeled by U = S(1 − cos(α)), a force dependent on the angle α formed by three adjacent monomers, and S is a stiffness parameter equal to 1.5. To model loop extrusion, loop-extruding factors (LEFs) were probabilistically loaded onto the polymer chain at uniformly random positions. Each LEF is represented by a harmonic bond equivalent to the one that connects adjacent monomers on the chain. Each step of one-dimensional (1D) dynamics corresponded to 400 molecular dynamics steps. An LEF with an upstream leg at monomer i will stay at i with probability ½ and move to i − 1 with probability ½ each step, unless i − 1 is occupied by an LEF or a CTCF. Similarly, a downstream leg at monomer j will stay at j with probability ½ and move to j + 1 with probability ½, unless j + 1 is occupied by an LEF or CTCF. CTCF sites were placed at fold-change peaks in HCT116 CTCF ChIP-seq (ENCODE ID ENCFF549PGC), with directionality according to CTCF motifs (from ref. 61). Each CTCF had a capture probability of min((fc − 1)/fc med ,1), where fc is the CTCF fold change and fc med is the median CTCF fold change over the region. Legs were released from CTCFs with a probability of 0.006 each monomer step. Each LEF was unloaded with a probability of 1/100 each step of 1D dynamics, and LEFs were separated by an average of 600 monomers. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The references and accession numbers of published data used and analyzed in this work are indicated in Supplementary Table 1. Code availability A snakemake workflow for spectral decomposition, clustering and embedding is available at https://github.com/open2c/inspectro. Additional scripts and notebooks used to process the data in our study are available at https://github.com/mirnylab/heterochromatin-paper. Fig. 5 | Comparative analysis of compartmentalization and heterochromatin marks. Comparative analysis of genome organization and heterochromatic marks across HCT116, HFFc6, IMR90, K562, GM12878 and H1-hESC. (a) Histograms of ChIP-seq signal for repressive histone marks as in Fig. 3a based on eigenvector (E1) percentile and displayed in ascending order of E1 rank. Includes additional histograms for E1 and E2 (top) and data for two additional cell types: lung fibroblasts IMR-90 and foreskin fibroblasts HFFc6. (b) Histograms of ChIP-seq signal for repressive histone marks as in Fig. 3d based on H3K9me3 percentile and displayed in descending order of H3K9me3 rank. Includes additional histograms for E1 and E2 (top) and data for IMR-90 and HFFc6. (c) Bivariate summary maps of cis observed/expected contact frequency as in Fig. 3b, c based on E1 percentile in ascending order (top) and H3K9me3 percentile in descending order (bottom). 
(d) Bivariate summary maps as in (c) but describing observed/expected contact frequency in trans. In K562, GM12878 and H1 cells, loci with low/negative E1 values still prefer to interact with other loci with similar E1 values, even though in these cells most of these loci do not display strong H3K9me3-HP1 enrichment.
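Returning to the loop-extrusion model in the simulation methods, the 1D dynamics of the loop-extruding factors can be sketched as follows. This is a simplified illustration: CTCF capture (probability min((fc - 1)/fc_med, 1)) and release (probability 0.006 per monomer step) are omitted, and the lattice length and number of steps are illustrative; the stepping, blocking, unloading and mean-separation rules follow the values quoted above.

```python
# Minimal sketch of the 1D loop-extrusion dynamics: probabilistic LEF stepping,
# blocking by other LEFs, stochastic unloading and reloading. CTCF capture and
# release are omitted for brevity.
import numpy as np

rng = np.random.default_rng(2)

N = 6000                      # monomers (illustrative)
n_lefs = N // 600             # average separation of 600 monomers
unload_prob = 1.0 / 100       # per step of 1D dynamics

def random_load(occupied):
    """Load a LEF at a uniformly random unoccupied position (both legs adjacent)."""
    while True:
        i = rng.integers(1, N - 1)
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
            return [i, i + 1]            # [upstream leg, downstream leg]

occupied = np.zeros(N, dtype=bool)
lefs = [random_load(occupied) for _ in range(n_lefs)]

for step in range(10_000):
    for lef in lefs:
        # Stochastic unloading, followed by reloading elsewhere.
        if rng.random() < unload_prob:
            occupied[lef[0]] = occupied[lef[1]] = False
            lef[:] = random_load(occupied)
            continue
        up, down = lef
        # Upstream leg moves to i - 1 with probability 1/2 unless blocked.
        if rng.random() < 0.5 and up - 1 >= 0 and not occupied[up - 1]:
            occupied[up], occupied[up - 1] = False, True
            lef[0] = up - 1
        # Downstream leg moves to j + 1 with probability 1/2 unless blocked.
        if rng.random() < 0.5 and down + 1 < N and not occupied[down + 1]:
            occupied[down], occupied[down + 1] = False, True
            lef[1] = down + 1

loop_sizes = [d - u for u, d in lefs]
print("mean loop size after 10,000 steps:", np.mean(loop_sizes))
```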
2022-12-24T16:19:09.939Z
2022-12-22T00:00:00.000
{ "year": 2022, "sha1": "232716abc53c7d2f41a0fa57eeba851a68708e17", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41594-022-00892-7.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "71fb61b99cf7cc7ab6b517f64ea5f43081861691", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12890887
pes2o/s2orc
v3-fos-license
Cellular Multi-User Two-Way MIMO AF Relaying via Signal Space Alignment: Minimum Weighted SINR Maximization In this paper, we consider linear MIMO transceiver design for a cellular two-way amplify-and-forward relaying system consisting of a single multi-antenna base station, a single multi-antenna relay station, and multiple multi-antenna mobile stations (MSs). Due to the two-way transmission, the MSs could suffer from tremendous multi-user interference. We apply an interference management model exploiting signal space alignment and propose a transceiver design algorithm, which allows for alleviating the loss in spectral efficiency due to half-duplex operation and providing flexible performance optimization accounting for each user's quality of service priorities. Numerical comparisons to conventional two-way relaying schemes based on bidirectional channel inversion and spatial division multiple access-only processing show that the proposed scheme achieves superior error rate and average data rate performance. (Author note: E. Chiu and V. K. N. Lau are with the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong (e-mail: echiua@ieee.org and eeknlau@ust.hk). This work is funded by RGC 614910. The results in this paper were presented in part at IEEE ICC '12, Jun. 2012.) I. INTRODUCTION The use of relays to improve link reliability and coverage of cellular wireless communication systems has attracted significant research interest since the pioneering works [1]-[4], and various relaying protocols are embraced by state-of-the-art and next-generation commercial standards [5]-[7]. In practice, most relaying protocols operate in a half-duplex manner and transmission is divided into two phases using orthogonal channel accesses: in the first phase the source node broadcasts its message, and in the second phase the relay station (RS) forwards the source message to the destination node. The deficiency of half-duplex relaying is that when there is no direct link between the source and destination nodes, the end-to-end transmission can only achieve half the degrees of freedom (DoF) of the channel. Two-way relaying is a promising means to alleviate the loss in spectral efficiency due to half-duplex operation. Specifically, given a pair of terminal nodes that are to exchange data, we allow transmissions in both directions to occur concurrently and reduce the total transmission time by half. The bidirectional transmissions will mutually interfere with each other; nonetheless, we can mitigate the impact of interference by employing spatial division multiple access (SDMA) processing at the RS [8], or by applying the principles of analogue network coding (ANC) [9] or physical layer network coding (PNC) [10]. By means of ANC, the RS performs amplify-and-forward (AF) relaying, and the terminal nodes utilize the a priori knowledge of their own transmitted signals to cancel the self-induced backward propagated interference. On the other hand, by means of PNC the RS attempts to decode-and-forward (DF) the network coded version of the terminal node signals, and the terminal nodes utilize the knowledge of their own signals to decode the network code. Note that PNC has strict feasibility requirements for the precoding and modulation and coding (MCS) schemes used at the terminal nodes (cf. [10, Proposition 1]). It is, however, nontrivial to extend the two-way relaying protocol to cellular multi-user systems.
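As a small illustration of the ANC principle just described, the scalar sketch below shows a terminal cancelling the backward-propagated copy of its own symbol before detecting its partner's symbol. The channel coefficients, relay gain and noise level are made-up values, and perfect knowledge of the effective two-hop channel (plus channel reciprocity) is assumed.

```python
# Scalar illustration of ANC self-interference cancellation after AF relaying.
import numpy as np

rng = np.random.default_rng(3)

h_a, h_b = 0.9 + 0.3j, 0.5 - 0.8j      # terminal-to-relay channels (illustrative)
g = 0.7                                 # relay amplification gain (illustrative)
sigma = 0.05                            # noise standard deviation

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_a = rng.choice(qpsk)                  # symbol transmitted by terminal A
x_b = rng.choice(qpsk)                  # symbol transmitted by terminal B

# MAC phase: both terminals transmit to the relay simultaneously.
y_r = h_a * x_a + h_b * x_b + sigma * (rng.normal() + 1j * rng.normal())

# BC phase: the relay amplifies and forwards; terminal A receives through h_a
# again (reciprocal channel assumed).
y_a = h_a * g * y_r + sigma * (rng.normal() + 1j * rng.normal())

# Self-interference cancellation at A: subtract the known backward-propagated
# copy of A's own symbol, then equalize the residual two-hop channel to B.
y_a_clean = y_a - h_a * g * h_a * x_a
x_b_hat = y_a_clean / (h_a * g * h_b)

print("transmitted by B:", x_b)
print("estimated at A:  ", np.round(x_b_hat, 3))
```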
In this case, each node experiences self-induced interference as well as multi-user interference, and thus necessitates more sophisticated interference management techniques. To shed insight on designing an efficient scheme for cellular multi-user two-way relaying, we first review the qualities and limitations of the prominent related works. Single-User Two-Way AF Relaying: In [9], [11]- [15], the authors consider two-way AF relaying between two terminal nodes, and propose linear transceiver designs subject to various performance metrics (e.g., sum rate maximization and error rate minimization). In [16], the authors analyze the random coding error exponent in two-way AF relay networks and investigate rate and power allocation. However, these single-user designs cannot be easily extended to multiuser systems as they do not accommodate for the presence of multi-user interference. Multi-User Two-Way AF Relaying with Fixed Terminal Node Transceivers: In [8], [17], the authors consider two-way AF relaying between multiple pairs of terminal nodes. These works neither exploit self-interference cancelation nor optimize the terminal node transceivers (i.e., the transceivers are predetermined offline). Instead, they rely solely on conventional SDMA processing at the RS to mitigate the effects of interference. In [18]- [20], the authors consider cellular two-way relaying between a multi-antenna base station (BS) and multiple single-antenna mobile stations (MSs). Since the MSs are only equipped with a single antenna, they cannot apply MIMO interference mitigation techniques. The focus of these works is to jointly design the BS and RS transceivers to heuristically perform UL -DL bidirectional channel inversion, thereby simultaneously zero-force the multi-user interference from the received signals of the BS and the MSs. Two-Way DF Relaying: In [10], [21], [22], the authors consider single-user two-way DF relaying with PNC. Albeit theoretically promising, the application of PNC is subject to stringent feasibility requirements that greatly restrict the choices of the MCS schemes that could be employed, and the decoding operation at the RS has high computational complexity. In [23], the authors consider the unique scenario of three-user three-way DF relaying using the arguments of interference alignment (IA) [24]- [26]. Yet, this scheme cannot be easily extended to a general number of users as IA may be infeasible. In this paper, we consider a cellular system consisting of a BS, an RS, and multiple MSs. All nodes are equipped with multiple antennas. We seek to design linear MIMO transceiver for each node to facilitate efficient two-way AF relaying. The contributions and technical challenges of this work are as follows. • Two-Way Relaying by Virtue of Signal Space Alignment: We show that for the cellular multi-user two-way relaying system under study, the MSs could suffer from tremendous multiuser interference. Yet, exploiting the advantage of self-interference cancelation, we can align the signal spaces of the uplink (UL) and downlink (DL) signals to reduce the dimensions occupied by multi-user interference at each MS. Ultimately, this allows us to alleviate the half-duplex loss and achieve the DoF of the channel. The paradigm of two-way relaying exploiting signal space alignment is also considered in different contexts in [27], [28]. 
Specifically, in [27] the authors consider a single-user twoway MIMO relaying system, for which they propose a precoding design that align the two-way signals and deduce an algorithm for optimizing the basis of the aligned signal space. However, this single-user design cannot be easily extended to multi-user systems as it does not accommodate for the presence of multi-user interference. On the other hand, in [28] the authors consider a multi-user two-way multi-carrier relaying system, for which they propose different frequency domain precoding designs based on aligning the two-way signals of each pair of communicating terminal nodes. These frequency domain precoding designs neglect the impact of multi-user interference, and rely on the intrinsic high frequency diversity to mitigate the impact of interference. Note that it is non-trivial to extend these precoding designs to practical two-way MIMO relaying systems whose signal space dimensions are not large. • Algorithm for Two-Way Relay Transceiver Design with Quality of Service Constraints: In consideration of the fact that users in cellular systems have different quality of service (QoS) priorities, we formulate the two-way relay transceiver design problem to maximize the minimum weighted per stream signal-to-interference plus noise ratio (SINR) among all UL and DL data streams. This problem does not lead to closed-form solutions and is non-convex, and we propose to solve it using a two-stage algorithm. In the first stage we focus on attaining signal space alignment, and in the second stage we aim at optimizing the weighted per stream SINRs 1 . We show that the second stage subproblem belongs to the class of multigroup multicast problems, which are NP-hard [30,Claim 2]. So we further propose an algorithm to efficiently solve the second stage subproblem using second-order cone programming (SOCP) techniques [31,Section 4.4.2]. Outline: The rest of this paper is organized as follows. In Section II we present the system model. In Section III we discuss the interference management model and formulate the twoway relay transceiver design problem. In Section IV we present the proposed transceiver design algorithm. In Section V we present numerical simulation results. Finally, in Section VI we conclude the paper. Notations: C M ×N denotes the set of complex M × N matrices. Upper and lower case bold letters denote matrices and vectors, respectively. vec(X) denotes the column-by-column vectorization of X. [ X 1 ; . . . ; X N ] and [ X 1 , . . . , X N ] denote the matrices obtained by vertically and horizontally concatenating X 1 , . . . , X N , respectively. diag( X 1 , . . . , X N ) denotes a block diagonal matrix having X 1 , . . . , X N in the main diagonal. [ X ] (a : b , c : d) denotes the a-th to the b-th row and the c-th to the d-th column of X. (·) T , (·) † , and (·) * denote transpose, Hermitian transpose, and conjugate, respectively. range(X) denotes the column space of X. null(X) denotes the orthonormal basis for the null space of X. rank(X) and nullity(X) denote the rank and the nullity of X, respectively. pinv(X) denotes the pseudo-inverse of X. ℜ(y) denotes the real component of y. || X || denotes the Frobenius norm of X. K denotes the generalized inequality with respect to the second-order cone, i.e., [ y ; x ] K 0 means that y ≥ || x ||. X 1 ⊗ X 2 denotes the Kronecker product of X 1 and X 2 . x ∼ CN (µ, Ξ) denotes that x is complex Gaussian distributed with mean µ and covariance matrix Ξ. E(·) denotes expectation. 
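The vectorization identity listed in the notation, vec(XYZ) = (Z^T ⊗ X) vec(Y), is what later allows the SINR constraints to be rewritten as functions of vec(F_R) (see Appendix D). A quick numerical sanity check, with arbitrary matrix sizes, is:

```python
# Numerical check of vec(X Y Z) = (Z^T kron X) vec(Y), using column-by-column
# vectorization as in the paper's notation.
import numpy as np

rng = np.random.default_rng(4)

def vec(M):
    """Column-by-column vectorization, matching the paper's definition."""
    return M.flatten(order="F")

X = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
Y = rng.normal(size=(4, 5)) + 1j * rng.normal(size=(4, 5))
Z = rng.normal(size=(5, 2)) + 1j * rng.normal(size=(5, 2))

lhs = vec(X @ Y @ Z)
rhs = np.kron(Z.T, X) @ vec(Y)          # note: transpose, not conjugate transpose

print("max |difference|:", np.max(np.abs(lhs - rhs)))   # ~1e-15
```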
K denotes the index set {1, . . . , K } and L k denotes the index set {1, . . . , L k }. 0 M ×N denotes an M × N matrix of zeros and I N denotes an N × N identity matrix. II. SYSTEM MODEL We consider a multi-user system where a BS communicates with multiple MSs as illustrated in Fig. 1. Due to the effects of path loss and shadowing, there is no direct link between the BS and the MSs, and a half-duplex RS is deployed to assist data transmission. In conventional relay systems, UL and DL transmissions utilize non-overlapping channel accesses (cf. Fig. 1a and Fig. 1b). We adopt the two-phase two-way relaying protocol whereby UL and DL transmissions share the channel: first in the multi-access (MAC) phase the BS and the MSs concurrently transmit to the RS (cf. Fig. 1c), then in the broadcast (BC) phase the RS forwards the aggregate signals to the BS and the MSs (cf. Fig. 1d). Specifically, we are interested in a time division duplex (TDD) system where conventional one-way relaying requires four time slots to complete the UL and DL transmissions while two-way relaying requires only two time slots as depicted in Fig. 2. The detailed model of the system under study is shown in Fig. 3. We consider two-way relaying between one BS and K MSs. For ease of exposition, we focus on the k-th MS and the same model applies to all the MSs. The BS is equipped with N B antennas, the RS is equipped with N R antennas, and the k-th MS is equipped with N k antennas. In the DL, the BS transmits L k data streams s A. Two-Way Relaying MAC Phase In the MAC phase, the BS precodes the DL data streams s D using the precoder matrix U using the precoder matrix W k ∈ C N k ×L k , and the transmitted signals of the k-th MS are given by x k = W k s (k) U . We make the following assumptions about the transmit power constraints of the BS and the MSs. Assumption 2 (BS and MS Transmit Power Constraints): The maximum transmit power of the BS is given by E(|| x B || 2 ) = || W B || 2 ≤ P B . The maximum transmit power of the k-th MS is Let H R ,B ∈ C N R ×N B denote the channel matrix from the BS to the RS, and let H R , k ∈ C N R ×N k denote the channel matrix from the k-th MS to the RS. It follows that the received signals of the RS can be expressed as is the AWGN. We make the following assumptions about the channel model. B. Two-Way Relaying BC Phase In the BC phase, the RS amplifies the received signals y R using the transformation matrix W R ∈ C N R ×N R , and the forwarded signals are given by We make the following assumption about the transmit power constraint of the RS. Assumption 4 (RS Transmit Power Constraint): The maximum transmit power of the RS is Accordingly, the received signals of the BS are given by where Likewise, the received signals of the k-th MS are given by C. Receive Processing The BS and MS receive processing consists of two steps. Inherent to the two-way relaying protocol, the BS and the MSs can exploit the a priori knowledge of their own transmitted signals to cancel the backward propagated self-interference 2 . After that, the BS and the MSs process the resultant signals using linear equalizers to produce data stream estimates. Specifically, the BS cancels the self-interference i B from the received signals y B and processes them using the 2 We shall elaborate in Section IV-D the assumptions on the side information available at each node to facilitate transceiver design and self-interference cancelation. U . 
The UL data stream estimates are given by In the same way, the k-th MS cancels the self-interference i k from the received signals y k and processes them using the equalizer matrix V k ∈ C L k ×N k to produce the DL data stream estimates In the UL the SINR of the data stream estimate [ s and in the DL the SINR of the data stream estimate [ s . (8) Furthermore, the achievable data rate for each data stream can be expressed as where the factor of 1/2 accounts for the half-duplex loss. III. INTERFERENCE MANAGEMENT AND TRANSCEIVER DESIGN PROBLEM FORMULATION In this section, we first discuss the motivations behind the interference management model to exploit signal space alignment. We then proceed to formulate the transceiver design problem. A. Interference Management via Signal Space Alignment As shown in (5) and (6), the UL and DL data stream estimates are given by where the DL data stream estimates s (k) D are prone to multi-user interference and it is nontrivial to mitigate its effects. On the one hand, it is detrimental to performance if we naively treat the multi-user interference as noise since its strength could be comparable to the desired signals. On the other hand, it is not always spectrally efficient if we were to mitigate interference by solely using conventional SDMA processing at the RS [8]. Under this approach, all the signal streams that constitute the RS forwarded signals (2) must be linearly independent, which implies that rank([ W R U, W R D ]) = 2L and the channel matrices must satisfy so the MSs (BS) can only transmit L ≤ N R /2 data streams in the UL (DL). Taking into consideration that each node is capable of canceling the backward propagated self-interference in the received signals, we can allow the self-interference to overlap with the desired signals since ultimately it does not affect the decoding of the desired signals. As such, we can facilitate interference management by perfectly aligning the UL and DL signal spaces to reduce the dimension of the multi-user interference space at each node as exemplified in Fig. 4. Mathematically, aligning the UL and DL signal spaces can be represented as and this can be manifested by constructing the RS forwarded signals (2) such that In order for all the UL and DL signal streams to be linearly independent, the rank of the RS forwarded signals should be and from (13) it suffices that the channel matrices satisfy so the MSs (BS) can transmit L ≤ N R data streams in the UL (DL). Comparing (14b) and (10b) shows that we can achieve superior multiplexing gain by exploiting signal space alignment than by performing conventional SDMA processing. Consider again the UL and DL data stream estimates in (5) and (6). Exploiting signal space alignment, in the DL data stream estimates s (k) D the UL and DL multi-user interference streams span the same signal space and appear as if they were one set of streams. In effect, UL and DL transmissions perform similarly to separated one-way relaying transmissions. Remark 1 (Feasibility of Signal Space Alignment): Aligning the UL and DL signal spaces as per (12) requires that the two-way signals between the BS and the k-th MS be aligned when received by the RS (i.e., a single node), which then broadcasts the aligned signals back to the BS and the k-th MS. Using the arguments of coordinated transmission and reception [32], it can be shown that the alignment operation is feasible if the number of antennas and data streams for each node and the rank of the channel matrices satisfy (14). 
Note that this is unlike conventional IA for interference channels, which is subject to stringent feasibility conditions [24]- [26], due to the requirement to simultaneously align interference at multiple nodes. Remark 2 (DoF of One-and Two-Way AF Relaying): As per [32], [33], the achievable DoF Thus, the achievable DoF of two-way relaying is given by B. Transceiver Design Optimization In the preceding discussion, we have focused on interference management without regard for QoS considerations. In practice, the users might have different service priorities, and their data streams might have heterogenous requirements (for example, in terms of throughput and reliability). However, as shown in (7)- (9), the achievable data rate for each data stream is intricately related to the channel qualities of all links and the transceiver matrices of all nodes. One issue is that typically the transmit power of the BS is substantially higher than the MSs, and the transceiver design should accommodate for the unequal transmit powers to ensure that both UL and DL transmissions are of satisfactory performance. Toward this end, we seek to optimize the transceiver design to maximize the minimum weighted SINR among all data streams. Specifically, we associate with each UL and DL data stream a weight factor that corresponds to its priority, where the higher the priority of a data stream the larger its weight factor. In the UL let [ ω By maximizing the minimum weighted per stream SINR, we can simultaneously enhance the performance of all data streams while accounting for their relative priorities. Altogether, we formulate the two-way relaying transceiver design problem as follows. Problem 1 (Two-Way Relaying Transceiver Design with QoS Constraints): Given the transmit power constraint of each node and the priority weight factor of each data stream, we design the two-way relaying transceiver processing -exploiting signal space alignment -to maximize the minimum weighted per stream SINR. Note that Problem Q is difficult to solve since it does not lead to closed-form solutions and is non-convex. As we show in the next section, it is also nontrivial to reformulate Problem Q in order to take advantage of its structures and solve it more easily 3 . A. Two-Stage Transceiver Design Paradigm The transceiver design problem, Problem Q, does not lead to closed-form solutions and is nonconvex (since the objective function and constraints are not jointly convex in all the optimization variables); thus, we cannot efficiently solve for all the transceiver matrices in a single-shot manner. We propose to solve the transceiver design problem using a two-stage paradigm: in the first stage we focus on attaining alignment between the UL and DL signal streams, and in the second stage we aim at optimizing the weighted per stream SINRs. To facilitate decomposing the transceiver design problem into two stages, we first extend the signal model as follows. We divide the RS transformation matrix into two components with the signal streams of the k-th MS. Substituting (17) into (2), the RS forwarded signals are Analogous to interference alignment (cf. [26] and references therein), aligning the UL and DL signal streams (16d) can be encompassed by the following conditions: Remark 3 (Interpretation of (18)): We design the precoder matrices W B, k and W k such that the two-way signals between the BS and the k-th MS are perfectly aligned at the RS and linearly independent to other users' signals. 
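As an illustration of the alignment condition, the sketch below constructs, for each MS, a precoder from the principal right singular vectors of its channel to the RS and then chooses the corresponding BS precoder so that the DL signal arrives at the RS in exactly the same subspace as that user's UL signal. This is only a feasibility sketch under stated assumptions (N_B ≥ N_R, a full-rank BS-RS channel, i.i.d. random channels, an illustrative stream split); it is not the optimized beam-direction and power-allocation design of Lemma 2.

```python
# Feasibility sketch of UL/DL signal space alignment at the relay.
import numpy as np

rng = np.random.default_rng(5)

def cplx(shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

N_B, N_R = 4, 4
N_k = [2, 2, 2]          # antennas at the K = 3 MSs (illustrative)
L_k = [2, 1, 1]          # data streams per MS (illustrative)

H_RB = cplx((N_R, N_B))                       # BS -> RS channel
H_Rk = [cplx((N_R, n)) for n in N_k]          # MS_k -> RS channels

W_k, W_Bk = [], []
for H, L in zip(H_Rk, L_k):
    # MS precoder: L principal right singular vectors of H_{R,k}.
    _, _, Vh = np.linalg.svd(H)
    Wk = Vh.conj().T[:, :L]
    # BS precoder for user k: invert H_{R,B} so that H_RB @ Wbk == H_Rk @ Wk.
    Wbk = np.linalg.pinv(H_RB) @ (H @ Wk)
    W_k.append(Wk)
    W_Bk.append(Wbk)

# Check pairwise alignment and overall linear independence (rank == total streams).
for k, (H, Wk, Wbk) in enumerate(zip(H_Rk, W_k, W_Bk), start=1):
    err = np.linalg.norm(H_RB @ Wbk - H @ Wk)
    print(f"user {k}: alignment error = {err:.2e}")

aligned = np.hstack([H @ Wk for H, Wk in zip(H_Rk, W_k)])
print("rank of aligned signal space:", np.linalg.matrix_rank(aligned),
      "(total streams =", sum(L_k), ")")
```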
Accordingly, the RS forwarded signals can be expressed as where Φ diag(φ (1) , . . . , φ (K) ) and Ψ diag(ψ (1) , . . . , ψ (K) ) represent the effective gains of the UL and DL data streams, respectively. The transmit power constraint of the RS forwarded signals is given by We define the SINRs of the RS forwarded signals as From (19), (3) and (4), the end-to-end received signals of the BS can be expressed as and the end-to-end received signals of the k-th MS can be expressed as . . , φ (K) ) represent the effective gains of the UL interference streams, and Ψ k diag(ψ (1) , . . . , ψ (k−1) , 0 L k ×1 , ψ (k+1) , . . . , ψ (K) ) represent the effective gains of the DL interference streams. Therefore, the UL and DL data stream estimates (5), (6) can be expressed as and the end-to-end SINRs of the data stream estimates (7), (8) are equivalently given by . (26) Lemma 1 (Decomposition of Transceiver Design): The transceiver design problem, Problem Q, can be equivalently decomposed into two stages. The first stage processing finds the BS and MS precoder matrices and the RS equalizer matrix W B , {W k } K k=1 , A R , subject to the alignment conditions (18), to maximize the minimum weighted SINR of the RS forwarded signals. First Stage Processing The second stage processing finds the RS precoder matrix and the BS and MS equalizer matrices to maximize the minimum weighted end-to-end SINR of the data stream estimates. Second Stage Processing Proof: Refer to Appendix A. The top-level steps of the proposed two-stage transceiver design are summarized in Algorithm 1 and illustrated in Fig. 5. We shall elaborate the details of the first and second stage processing in the following subsections. B. First Stage Processing To solve the first stage processing, Problem M, we focus our attention on coordinative eigenmode transmission at the MSs, zero-forcing equalization at the RS, and zero-forcing transmission at the BS [34]. • MS Precoder Matrices: Let G k denote the set of the right singular vectors of the channel matrix k ∈ G k is the beam direction, and λ (l) k is the allocated power satisfying the transmit power constraint • RS Equalizer Matrix: The zero-forcing equalizer matrix is given as As such, the SINRs of the RS forwarded signals (20) can be equivalently expressed as where κ Lemma 2 (Beam Directions and Power Allocation at the BS and MSs): To maximize the minimum weighted SINR of the RS forwarded signals, the power allocation at the BS and MSs are, respectively, given by It follows that the weighted SINRs of the RS forwarded signals can be expressed as and selection of the beam directions to maximize the minimum weighted SINR can be performed using combinatorial search. Proof: Refer to Appendix B. C. Second Stage Processing We now proceed to describe the algorithm for solving the second stage processing, Problem B. As per (25)- (26), the SINRs of the data stream estimates are not jointly convex in the RS precoder matrix and the BS and MS equalizer matrices F R ,V B , {V k } K k=1 . However, for a fixed precoder matrix F R there are closed-form solutions for the equalizer matrices V B and V k ; conversely, for fixed V B and V k we can cast the problem of solving for F R as a quasi-convex problem. This motivates the approach to progressively refine the transceiver matrices by iteratively alternate between solving for the BS and MS equalizer matrices and the RS precoder matrix. 
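The alternating procedure can be sketched as the skeleton below: update the equalizers for a fixed RS precoder, then update the RS precoder through the bisection-based feasibility search described next. Both update_equalizers and socp_feasible are placeholder stand-ins (the real steps are the closed-form MMSE equalizers and the SOCP feasibility problem of Problem B_R solved with a conic solver); only the control flow is meant to be illustrative.

```python
# Skeleton of the second-stage alternating optimization with bisection on the
# minimum weighted per-stream SINR gamma_0.
import numpy as np

def update_equalizers(F_R):
    """Stand-in for the closed-form MMSE equalizer update for a fixed RS precoder."""
    return None

def socp_feasible(gamma0, F_R, equalizers):
    """Toy stand-in for the SOCP feasibility check at target min weighted SINR gamma0.

    In the real algorithm this builds second-order cone constraints in vec(F_R)
    and calls a conic solver; here feasibility is simply gamma0 <= 3.7.
    """
    return gamma0 <= 3.7

def bisect_gamma0(F_R, equalizers, lo=0.0, hi=100.0, tol=1e-3):
    """Bisection on gamma0: keep the subinterval in which the problem is feasible."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if socp_feasible(mid, F_R, equalizers):
            lo = mid
        else:
            hi = mid
    return lo

F_R = np.eye(4)                               # illustrative initial RS precoder
for it in range(3):
    eq = update_equalizers(F_R)               # step 1: equalizers for fixed F_R
    gamma0 = bisect_gamma0(F_R, eq)           # step 2: RS precoder via bisection + SOCP
    print(f"iteration {it}: largest feasible min weighted SINR ≈ {gamma0:.3f}")
    # With the real updates, gamma0 is non-decreasing across iterations (Appendix C).
```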
In this regard, we alternatingly optimize each one of the RS precoder matrix and the BS and MS equalizer matrices in the form of the following subproblems, and the convergence proof is provided in Appendix C. RS Precoder Matrix First, for Problem B B and Problem B k , the per stream SINRs are maximized with a minimum mean squared error (MMSE) equalizer matrix [35]. Hence, the BS equalizer matrix is given by where is the covariance matrix of the aggregate noise at the BS. Likewise, the k-th MS equalizer matrix is given by where D ] (l) . As per [30,Claim 2], this multigroup multicast problem is NP-hard 4 . We propose to solve for the RS precoder matrix F R using Algorithm 2 as derived in Appendix D. In a nutshell, we cast Problem B R as a quasiconvex problem and solve it using the bisection method [31,Section 4.2.5]. To do so, we define the SOCP feasibility problem of designing F R that achieves a target value of the minimum weighted per stream SINR γ 0 as 5 Starting with an interval that is expected to contain the optimum value of γ 0 , we repeatedly bisect the interval and select the subinterval in which Problem B R is feasible until γ 0 converges. D. Implementation Considerations First, we make the following assumptions about the synchronization requirement on the UL and DL signals. Second, we make the following assumptions on the side information available at each node to facilitate transceiver design and self-interference cancelation. Under these assumptions, the proposed transceiver design problem can be solved in a distributed fashion. As per Assumption 6, the RS can locally solve the transceiver design problem (using Algorithm 1) and broadcast the RS transformation matrix to the BS and MSs. Assumption 7 (Side Information at the BS and MSs): The BS and each MS has knowledge of the channel matrix between itself and the RS, the two-hop effective channel matrix, and the RS transformation matrix. Thus, the side information at the BS and the k-th MS include For instance, the BS and each MS can estimate the channel matrix between itself and the RS by observing the reciprocal reverse channel, and can estimate the two-hop effective channel matrix using pilot-assisted techniques. As per Assumption 7, the BS and MSs can locally determine their precoder and equalizer matrices (using Lemma 2,(36), and (37)), and have sufficient information to deduce and cancel self-interference. V. SIMULATION RESULTS AND DISCUSSIONS In this section, we provide numerical simulation results to assess the performance of the proposed transceiver design. For illustration, we consider the following simulation settings. A. Simulation Settings We consider a system with K = 3 MSs. In particular, we focus on MIMO configurations similar to those defined in the IEEE 802.16m standard [5]: the BS is equipped with up to N B = 8 antennas and the MSs are equipped with N k = {2, 4} antennas. As an example, we investigate the scenario in which the BS exchanges L 1 = 2, L 2 = 1, and L 3 = 1 data streams with the MSs. We evaluate the performance of the proposed scheme using the packet error rate (PER) and the average sum rate 6 as performance metrics. In the PER simulations, we employ the convolutional turbo code (CTC) defined in the IEEE 802.16m standard [5, Section 16.3.10.1.5]: each packet contains eight information bytes coded at rate 1/3 and modulated using QPSK. We compare the performance of the proposed scheme against the following prominent baseline schemes. 
Since these schemes were originally designed for single-antenna MSs, they do not consider MS precoder and equalizer designs. We extend these schemes to generate the k-th MS precoder matrix W k from the principal right singular vectors of the channel matrix H R , k with equal power allocation across the data streams, and we obtain the k-th MS equalizer matrix as V k = (W k ) T . • Baseline 1 (Bidirectional Channel Inversion Naive Algorithm [19]): The BS precoder and equalizer matrices and the RS transformation matrix are determined using pseudo-inverse methods. • Baseline 2 (Bidirectional Channel Inversion Greedy Algorithm [20]): The BS precoder and equalizer matrices are determined using pseudo-inverse methods. A greedy iterative algorithm is employed to determine the RS transformation matrix that maximizes the asymptotic per stream SINRs. • Baseline 3 (Two-Way Relaying using Conventional SDMA Processing [8]): The RS transformation matrix is devised to spatially multiplex all data streams. Since this scheme does not provide BS precoder and equalizer designs, we generate the BS precoder matrix W B from the principal right singular vectors of the channel matrix H R ,B with equal power allocation across the data streams, and we obtain the BS equalizer matrix as V B = (W B ) T . In the simulation results we define the signal-to-noise ratio (SNR) as P B /N 0 . We set the RS and MS transmit powers such that P B /L = P R /L = P k /L k , so the transmit power per data stream is the same for all nodes. We assume i.i.d. Rayleigh fading, so the channel matrices are B. Performance Comparisons In Fig. 6 and Fig. 8, we present the performance results when the RS is equipped with N R = 4 antennas. Note that in this setting the number of spatial dimensions at the RS does not suffice 6 The average sum rate is defined as E are the UL and DL per stream achievable data rates, respectively, as given in (9). 7 Note that rank(H R ,B ) = min{N B , N R } and rank(H R ,k ) = min{N k , N R } with probability 1. for performing two-way relaying using conventional SDMA processing (i.e., N R < 2L), and so Baseline 3 is not feasible. Moreover, we assume that User 2 has higher service priority than the other users; as an example, we set the priority weight factors to [ ω Fig. 6 we show the PER performance results when the BS is equipped with N B = 4 antennas and the MSs are equipped with N k = 2 antennas. It can be seen that the proposed scheme exhibits better error performance than the baseline schemes. For instance, the proposed scheme achieves in excess of 10 dB SNR gain over the baseline schemes at 10 −2 PER. This is attributed to the fact that the proposed scheme efficiently exploits the multiple spatial dimensions at the MSs, whereas the baseline schemes were originally designed for single antenna MSs and cannot efficiently exploit the available spatial dimensions. On the other hand, reflecting the QoS priority settings, for the proposed scheme User 2 has approximately 3 dB SNR gain over the other users for all PER values smaller than 10 −1 . Second, in Fig. 8 we show the average sum rate performance results. In Fig. 8a, we show the average data rate versus SNR when the BS is equipped with N B = 4 antennas. It can be seen that the proposed scheme achieves significant data rate gain over the baseline schemes. Moreover, the proposed scheme alleviates the half-duplex loss (cf. Remark 2 and Corollary 1) and achieves the DoF equal to min{N B , N R , K k=1 N k } = 4. In Fig. 
8b, we show the average sum rate versus the number of BS antennas at 25 dB SNR. It can be seen that the data rate of the proposed scheme improves monotonically with the number of antennas at the BS and MSs. Note that the inferior performance of Baseline 3 is due to the fact this scheme requires more spatial dimensions at the RS to be feasible. D ] (l) = 1. It can be seen that the proposed scheme substantially outperforms Baseline 3 (e.g., up to 19 dB SNR gain at at the RS to mitigate interference as well as to achieve beamforming gain, whereas Baseline 3 uses all the spatial dimensions to null interference. VI. CONCLUSIONS In cellular multi-user two-way AF relaying systems, each node experiences self-induced backward propagated interference as well as multi-user interference. As a result, conventional self-interference cancelation approaches for single-user two-way relay systems do not suffice to mitigate the impact of interference. We applied an interference management model exploiting where γ (k, l) U and γ (k, l) D are the SINRs of the RS forwarded signals (20) and Consider ξ (k, l) U for example. Note that where (a) follows from the fact that the terms [ V As per (41), the minimum weighted end-to-end SINR of the data stream estimates is limited by the minimum weighted SINR of the RS forwarded signals, i.e., By this property, the transceiver design problem, Problem Q, can be decomposed into two stages. In the first stage processing, we find the BS and MS precoder matrices and the RS equalizer matrix that maximize the minimum weighted SINR of the RS forwarded signals, i.e., arg max , and thereby implicitly maximize the achievable minimum weighted end-to-end SINR of the data streams estimates. Then, in the second stage processing, we find the RS precoder matrix and the BS and MS equalizer matrices to holistically maximize the minimum weighted end-to-end SINR of the data streams estimates, i.e., arg max Therefore, for fixed beam directions { g (l) k , g (k, l) B }, the power allocation at each node can be separately determined. For instance, the power allocation at the k-th MS can be determined It can be shown that (44) is satisfied with λ Analogously, the power allocation at the BS is given by λ APPENDIX C: CONVERGENCE OF THE SECOND STAGE PROCESSING At the q-th iteration of the second stage processing, we denote the RS precoder matrix as ≥ min hence the minimum weighted per stream SINR satisfies It follows from (45) and (46) and equality (a) follows from the Kronecker product property vec(XYZ) = ((Z) T ⊗X) vec(Y). With some algebraic manipulations and using the aforementioned Kronecker product property, the UL SINR constraints (33b) can be expressed as In the same manner, the DL SINR constraints can be expressed as Therefore, Problem B R can be equivalently expressed as s.t. α , ∀k ∈ K, ∀l ∈ L k , (51b) Note that the transmit power constraint (51c) is convex in the RS precoder matrix F R , but the SINR constraints (51b) are non-convex in F R and the minimum weighted per stream SINR slack variable γ 0 since α (k, l) U and α (k, l) D are not affine in F R and γ 0 . In order to obtain a mathematically tractable solution to Problem B R , we cast the SINR constraints as convex functions in F R by tightening these constraints as follows. We define The BS exchanges L 1 = 2, L 2 = 1, and L 3 = 1 data streams with the MSs. User 2 has higher service priority than the other users: [ ω
2012-05-13T04:40:07.000Z
2012-05-13T00:00:00.000
{ "year": 2012, "sha1": "171d6f379b61a4cd4c5eca5141d3b72a2ef916a1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1205.2828", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e03f706128ab468ecd456860126e85e40d2c77c8", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
26534349
pes2o/s2orc
v3-fos-license
Effects of hemoperfusion adsorption and/or plasma exchange in treatment of severe viral hepatitis: A comparative study AIM: Non-bioartificial liver has been applied to clinic for quite a long time, but the reported efficacy has been very different. The aim of this study was to compare the efficacy and safety of hemoperfusion adsorption, plasma exchange and plasma exchange plus hemoperfusion adsorption in treatment of severe viral hepatitis. METHODS: Seventy-five patients with severe viral hepatitis were treated with hemoperfusion adsorption therapy (24 cases), plasma exchange therapy (17 cases) and plasma exchange plus hemoperfusion adsorption therapy (34 cases). The data of liver function, renal function, blood routine test, prothrombin time (PT) and prothrombin activity (PTa) pre-and post-therapy were analyzed. RESULTS: Clinical symptoms of patients improved after treatment. The levels of aminotransferase, total bilirubin, direct bilirubin decreased significantly after 3 therapies ( P <0.05 or P <0.01). PT, the level of total serum protein decreased significantly and PTa increased significantly after plasma exchange therapy and plasma exchange plus hemoperfusion adsorption therapy ( P <0.05 or P <0.01). The side effects were few and mild in all patients. Effects of hemoperfusion adsorption and/or plasma exchange in treatment of severe viral hepatitis: A comparative study. INTRODUCTION The treatment of severe viral hepatitis is always intractable in clinic. The previous non-bioartificial liver has widely been applied to clinic treatment [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16] , but its reported efficacy is various. In order to make an objective evaluation and comparison of the effects of non-bioartificial liver in the treatment of severe viral hepatitis, we chose three therapies of hemoperfusion adsorption, plasma exchange and plasma exchange plus hemoperfusion adsorption and compared their efficacy and safety. Materials Sixty-four males and 11 females aged from 23 to 66 (41.3 on average) years with severe viral hepatitis were hospitalized in our section from January 1998 to February 2002. The diagnosis of 75 patients meeting the criteria for severe viral hepatitis established in National Viral Hepatitis Symposium [17] , was all severe chronic hepatitis. The conditions of 75 patients were 4 at the early stage, 31 at the middle stage, 40 Methods Hemoperfusion adsorption therapy In the computercontrolled system of Type HSZ2000 artificial liver device, we chose the program of hemoperfusion adsorption therapy. The blood was pumped out of the body with the flow velocity of 60-80 mL/min and into a new type of activated charcoal column for adsorption and then backed into the body with 100 mL saline through deferens. Meanwhile, the same quantity of protamine was infused to neutralize heparin so that coagulation time (CT) could become normalized. Each patient received hemoperfusion adsorption for 1 to 4 times and all patients received 52 times in total, averaging 2.2 times per person. Plasma exchange therapy In the computer-controlled system of Type HSZ2000 artificial liver device, we chose the program of plasma exchange therapy. The blood was pumped out of the body with the flow velocity of 60-80 mL/min and into a plasma exchange filter to discard the plasma and then mixed up with fresh frozen plasma (FFP) with flow velocity of 30-50 mL/min to be reperfused back into the body with 50 mL 200 g/L albumin solution and 100 mL saline through deferens. 
The balance of output and input should be controlled closely. Meanwhile, the same quantity of protamine was infused to neutralize heparin so that coagulation time (CT) could get normalized. Each patient received plasma exchange for 1 to 4 times and all patients received 36 times in total, averaging 2.1 times per person. Plasma exchange plus hemoperfusion adsorption therapy In the computer-controlled system of Type HSZ2000 artificial liver device, we chose the program of plasma exchange plus hemoperfusion adsorption therapy. The blood was pumped out of the body with the flow velocity of 60-80 mL/min and into a plasma exchange filter to discard the plasma and then through activated charcoal column for adsorption and then mixed up with fresh frozen plasma (FFP) with flow velocity of 30-50 mL/min to be reperfused back into the body with 50 mL 200 g/L albumin solution and 100 mL saline through deferens. The balance of output and input should be controlled closely. Meanwhile, the same quantity of protamine was infused to neutralize heparin so that coagulation time (CT) could get normalized. Each patient received plasma exchange plus hemoperfusion adsorption from 1 to 4 times and all patients received 65 times in total, averaging 1.91 times per person. This process could last for one and a half to three hours and the exchanged plasma volume was up to 2 500-3 000 mL. Experimental tests The blood samples were collected before and after each treatment to check the liver function, renal function, PT and for blood routine test. Clinical therapeutic efficacy The standard to evaluate the clinical curative effect refers to the references [18,19] . Statistical analysis All data were shown as mean±SD. t test was used to compare the data before and after treatment. The effect of hemoperfusion adsorption therapy Liver function improved significantly after treatment ( Table 1). The levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin (TB) and direct bilirubin (DB) were decreased significantly (P<0.05 or 0.01). Total serum protein (TSP) level was decreased but not significantly (P>0.05). Coagulation function improved after treatment. Prothrombin time decreased from 24.4 s to 21.31 s (t=1.268, P>0.1) and prothrombin activity was increased from 28.61% to 33.14% (t=1.216, P>0.1), but no significant difference was presented on statistical analysis. ALT: alanine aminotransferase, AST: aspartate aminotransferase, TB: total bilirubin, DB: direct bilirubin, TSP: total serum proteins. The effect of plasma exchange therapy Liver function improved greatly after treatment ( The effect of plasma exchange plus hemoperfusion adsorption therapy Liver function improved immensely after treatment ( Table 3). The levels of ALT, AST, TB, DB and TSP were decreased and significant difference was presented (P<0.05 or 0.01). Coagulation function improved greatly. Prothrombin time decreased from 28.0 s to 22.9 s (P<0.05) and prothrombin activity was increased from 25.8% to 30.9% (P<0.05). Both showed significant differences on statistical analysis. The data of renal function and blood routine test pre-and posttherapy Renal electrolytes showed no obvious changes and the levels of urea nitrogen and creatinine were shown no significant difference pre-and post-therapy (P>0.01 or 0.05). There was no significant difference in the levels of white blood cells (WBC), red blood cells (RBC), hemoglobin (Hgb) and platelet (PLT) pre-and post-therapy (P>0.01 or 0.05). 
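For illustration, the pre-/post-therapy comparison described in the statistical analysis (mean ± SD with a t test) can be reproduced on made-up numbers as below; the bilirubin values are invented for the example, and a paired t test is shown as one natural reading of comparing the data before and after treatment in the same patients.

```python
# Illustrative pre-/post-therapy comparison: mean ± SD and a paired t test.
import numpy as np
from scipy import stats

pre  = np.array([402.1, 356.8, 511.3, 298.4, 467.0, 389.5, 421.7, 334.2])   # µmol/L, illustrative
post = np.array([295.6, 270.1, 388.9, 231.5, 352.4, 301.8, 330.6, 260.3])

print(f"pre:  {pre.mean():.1f} ± {pre.std(ddof=1):.1f}")
print(f"post: {post.mean():.1f} ± {post.std(ddof=1):.1f}")

t, p = stats.ttest_rel(pre, post)            # paired t test, pre vs post therapy
print(f"t = {t:.2f}, P = {p:.4f}")           # P < 0.05 -> significant decrease
```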
Side effects In the group with hemoperfusion adsorption therapy, 2 patients had side effects twice, skin itch and rash once, pyrogen reaction once. In the group with plasma exchange therapy, 3 patients experienced side effects 3 times, skin itch and rash once, hemolytic reaction once and transfusion reaction once. In the group with plasma exchange plus hemoperfusion adsorption therapy, 10 patients had side effects thirteen times, skin itch and rash once, numbed face and 4 limbs 4 times, blood pressure fluctuation once and hypothermia once, hemolytic reaction once, transfusion reaction once, pyrogen reaction once. All side effects were relieved after treatment and had no influence on the whole therapeutic process. DISCUSSION It has been proven that hemoperfusion adsorption plays a role in removal of bilirubin and intermediate molecular substances. Previously owing to the poor technique of activated charcoal filter, there were obvious side effects in application of hemoperfusion adsorption therapy. Especially some severe side effects such as serious hemorrhage following platelet destruction and hemolysis following erythrocyte destruction once made the clinical application and basic research of artificial liver support system home and abroad go to a standstill for a long period of time. In the past few years, because the advance of technique of activated charcoal filter and the particles of charcoal became smaller and their surface was processed, the chance of platelet destruction and erythrocyte destruction was He NH et al. Three artificial liver support systems in treatment of severe viral hepatitis much less when the blood flowed through activated charcoal filter, biocompatibility of activated charcoal improved [20] . Therefore, the therapy of hemoperfusion adsorption has again been applied to clinic to treat the liver failure patients [21][22][23] . That 24 patients had an improvement in clinical symptoms temporarily and in liver function indicates hemoperfusion adsorption therapy has a temporary supportive effect on liver failure caused by severe viral hepatitis. The mechanism of severe viral hepatitis is the cooperation of immunopathological lesion caused by hepatitis virus and the secondary lesion of liver cells that results from the great deal of cytokine such as tumor necrosis factor-alpha (TNF-α), interleukin-1β (IL1-β), interleukin-10(IL10) released by intrahepatic and extrahepatic mononuclear macrophages due to the enteroendotoxemia following the impairment of hepatic barrier function. In this process, the secondary lesion plays an important role [24][25][26][27][28] . Cytokine can induce hepatocyte apoptosis [29] . Cytokine and endotoxin removal can relieve hepatic lesion, reduce leucocyte emigration and platelet aggregation and maintain the intracellular stablization so as to delay or reverse the disease progress and improve the prognosis. Hemoperfusion adsorption therapy can eliminate endotoxin and cytokine nonspecifically and play an important role in supporting treatment of liver failure [30] . Plasma exchange separated and discarded plasma of liver failure patients to remove the toxic substances (especially those binding with proteins) and compensated with normal fresh frozen plasma to supplement some essential substances such as coagulation factors, albumin, immunoglobin so as to ameliorate the microenvironment of liver and accelerate the liver regeneration and the liver function recovery [31,32] . 
The therapeutic effects in the 17 acute liver failure patients who received plasma exchange therapy were similar to those reported abroad [32,33], but quite different from those reported in China [22]. This may be related to the severity of illness in our series, in which all patients were in the intermediate or late stages of liver failure caused by severe chronic hepatitis [18,19]; in addition, the number of plasma exchange sessions should be taken into consideration. Although the prognosis of patients receiving plasma exchange therapy did not live up to our expectations, the clinical symptoms of patients after treatment showed temporary relief, and liver function and coagulation function improved obviously. Thus plasma exchange therapy has a temporary supportive effect on liver failure caused by severe viral hepatitis. Plasma exchange plus hemoperfusion adsorption therapy combines plasma exchange and hemoperfusion adsorption, so it is more beneficial to the amelioration of the liver microenvironment, liver regeneration and the recovery of liver function. The therapeutic effects in the 34 patients receiving plasma exchange plus hemoperfusion adsorption therapy were consistent with those reported abroad for the same combined therapy [33,34]. The temporary relief of clinical symptoms after treatment and the obvious improvement in liver function and coagulation function indicate that this therapy has a temporary supportive effect on liver failure caused by severe viral hepatitis. Comparing the three groups pre- and post-therapy, the improvement in coagulation function and the decrease in plasma proteins produced by plasma exchange therapy and by plasma exchange plus hemoperfusion adsorption therapy were more obvious than those produced by hemoperfusion adsorption therapy. The main advantages of hemoperfusion adsorption therapy are its low cost and smaller protein loss. Theoretically, plasma exchange therapy is a relatively complete liver substitution therapy and its effects have been proven, but the large supply of plasma required, the high cost, and the risk of infection with blood-transmitted diseases limit its use; moreover, plasma exchange deprives patients of hepatocyte growth substances. Consequently, it may harm liver regeneration and long-term therapeutic effects. The concentrations of plasma proteins will decrease if plasma is not adequately replaced after a great loss. Using a liver failure rat model that received total blood exchange, Eguchi [35] found that hepatocyte regeneration was suppressed. The fresh frozen plasma used for exchange contains a great deal of citrate, which can increase the incidence of side effects after infusion and can harm hepatocyte energy metabolism and regeneration. How to improve these therapies and combine them with bioartificial liver support systems in order to obtain good effects and reduce side effects awaits further study [36][37][38][39][40][41].
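To summarise the safety data reported above in one place, the sketch below tabulates the proportion of patients with side effects in each group and applies a chi-squared test of independence. The group sizes (24, 17 and 34) and affected counts (2, 3 and 10) are taken from the text; the test itself is not reported in the paper and is shown only as an illustration (with counts this small, Fisher's exact test would be the more appropriate choice).

```python
import numpy as np
from scipy.stats import chi2_contingency

affected = np.array([2, 3, 10])      # patients with side effects, per group (from the text)
group_n  = np.array([24, 17, 34])    # hemoperfusion, plasma exchange, combined therapy
table = np.vstack([affected, group_n - affected])   # 2 x 3 contingency table

chi2, p, dof, _ = chi2_contingency(table)
for name, a, n in zip(["HP", "PE", "PE+HP"], affected, group_n):
    print(f"{name}: {a}/{n} = {a / n:.1%}")
print(f"chi-squared = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```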
Application of Broccoli Leaf Powder in Gluten-Free Bread: An Innovative Approach to Improve Its Bioactive Potential and Technological Quality In comparison to conventional bread, gluten-free bread (GF) shows many post-baking defects and a lower nutritional and functional value. Although broccoli leaves are perceived as waste products, they are characterised by a high content of nutrients and bioactive compounds. The present study evaluated the nutritional value, technological quality, antioxidant properties, and inhibitory activity against the formation of advanced glycation end-products (AGEs) of GF enriched with broccoli leaf powder (BLP). Compared to the control, gluten-free bread with BLP (GFB) was characterised by a significantly (p < 0.05) higher content of nutrients (proteins and minerals), as well as improved specific volume and bake loss. However, what needs to be emphasised is that BLP significantly (p < 0.05) improved the antioxidant potential and anti-AGE activity of GFB. The obtained results indicate that BLP can be successfully used as a component of gluten-free baked products. In conclusion, the newly developed GFB with improved technological and functional properties is an added-value bakery product that could provide health benefits to subjects on a gluten-free diet. Introduction Bread is a staple food that is willingly consumed all over the world every day [1]. However, for some individuals suffering from celiac disease and other gluten-related disorders (wheat allergy and non-celiac gluten sensitivity), the consumption of conventional wheat bread and other gluten-containing products is harmful [2]. In those patients, the dietary gluten proteins or, specifically, the gliadin fraction of wheat and the prolamins from barley (hordeins) and rye (secalins) can lead to deleterious health risks and complications. Nowadays, the only available treatment for gluten-related disorders is adherence to a gluten-free diet. Gluten-free breadmaking is a process that varies substantially from conventional breadmaking-in particular, in the ingredients used, batter rheological behaviour, and overall quality of the final product [3]. Due to the absence of the continuous three-dimensional gluten network that is responsible for the rheological properties of the dough and the development of high-quality bread, gluten-free breadmaking is challenging [4]. Therefore, the production of gluten-free bread (GF) requires complex formulations, consisting of a mixture of non-gluten basic ingredients and various additives mimicking the viscoelastic properties of gluten [5], as well as diverse technological solutions. In comparison with conventional bread, a GF shows many post-baking defects, such as unattractive appearance (irregular crust surface and pale colour), poor mouthfeel and flavour, and a shorter shelflife. Over the last decade, considerable advances were made to improve the technological Poland), salt, and water were the main ingredients of GFC (Table 1). Previously characterised BLP [24] was incorporated into the GFB by replacing 5% (w/w) of corn starch in the GFC formula. This level of substitution was based on a preliminary study that showed that 5% was the acceptable replacement level that did not affect the sensory properties of bread, whereas the GFB with 7% BLP had too intense cabbage flavour (data not shown). GFC-Control gluten-free bread, GFB-Gluten-free bread enriched with broccoli leaf powder, and BLP-Broccoli leaf powder. 
To prepare GFs, all solid ingredients were mixed for 5 min at minimum speed using a KitchenAid Professional K45SS mixer (KitchenAid Europa, Inc, Brussels, Belgium) in the stainless-steel bowl with a flat beater. Yeast, salt, and sugar were dissolved in the water and added to the dry mixture, together with oil. The batter was mixed for 12 min at speed 2. Then, a 240-g sample of the resulting batter was placed in a greased hexagon-shaped bread pan (10 cm × 10 cm × 9 cm length, width, and height, respectively) and proof for 40 min at 35 • C and 70% humidity. Experimental GFs were baked for 30 min at 220 • C in the laboratory oven (AB model DC-21, SVEBA DAHLEN, Fristad, Sweden). Nine loaves were baked from each formula. After baking, all bread loaves were cooled for at least 2 h at room temperature. Then, GFs were packed in clip-on plastic bags and kept in the dark at room temperature for further analysis. Products of two independent batches, fresh (2 h after baking) and/or stored (24 and 72 h after baking), were analysed. Determination of Proximal Chemical Composition and Energy Value The basic chemical composition was determined in freeze-dried GFs according to the standard method [27]: moisture content was analysed using the drying method (AOAC 925.10), proteins content was determined with the Kjeldahl method (N × 6.25 for nitrogen to protein conversion) (AOAC 979.09), and fat content using Soxhlet extraction with hexane (AOAC 923.03); total ash was determined using the gravimetric method by burning in a muffle furnace at 550 • C for 10 h (AOAC 923.03). The total carbohydrate content was calculated by subtracting the values of the moisture, protein, fat, and ash content from 100. The energy values (kJ) were calculated by multiplying the amount of macronutrients by the corresponding conversion factors (17 kJ/g for protein, 37 kJ/g for fat, and 17 kJ/g for carbohydrates) [28]. The conversion factor for calories calculation is 1 kJ = 0.239 kcal. Determination of Physical Parameters The weight of GFs was evaluated using a digital balance with 0.01-g accuracy. The loaf volume was determined using a modified standard rapeseed displacement method, in which millet seeds were used instead of rapeseed. The specific volume (SV) was calculated as a loaf volume divided by its weight. Density (D) was calculated as a loaf weight divided by its volume. Bake loss was calculated as indicated in Equation (1). where: a-the initial weight of batter before baking (g), and b-the weight of baked and cooled GFs (g). The crust and crumb colour of GFs was evaluated using a HunterLab ColorFlex (Hunter Associates Laboratory, Inc, Reston, VA, USA). Crust colour was determined at the middle point of the top of the loaf crust, while crumb colour was analysed at the middle point of the central 2-cm slice. The measurements were performed through a 3-cm diameter diaphragm containing an optical glass. The colour was expressed in accordance with the CIELab system, and the parameters determined were: lightness (L* = 0 (black) and L* = 100 (white) and chromatic components: a* (−a* = greenness and +a* = redness) and b* (−b* = blueness and +b* = yellowness). Values were the mean of at least nine replicates. To present the appearance of crumb and crust of exemplary GFC and GFB scans of the example central slice of each experimental, GF was made using a flatbed scanner (Epson Perfection V200 Photo) supported by Epson Creativity Suite Software Images (Figure 1). 
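A minimal sketch of the loaf and energy calculations described above. The bake-loss formula is written out from the stated definitions of a and b, since Equation (1) itself is not reproduced in the text; all numeric inputs are illustrative placeholders except the 240 g of batter per pan.

```python
def carbohydrate_by_difference(moisture, protein, fat, ash):
    # "Total carbohydrate ... calculated by subtracting ... from 100" (per 100 g)
    return 100.0 - (moisture + protein + fat + ash)

def energy_kj(protein, fat, carbohydrate):
    # Conversion factors from the text: 17 kJ/g protein, 37 kJ/g fat, 17 kJ/g carbohydrate
    return 17 * protein + 37 * fat + 17 * carbohydrate

def bake_loss_percent(a, b):
    # a = batter weight before baking (g), b = weight of the baked, cooled loaf (g)
    return 100.0 * (a - b) / a

def specific_volume(volume_cm3, weight_g):
    return volume_cm3 / weight_g

# Illustrative values only; the measured data are reported in the Results tables.
carb = carbohydrate_by_difference(moisture=45.0, protein=6.0, fat=3.5, ash=1.5)
kj = energy_kj(protein=6.0, fat=3.5, carbohydrate=carb)
print(f"energy: {kj:.0f} kJ/100 g = {kj * 0.239:.0f} kcal/100 g")
print(f"bake loss: {bake_loss_percent(a=240, b=205):.1f} %")     # 240 g of batter per pan
print(f"specific volume: {specific_volume(450, 205):.2f} cm3/g")
```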
Evaluation of Textural Properties The texture profile (TPA test) of fresh (2 h) and stored (for 24 and 72 h after baking) crumbs of GFs were analysed using a TA.HD Plus Texture Analyser (Stable Micro Systems Ltd., Godalming, UK) equipped with a 30-kg load cell. The middle bread slices of 25-mm thickness underwent a double compression cycle up to 40% deformation of its original height with a 35-mm flat-end aluminium compression disc (probe P/35). The selected settings were as follows: pre-test/test/post-test speed, 2.0 mm/s, relaxation time, 5 s, force, 10 g, and trigger, mode auto. Each slice was compressed twice to give a two-bite texture profile curve [29], from which the following textural parameters were obtained: hardness, springiness, chewiness, cohesiveness, and resilience, as calculated by the software of the texturometer. Six replicates were analysed for each kind of fresh and stored GF. The total phenolic content (TPC) was determined with the use of the Folin-Ciocalteu reagent based on the method described previously by Horszwald and Andlauer [30]. Methanol extracts were obtained from 200 mg of freeze-dried GF and 100 mg of BLP with 1 mL of 67% methanol. Samples were subjected to ultrasonic vibration (30 s) and vortexing (30 s), then were centrifuged for 10 min at 13,000 rpm at 4 • C. The above step was repeated five times, and the supernatants were collected into a 5-mL measuring flask. Methanol extracts were prepared in triplicate. The TPC assay was performed in microplates, and aliquots of 15 µL of methanol extracts were placed in microplate wells. Subsequently, 250 µL of the Folin-Ciocalteu reagent (previously diluted with water 1:15, v/v) was added, and the mixture was incubated for 10 min in dark at room temperature. Then, 25 µL of 20% sodium carbonate was added to each well, and the mixture was incubated for 20 min. The microplate was shaken automatically before reading, and absorbance was measured at λ = 755 nm with the Infinite M1000 PRO plate reader (Tecan Group AG, Männedorf, Switzerland). Gallic acid was used for standard calibration (0.03-1.0 mg L −1 ), and the results were expressed in mg of gallic acid equivalents (GAE) per one gram of dry matter (g DM) of GFs or BLP. Trolox Equivalent Antioxidant Capacity by ABTS Assay The Trolox Equivalent Antioxidant Capacity (TEAC) by the 2,2 -azino-bis(3-ethylbenzothiazoline-6-sulfonic acid (ABTS) assay was performed as described by Horszwald and Andlauer [30]. To obtain an ABTS radical cation (ABTS· + ) solution with an absorbance value of 0.70 ± 0.02 at 734 nm, 10 mL of 7-mmoL/L aqueous solution of ABTS and 0.5 mL of 51.4-mmoL/L −1 aqueous solution of K 2 S 2 O 4 were mixed, then stored in the dark at room temperature for 16 h. Next, the ABTS· + solution (1480 µL) was added to 20 µL of methanol extracts of BLP and GF. For the analysis in the microplates, aliquots of 10 µL of sample (the methanol extracts of BLP or GF prepared as described above for the TPC assay), standards, or blanks were placed in microplate wells. The reaction and time measurements were started upon the addition of 270 µL of the ABTS· + solution. The reaction was carried out at 30 • C in dark for 6 min. After the reaction, the absorbance was measured at 734 nm with a microplate reader. Trolox was used for standard calibrations (0.25-1000 µmol/L −1 ), and the results were expressed in µmol Trolox g −1 DM of GFs or BLP. 
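A sketch of how readings from the Folin-Ciocalteu calibration described above can be converted to mg GAE per g of dry matter. The calibration range, the 200-mg sample mass and the 5-mL extract volume are taken from the text; the absorbance readings, the linear-fit approach and the dilution factor are assumptions introduced only for illustration.

```python
import numpy as np

std_conc = np.array([0.03, 0.10, 0.25, 0.50, 0.75, 1.00])    # mg GAE/L (range from the text)
std_abs  = np.array([0.04, 0.11, 0.27, 0.52, 0.76, 0.99])    # A755, hypothetical readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)           # linear calibration curve

def tpc_mg_gae_per_g_dm(sample_abs, dilution_factor, extract_volume_l, sample_mass_g):
    conc_in_well = (sample_abs - intercept) / slope            # mg GAE/L in the diluted extract
    conc_in_extract = conc_in_well * dilution_factor           # mg GAE/L in the undiluted extract
    return conc_in_extract * extract_volume_l / sample_mass_g  # mg GAE per g dry matter

# 200 mg of freeze-dried bread extracted into a 5-mL flask (from the text); the
# dilution factor stands in for whatever dilution keeps the reading on the curve.
print(tpc_mg_gae_per_g_dm(sample_abs=0.45, dilution_factor=20,
                          extract_volume_l=0.005, sample_mass_g=0.200))
```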
Trolox Equivalent Antioxidant Capacity by DPPH Assay The TEAC by 2-diphenyl-picryl-hydrazyl (DPPH) radical scavenging assay was performed according to Horszwald and Andlauer [30]. To obtain the DPPH solution absorbing in the range from 0.95 to 1.10 at λ = 517 nm, 10 mg of DPPH was dissolved in 250 mL of 80% methanol. The DPPH solution was freshly prepared before analysis. For analysis, 20 µL of methanol extracts of BLP and GF (described in Section 2.4.1), blanks or standard were placed into microplate wells, and then, 300 µL of DPPH· solution was added. The reaction was performed at ambient temperature for 30 min in the dark. Trolox was used for standard calibration (0.005-0.75 mM), and results obtained were expressed as µmol Trolox Equivalents (TE) per g DM of GFs or BLP. Photochemiluminescence Assay A photochemiluminescence (PCL) assay was performed as described by Zieliński, Zielińska, and Kostyra [31]. This method was used to measure the antioxidant capacity of BLP and freeze-dried GF extracts against superoxide anion radicals generated from the luminol photosensitiser under exposure to UV light in the Photochem apparatus (Analytik Jena, Leipzig, Germany). Antioxidant activity was analysed with ACW (hydrophilic condition) and ACL (lipophilic condition) kits according to the manufacturer's protocols. For ACW, a 50-mg sample was extracted with 1 mL of water, and for ACL-a 50-mg sample was extracted with 1 mL of the MeOH and hexane mixture (4:1; v/v). The concentration of the extract solution was adjusted to ensure that the generated luminescence was within the range of the standard curve. Antioxidant capacity was calculated by comparing the delay time of the sample with the Trolox standard curve, and it was expressed in µmol Trolox g −1 DM. Evaluation of Inhibiting Activity Against AGEs The inhibiting activity against advanced glycation end-products (AGEs) was assessed using two in vitro model systems: bovine serum albumine (BSA)-glucose and BSA-methylglyoxal (MGO). The extraction and incubation procedures were adopted from Szawara-Nowak et al. [32]. Briefly, 150 mg of freeze-dried sample was extracted with 67% methanol by shaking at 25 • C for 40 min using a thermomixer (Thermomixer, Eppendorf, Poland). The supernatant obtained after the centrifugation was evaporated to dryness under nitrogen, and the dry residue was dissolved in phosphate buffer (0.1 M, pH 7.4). 0.5 mL of the obtained solution was incubated with 1 mL of the mixture containing BSA (10 mg/mL) and sodium azide (0.1 mg/mL) in phosphate buffer (0.1 M, pH 7.4) and appropriately D-glucose or MGO. For the measurement, 250 µl of the reaction mixture was placed into wells (microplate 96-wells, black, Porvair). The fluorescent intensity of λ excitation 330 nm and λ emission 410 nm (BSA-glucose), and λ excitation 340 nm and λ emission 420 nm (BSA-MGO) were measured. For each extract, the test was run in triplicate. A 1 mM of aminoguanidine was used as a positive control. The results were presented as a percentage of AGEs inhibitory activity. Statistical Analysis Unless otherwise stated, the data reported in all the tables are mean values and standard deviations of triplicate observations. Generally, the differences between experimental GFs were analysed with an unpaired t-test with Weich's correction (p < 0.05), except for the differences between GFs caused by storage time that was analysed with the one-way ANOVA, using GraphPad Prism version 8.0.0 for Windows, GraphPad Software (San Diego, CA, USA). 
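A brief sketch of the two statistical comparisons named above (an unpaired t test with Welch's correction between GFC and GFB, and a one-way ANOVA across storage times), using SciPy rather than GraphPad Prism; all measurement values below are placeholders.

```python
import numpy as np
from scipy import stats

gfc = np.array([2.01, 1.95, 2.08])   # e.g. specific-volume replicates, hypothetical
gfb = np.array([2.60, 2.55, 2.71])

t, p = stats.ttest_ind(gfc, gfb, equal_var=False)    # Welch's unpaired t test
print(f"GFC vs GFB: t = {t:.2f}, P = {p:.4f}")

fresh = [13.2, 13.9, 13.5]                           # crumb hardness (N), hypothetical
h24   = [28.1, 27.4, 29.0]
h72   = [35.2, 36.8, 34.9]
f, p = stats.f_oneway(fresh, h24, h72)               # one-way ANOVA over storage time
print(f"storage effect: F = {f:.1f}, P = {p:.4f}")
```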
Proximal Chemical Composition and Energy Value of Experimental Gluten-Free Breads The BLP applied in the present study was previously characterised in terms of the proximal chemical composition and the profile of bioactive compounds [24] and was shown to be a good source of proteins. Additionally, a recent study by Sedlar et al. [7] indicated that, among the analysed vegetable byproducts, the broccoli leaves were characterised with the highest content of protein. In comparison with a GFC, the incorporation of BLP into the GFB resulted in a significant (p < 0.05) increase in the protein content (Table 2); however, in practical terms, it was a relatively small increase (1.16 g/100 g). Besides proteins, BLP was abundant in mineral compounds [24]; therefore, a significant (p < 0.05) enrichment in minerals was determined in experimental GFB, compared with GFC ( Table 2). The obtained results are in agreement with the study by Ranawana et al. [33], who investigated the effect of the addition of the freeze-dried vegetable powder on the nutritional and physicochemical properties of wheat bread. The authors indicated that the addition of freeze-dried broccoli significantly (p < 0.05) increased the protein, fat, and total mineral contents in oil-free wheat bread. According to Betoret and Rosell [34], the particle size of vegetable powder affects significantly its physicochemical properties. The concentration of macronutrients (proteins and fat) in the powder of Brassica napobrassica leaves progressively increases as the particle size was reduced (<125 µm); conversely, a fraction of larger particle size (>1 mm) was abundant in dietary fibre. Broccoli leaves used in the present study, after being freeze-dried and ground, were sieved to obtain a homogenous powder of average particle size below 0.60 mm. Therefore, even if the physical properties of BLP could have an impact on the nutritional value of the enriched product, BLP could be recommended as an ingredient enriching GF in nutritional compounds, as similarly indicated by Sedlar et al. [7]. The energy value of GFB was higher than that of unsupplemented GFC (Table 2), mainly due to a higher fat content delivered by BLP [7]. Broccoli leaves are a rich source of polyunsaturated fatty acids, mainly α-linolenic, linoleic, and palmitic acids [35,36], which is their additional important nutritional benefit. However, the profile of fatty acids was not analysed in this study and requires further confirmation. Technological Parameters of Experimental Gluten-Free Bread The effect of BLP on the technological parameters of experimental GFs is shown in Table 3. Moreover, the differences in the appearance between the GFC and GFB can be perceived in Figure 1. The specific volume of the GFC determined in the present study was similar to the results reported previously [26]; however, in comparison with wheat bread, the value of this parameter was meaningfully lower [37]. A specific volume of a conventional wheat bread ranged from 3.5 to 5.5 cm 3 /g [38,39], while its value for GF was meaningfully reduced and fluctuated around 2 cm 3 /g, depending on the ingredients used [26,40]. The use of BLP in the experimental GF formulation influenced the technological parameters of GFB. Compared with a GFC, the specific volume of GFB rose by approximately 30% (Table 3). Besides that, a significant decrease in the bake loss was detected in GFB. 
The specific volume is one of the most important technological parameters of bread quality; however, it cannot be considered as the most important quality factor itself. In breads baked in pans, high values of specific volume, usually associated with proper aeration of the bread loaves, are required to obtain products able to satisfy the consumers [41]. Therefore, the appropriate gas bubble entrapment together with stabilisation of the foam structure are also essential to achieve an acceptable texture, in which the resulting pores should be small, regular, and spread regularly across the crumb. On the other side, changes determined in both parameters could result from the BLP characteristics as physical parameters of bread depending on the type and amount of protein used in dough formulation, as well as on its interaction with starch. A recent study by Sedlar et al. [7] demonstrated that proteins obtained from broccoli leaves exhibited important functional properties, including a high solubility in the alkaline condition, favourable emulsifying abilities, and water absorption capacities, as well as foaming capacity and stability. Therefore, it is possible that BLP, due to high protein content, could influence the stability of the batter during baking. Consequently, it is possible that proteins of BLP could potentially form a stable network, somewhat mimicking gluten properties. However, the study by Ranawana et al. [33] showed contrary results, indicating that wheat bread with freeze-dried broccoli powder (10%) exhibited a poor degree of leavening and was, therefore, the smallest, compared with loaves of bread with other vegetable powders. The authors explained the reduced volume of broccoli bread by the activity of enzymes present in the cruciferous vegetables [42]. Whilst in the present study, the BLP was prepared from thermally pretreated leaves (blanched). Thus, these enzymes were inactivated, creating optimal conditions for yeast fermentation that resulted in the improvement of the technological quality of GFB. The results of the instrumental colour analysis of experimental GFs are presented in Table 3. The application of BLP influenced significantly (p < 0.05) all the analysed parameters of colour in the experimental bread. Both the crust and crumb of GFB were much darker (50.41 and 34.92, respectively) than the crust and crumb of GFC (75.89 and 71.58, respectively), which were pale and whitish. The crust and crumb colour strongly influence consumer choices [43]. Therefore, the darkening of starchy GFs is desirable and beneficial, as usually, they tended to have a light-coloured crust [26] that, in comparison with wheat flour counterparts, is perceived as unattractive. The visual colour difference between the typically creamy GFC and greenish-brown GFB (Figure 1) was evidenced by a colorimetric analysis. Contrary to the positive a* value indicating a slightly reddish colour of the GFC, a negative value of this coordinate was determined for the crust (−3.65 ± 0.31) and crumb (−1.47 ± 0.14) of GFB, indicating its greenness. The values of the b* coordinate were positive for both experimental GFs; however, GFB-in particular, its crust-was significantly more yellow than GFC ( Table 3). The differences in the colours determined between the experimental GFs resulted from applied freeze-dried BLP, which was characterised with an intensive green hue (a* = −9.10 ± 0.03; b* = 27.67 ± 0.14). 
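For readers who want to express the reported CIELab differences as a single colour distance, a minimal ΔE*ab sketch is shown below. The L* values and the GFB a* value come from the text; the remaining coordinates are placeholders, since they are reported only in Table 3.

```python
import math

def delta_e(lab1, lab2):
    # Euclidean distance in CIELab space (ΔE*ab)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

gfc_crust = (75.89,  2.0, 20.0)    # (L*, a*, b*); a* and b* are hypothetical
gfb_crust = (50.41, -3.65, 25.0)   # b* is hypothetical
print(f"ΔE crust = {delta_e(gfc_crust, gfb_crust):.1f}")
```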
Among different techniques of dehydration, freeze-drying contributes to the preservation of colour and appearance and to minimise the degradation of thermolabile compounds, many of them responsible for the aromas and nutritional value of vegetables [44]. Many studies have demonstrated that the use of pigmented by-products of vegetable processing in bakery gluten-free products affected the colour parameters of the final product [12,13]. Therefore, it was expected that the green BLP applied in the present study would confer the colour characteristics of the supplemented GFB. Similarly, our previous study, where BLP was used to partly replace corn and potato starches in gluten-free sponge cake, resulted in a vivid-green end product [23]. However, in the confectionery product, the vividly green colour of the BLP-supplemented sponge cake was maintained mainly due to the high presence of sugar, while in GFB, since the content of sugar was much lower, a more brownish product was obtained. Textural Properties of Fresh and Stored Experimental Gluten-Free Bread The texture profile of crumb of fresh (two hours after baking) and stored (24 and 72 h) experimental GFs is presented in Table 4. Fresh GFC and GFB were similarly soft (13.21 and 13.80 N, respectively); however, fresh GFB was significantly (p < 0.05) springier and more cohesive than GFC. Besides, the chewiness of the GFB was over 50% higher compared with the GFC (Table 4). The chewiness informs about the time required to mastication a piece of food before it is swallowed. The incorporation of BLP into the gluten-free formulation prolonged the chewing time for the GFB crumb. GFC-Control gluten-free bread, GFB-Gluten-free bread enriched with broccoli leaf powder, and BLP-Broccoli leaf powder. a,b -Within each row, and for each factor, values with the same letter do not differ significantly (p < 0.05) when subjected to the unpaired t-test with Weich's correction. A,B,C -Within each column, and for each factor, values with the same letter do not differ significantly (p < 0.05) when subjected to a one-way ANOVA analysis. In general, the storage influenced negatively the texture properties of GFs, independently of the BLP (Table 4). After 24 h, the crumb of experimental GFs was more than two-time harder in comparison with the fresh crumbs. Longer storage (72 h) resulted in a further significant (p < 0.05) increase in the hardness of the GFC and GFB. Moreover, both stored GFs were significantly less cohesive, and their resilience was lower than in the case of the fresh samples ( Table 4). The application of BLP in the gluten-free formulation caused a significant reduction of crumb springiness; thus, the GFB became very crumbly. However, in comparison with fresh GFB, the chewiness of stored crumb did not change meaningfully, contrary to the GFC stored for 72 h (Table 4). Ranawana et al. investigated the effect of the addition of freeze-dried vegetables (carrot, tomato, beetroot, and broccoli) on the storage properties of wheat bread with [45] and without oil [33]. The authors indicated that, among analysed vegetable breads, the broccoli bread was significantly (p < 0.05) harder compared to the control wheat bread both on the day of baking and during storage. However, the deterioration in texture attributes was more pronounced in the oil-free wheat bread [33]. Typically, a GF is characterised by a compact crumb with low cohesiveness and elasticity and, thus, high brittleness [46]. 
The textural characteristics of GF are strongly influenced by the ingredients used. Thus, if gluten is absent, the improvers (hydrocolloids, gums, and enzymes) become an obligatory element mimicking its functions [47,48], yielding a GF of satisfactory technological quality. Among them, fat-mimetic ingredients could be considered for improving texture, sensory characteristics, and shelf-life of baked products [49]. Antioxidant Capacity of Experimental Gluten-Free Bread The results of the antioxidant capacity of the BLP and experimental GFs are presented in Table 5. The GFC was characterised by a relatively low antioxidant activity evaluated using all assays. Contrary, the BLP was found as a good source of TFC, consequently exerting a high antioxidant capacity. Freeze-drying, which was used to prepare BLP, is a well-known method that allows preserving the nutritional value of the starting material, including bioactive compounds [25]. Therefore, as expected, the fortification of GF with BLP significantly (p < 0.05) increased the antioxidant potential of experimental GFB. Among broccoli parts, leaf tissue had the highest TFC and antioxidant activity (DPPH), compared with florets and stems [19]. ABTS, DPPH, and PCL-ACW assays are associated with the activity of hydrophilic compounds like polyphenols, which have confirmation in TFC. On the other hand, the PCL-ACL assay informs about the activity of lipophilic compounds, like fat-soluble vitamins and carotenoids. The results obtained by Ranawana et al. [33,45] indicated that freeze-dried broccoli significantly increased the vitamin E (α-and γ-tocopherols) content of broccoli breads compared with the wheat bread. Moreover, the authors showed that broccoli bread contained the β-carotene and lutein that are characterised by a strong antioxidant activity. BLP was characterised by very high PCL-ACL activity, and consequently, this assay was the highest among all analysed in GF, suggesting that BLP can be a good source of lipophilic compounds, as similarly suggested by other authors [50]. However, it was not analysed in this study and requires further investigation. A similar finding of increased antioxidant capacity after BLP incorporation was obtained in our previous study with BLP-fortified mini sponge cakes [24]. Moreover, the high antioxidant capacity of broccoli and its by-products was repeatedly reported in the literature [21,51]. Lefarga et al. indicated that wheat-based bread fortified with broccoli by-products was characterised by significantly increased TFC and antioxidant capacity in comparison to control bread without scarifying the sensory quality [21]. Interestingly, the authors reported that the TFC and antioxidant capacity increased after in vitro digestion, suggesting that the health-promoting potential of products fortified with broccoli by-products is even higher. Since the nutritional quality of GFs is relatively low, several successful attempts were performed aiming to improve the nutraceutical potential of these products, also including the vegetable by-products [12,52]. Our study also confirmed that underestimated by-products of broccoli processing can be a valuable additive to GF improving its nutritional and functional quality. Anti-AGEs Activity of Experimental Gluten-Free Bread The presence of phenolic compounds, besides the improvement of antioxidant potential, can contribute also to other bioactive activities. 
The advanced glycation end-products (AGEs) are formed continuously in the human body, the intensity of AGEs formation is increased by hyperglycemia and oxidative stress status [53]. Moreover, research has shown that dietary AGEs are important contributors to the pool of AGEs formed in the human body [54]. Hence, the challenge is to evaluate food products with natural inhibitors of the AGEs formation. The AGEs inhibitory activity was monitored in two model systems of BSA-MGO and BSA-glucose and presented in Figure 2. We found that extracts of BLP had high activity against the AGE formations (83.53%) in the BSA-MGO study, almost the same as the reference material of aminoguanidine (84.03%). The obtained data were in agreement with Sotokawauchi et al. [55], who noted the positive effect of broccoli sprouts decreased in the AGE formation. Additionally, a high effectiveness against AGE formation was noted in GFs after the addition of BLP (77.60%) in comparison to the control (67.47%). Therefore, the incorporation of BLP resulted in 1.15 times higher anti-AGE activity of the designed gluten-free product. In this study, we also observed that BLP showed a strong antiglycative effect (p < 0.05) in a BSA-glucose system, as is demonstrated in Figure 2. Similarly, in this model, the anti-AGE activity of BLP was high and accounted for 82.37%. No significant difference was observed between samples of GFC and GFB, reaching 49.97 and 49.20%, respectively. The results are presented as the mean ± SD (N = 3). Bars with different letters denote significant differences (p < 0.05) when subjected to Tukey's test. The results obtained in this study are in agreement with other studies utilising byproducts in bread formulation to improve the anti-AGE activity. The study of Peng et al. [56] showed that the incorporation of grape seeds can reduce the level of Nε-(carboxymethyl)lysine (CML), a common advanced glycation end-product in bread. Another solution to reduce the AGEs in bread can be the application of gluten-free flour with a higher content of bioactive compounds. The study of Szawara-Nowak et al. [32] showed that buckwheat bread has higher inhibitory effects against the formation of AGEs than the control one. Conclusions The present study investigated the suitability and functionality of BLP as a GF component based on an analysis of the nutritional, technological, and functional properties of the developed product. Based on the results obtained, it can be noticed that BLP can be successfully used as an additive in gluten-free bakery products. It improved the nutritional value and the technological properties of the obtained bread. In particular, the specific volume and the bake loss of GFB have been significantly improved, compared to GFC. Additionally, the crumb of fresh GFB was as soft as of the GFC, although the inclusion of BLP resulted in the deterioration of the other textural parameters. However, what needs to be emphasised is that BLP improved the antioxidant potential and inhibitory activity against the AGE formations of GFB. In conclusion, the obtained added-value baked product could provide health-promoting benefits for subjects on a gluten-free diet; however, to validate this concept and verify the positive health effects of GFB, human intervention studies are needed.
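The percentage-inhibition figures discussed above are conventionally derived from fluorescence readings of sample, control and blank incubations; the exact formula is not spelled out in the text, so the form below is an assumption, and the readings are hypothetical. The last line simply checks the reported ratio of GFB to GFC anti-AGE activity in the BSA-MGO system.

```python
def age_inhibition(f_sample, f_sample_blank, f_control, f_control_blank):
    # Assumed conventional formula: inhibition relative to the uninhibited control
    return 100.0 * (1 - (f_sample - f_sample_blank) / (f_control - f_control_blank))

print(age_inhibition(f_sample=320, f_sample_blank=40,
                     f_control=1250, f_control_blank=45))   # hypothetical readings

print(round(77.60 / 67.47, 2))   # -> 1.15, matching "1.15 times higher anti-AGE activity"
```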
Exploration of unpredictable environments by networked groups Abstract Information sharing is a critical task for group-living animals. The pattern of sharing can be modeled as a network whose structure can affect the decision-making performance of individual members as well as that of the group as a whole. A fully connected network, in which each member can directly transfer information to all other members, ensures rapid sharing of important information, such as a promising foraging location. However, it can also impose costs by amplifying the spread of inaccurate information (if, for example the foraging location is actually not profitable). Thus, an optimal network structure should balance effective sharing of current knowledge with opportunities to discover new information. We used a computer simulation to measure how well groups characterized by different network structures (fully connected, small world, lattice, and random) find and exploit resource peaks in a variable environment. We found that a fully connected network outperformed other structures when resource quality was predictable. When resource quality showed random variation, however, the small world network was better than the fully connected one at avoiding extremely poor outcomes. These results suggest that animal groups may benefit by adjusting their information-sharing network structures depending on the noisiness of their environment. Social animals often share information relevant to foraging behavior, habitat choice, and other critical decisions (Krause and Ruxton 2002;Gordon 2010;Seeley 2010;Sumpter 2010). The pattern of sharing can be modeled as a network in which nodes are group members and edges connect individuals that share information with one another (Wey et al. 2008;Krause et al. 2009;Blonder and Dornhaus 2011;Tokuda et al. 2012;Waters and Fewell 2012;Mann et al. 2012;Cantor and Whitehead 2013;Greening et al. 2015;Pinter-Wollman 2015;Brent, 2015). Sharing may occur via signals produced by natural selection to convey information (e.g., alarm calls (Hollén and Radford 2009), recruitment to food sources (Czaczkes et al. 2015), or fertility signals (Le Conte and Hefetz 2008)) or by incidental cues that animals use opportunistically to guide their behavior (e.g., imitating the actions of a successful forager (Galef and Giraldeau 2001) or responding to the movements of a fellow group member (Meunier et al. 2006;Ward et al. 2008)). The structure of an information-sharing network can affect the decision-making performance of individual members as well as that of the group as a whole (Krause et al. 2009;Sih et al. 2009;Croft et al. 2011;Bode et al. 2012;Pinter-Wollman et al. 2014). For example, harvester ant colonies Pogonomyrmex barbatus have a minority of workers that interact significantly more often with others in the nest (Pinter-Wollman et al. 2011). This skewed distribution of connections expedites information flow, enhancing the colony's ability to make fast and accurate decisions. Analysis of animal social networks can aid in deciphering underlying mechanisms of collective decision making (Wey et al. 2008). Animal groups can vary in the degree to which each member is directly connected to others. For example, a group may be relatively well mixed, with all members equally likely to interact with one another, or it may be subdivided into clusters based on relatedness, age, or sex, with members more likely to interact within than across clusters (Krause et al. 2014). 
Methods of communication may also vary from broadcast signals that rapidly spread information throughout the group (e.g., Blumstein and Daniel 2004), to more local signals detected by only one or a few members (e.g., Richardson et al. 2007). The resulting differences in network structure are likely to affect how well individuals gather accurate information about their environment. Insofar as individuals benefit from the rapid spread of important information, we might expect the best network structure to be a fully connected one, in which each member can directly transfer information to all other members. Such a network, however, would also rapidly spread inaccurate information that can result from individual assessment errors, leading group members to make suboptimal choices. Thus, an optimal network should balance sharing of currently available knowledge with opportunities to gather new information for a more accurate assessment of the environment. In fact, computer simulations show that less connected network structures can collectively outperform more connected ones in complex environments where the best option is hard to discover (Lazer and Friedman 2007;Mason et al. 2008;Mason and Watts 2012). This is because a fully connected network allows the rapid spread of information about easily discovered suboptimal options, settling everyone's choice before the best option becomes known. In contrast, slower information spread in sparser networks makes them less likely to get stuck on suboptimal local peaks before finding the global optimum. These results suggest that the relative performance of different network structures depends highly on the environment. In this study, we examined how social network structure affects a group's ability to discover resource peaks. We tested idealized structures that differ in features important to real animal social networks, particularly the degree of local clustering and the number of fellow group members directly contacted by each individual (Pinter-Wollman et al. 2014;Krause et al. 2014). Our goal was to explore the effects of these general network attributes on a group's ability to thoroughly explore its environment. Although we did not model a specific biological context, the problem we examined is similar to that faced by a social group looking for food, water, nesting sites, or other resources that are distributed unevenly in the environment. In addition to resource distribution, we further explored the role of resource predictability. If animals make error-prone assessments of resource quality, or if the environment varies randomly over time, then an individual's assessment of current gains may have limited ability to predict future gains. For example, consider 2 foraging areas, one of which is more profitable than the other. When future gains are perfectly predicted by current experience, it is relatively easy to differentiate them, because gains at one area always exceed those at the other. When gains are less predictable, however, it becomes hard to discriminate between sites because the inferior one can sometimes be more profitable than the superior one. This could affect the value of information sharing and thus the efficacy of different network structures in maximizing resource acquisition. To test this possibility, we manipulated environmental predictability and investigated how it affected the performance of different network structures. 
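A small numerical illustration of the two-foraging-area argument above: when payoffs are noisy, a sample from the inferior site can exceed a sample from the superior one. The payoff means below are arbitrary assumptions; the noise standard deviation of 10 matches the value used later in the simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
superior = rng.normal(60, 10, n)   # noisy returns from the better site (mean assumed)
inferior = rng.normal(50, 10, n)   # noisy returns from the poorer site (mean assumed)
print(f"P(inferior sample looks better) ≈ {np.mean(inferior > superior):.2f}")   # ≈ 0.24
```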
We first measured collective performance of 4 distinct network structures exploring 3 different payoff distributions in a perfectly predictable environment. We then repeated the same analysis in an environment made less predictable by the addition of assessment error. Purpose The purpose of this study was to explore the relative performance of different network structures in situations where group members receive unreliable information about their environment. Our model was based on the self-, social-, and exploration-based choices (SSEC) model developed by Goldstone et al. (2008) and Mason et al. (2008). Our methods (described below) followed theirs, except where noted. Network types We used 4 types of networks: fully connected, small world, lattice, and random ( Figure 1). In the fully connected network, every agent was connected to every other agent. In the small world and lattice networks, all agents were connected to their immediate 2 neighbors, and some agents were also connected to a third agent at either a far distance (small world) or a close distance (lattice). In the random network, agents were connected randomly. Each network had 10 agents and a total of 12 connections, except the fully connected network, which had 45 connections. Payoff distributions In each round, an agent chose a number between 0 and 100. Each number was associated with a specific payoff according to 1 of 3 continuous payoff distributions: unimodal, trimodal, and needle ( Figure 2). Each distribution had a single global maximum, and thus 1 "correct" choice, but the trimodal and needle distributions had additional lower peaks. All 3 distributions can be mathematically described as: The parameter values for each distribution are summarized in Table 1. The unimodal, trimodal, and needle payoffs represent successively greater challenges to discovery of the best resource: for the unimodal distribution, agents will find the peak as long as they move up a gradient of performance. For the trimodal distribution, they face the risk of getting stuck on a local peak and missing the global maximum. For the needle distribution, the global maximum is still harder to find because it is much narrower than the competing local maximum. In the first experiment, the payoff function determined the exact payoff received by an agent choosing value x. In the second experiment, the function's output was added to a noise term drawn from a normal distribution with mean zero and standard deviation 10. This random component modeled resource unpredictability resulting from assessment noise or environmental change over time. Agent strategies On every round, each agent probabilistically chose 1 of 3 strategies: Stay: The agent chooses the same number it did on the previous round. Best: The agent chooses the number that paid the most among its directly connected neighbors in the previous round. Random: The agent chooses a number randomly. In the first round, all agents used the random strategy. As the simulation progressed, agents updated their probabilities of choosing each strategy according to their own payoff history. That is, the higher the payoffs previously earned using a given strategy, the more likely that strategy was to be used again. Probabilities were calculated from a baseline of 45% for each of the first 2 strategies and 10% for the third. Process overview and scheduling Each simulation started with creation of 1 of the 4 network types. 
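The analytic form of the three payoff functions appears to have been lost from the extracted text ("All 3 distributions can be mathematically described as:" is followed directly by the reference to Table 1). Judging from the parameter descriptions that follow (a_i sets the payoff of peak i, b_i is inversely related to its width, and c_i is its position), a sum of Gaussian-shaped peaks is a plausible reconstruction. The sketch below combines that assumed payoff function with a simplified version of the agent scheduling just described; the peak parameters, the fixed strategy probabilities (the paper updates them from each agent's payoff history), and all helper names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def payoff(x, peaks, noise_sd=0.0, rng=None):
    """Assumed payoff: sum of Gaussian-shaped peaks; peaks is a list of (a_i, b_i, c_i)."""
    x = np.asarray(x, dtype=float)
    value = sum(a * np.exp(-b * (x - c) ** 2) for a, b, c in peaks)
    if noise_sd > 0:
        rng = rng if rng is not None else np.random.default_rng()
        value = value + rng.normal(0.0, noise_sd, size=value.shape)
    return value

# Hypothetical peak parameters in the spirit of Table 1 (which is not reproduced here).
TRIMODAL = [(50, 0.01, 20), (45, 0.01, 55), (40, 0.01, 85)]

def run_once(neighbors, peaks, rounds=15, threshold=8, noise_sd=0.0,
             probs=(0.45, 0.45, 0.10), rng=None):
    """One session; returns the fraction of agents ending within `threshold`
    of the global maximum (8 for unimodal/trimodal, 4 for needle)."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(neighbors)
    choices = rng.integers(0, 101, size=n)          # round 1: all agents choose randomly
    observed = payoff(choices, peaks, noise_sd, rng)
    for _ in range(rounds - 1):
        new_choices = choices.copy()
        for i in range(n):
            strategy = rng.choice(["stay", "best", "random"], p=probs)
            if strategy == "best":                  # copy the best-paying neighbour (or self)
                group = list(neighbors[i]) + [i]
                new_choices[i] = choices[group[int(np.argmax(observed[group]))]]
            elif strategy == "random":
                new_choices[i] = rng.integers(0, 101)
        choices = new_choices
        observed = payoff(choices, peaks, noise_sd, rng)
    xs = np.arange(101)
    best_x = xs[int(np.argmax(payoff(xs, peaks)))]  # noise-free global maximum
    return float(np.mean(np.abs(choices - best_x) <= threshold))

# Fully connected group of 10 agents (45 connections), as described in the text.
fully_connected = {i: [j for j in range(10) if j != i] for i in range(10)}
print(run_once(fully_connected, TRIMODAL, noise_sd=10.0))
```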
It then progressed through 15 rounds, during which each agent in the network chose a decision strategy and then used it to make a choice. After each round, the agents updated their strategy probabilities according to the outcome of their choice. After every 15-round session, a new network was generated and all the parameters were re-initialized. For each network structure, 500 15-round simulations were run for each of the 3 payoff distributions. At the end of each simulation, we measured the group's performance by counting the number of agents that came within a specified distance of the global maximum. This distance was 8 for the unimodal and trimodal distributions and 4 for the needle distribution. We conducted 2 experiments. In the first experiment, there was no noise, and we measured performance of all 4 network types for all 3 payoff distributions. In the second experiment, we added noise and similarly measured network performance. Statistical analysis Data were analyzed via Kruskal-Wallis, Nemenyi, Mann-Whitney-Wilcoxon, and v 2 tests, as detailed in the results. The statistical package R (v. 3.1.1) was used for all analyses. Results In the absence of noise, the fully connected network outperformed the other networks for the unimodal and trimodal distributions and performed statistically indifferently for the needle distribution ( Figure 3A). That is, agents in the fully connected network reached Table 1). Parameters a i , b i , and c i determine, respectively, the payoff for peak i, the variance around the peak, and its position. Parameter b is inversely related to variance, so larger values indicate narrower peaks. the global payoff maximum at least as often as agents in the other networks, regardless of the distribution. As expected, performance varied across payoff distributions, with the highest proportion of agents finding the peak in the unimodal distribution, a somewhat lower proportion doing so in the trimodal distribution, and a much lower proportion succeeding in the needle distribution. Agents most often used the Best strategy, and very few used the Random strategy ( Figure 4A). Payoff distribution had little effect on strategy choice, except that agents were more likely to choose the Stay strategy under the trimodal distribution. Strategy choice varied little among the different network types ( Figure 4A). We performed the second experiment to determine whether the dominance of the fully connected network would persist in a noisy environment. The results showed that it did, except for the trimodal payoff distribution, where the small world network did about as well ( Figure 3B). Looking more closely at the trimodal case, the 2 network types had the same median performance (Nemenyi test: q ¼ 1.9, P ¼ 0.20), but a significantly different distribution of performance (Chi-squared test: v 2 ¼ 211.0, df ¼ 9, P < 0.01) ( Figure 5B). The fully connected network often performed very well-in one-third of simulations over 80% of agents reached the global maximum. However, it also often missed the peak completely-in another one-third of simulations fewer than 10% of agents reached the peak. In contrast, the small world network rarely performed at either extreme. Instead, in over two-thirds of simulations 50-80% of agents reached the peak. These distributions are different from those seen in the environment without noise, where both network types showed similar left-skewed frequency distributions ( Figure 5A). 
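As an illustration of the frequency-distribution comparison just described (ten performance classes compared with a chi-squared test; df = 9 in the paper), the sketch below bins simulated stand-in values, not the study's data, and runs the same kind of test. The stand-in distributions are assumptions chosen only to mimic the bimodal versus single-peaked shapes described for Figure 5B.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
# Stand-ins for per-run performance (proportion of agents at the global maximum):
full_net  = np.clip(np.where(rng.random(500) < 0.5,
                             rng.normal(0.05, 0.08, 500),
                             rng.normal(0.90, 0.08, 500)), 0, 1)   # bimodal pattern
small_wld = np.clip(rng.normal(0.65, 0.15, 500), 0, 1)             # single moderate peak

bins = np.linspace(0, 1, 11)                        # ten performance classes
counts = np.vstack([np.histogram(full_net, bins)[0],
                    np.histogram(small_wld, bins)[0]])
counts = counts[:, counts.sum(axis=0) > 0]          # drop empty classes before testing
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi-squared = {chi2:.1f}, df = {dof}, P = {p:.3g}")
```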
Strategy choice followed the same pattern seen in the absence of noise (Figure 4). For the noisy environment, we also looked at how performance changed over 15 rounds. With the unimodal payoff distribution ( Figure 6A), all networks showed improved performance over time, with the fully connected network improving more rapidly at first but reaching a plateau after 5-6 rounds. The small world network eventually caught up in performance, and the lattice and random networks lagged somewhat behind. A similar pattern was seen with the trimodal distribution, but the plateau was lower and was reached more slowly ( Figure 6B). For the needle network, all networks started at a low level of performance and declined similarly over the 15 rounds ( Figure 6C). Discussion The principal result of this study is that a fully connected network is always at least as good as other network structures at maximizing , and Random (dark gray) strategies, for each combination of payoff distribution and network type. For each combination, the number of times each strategy was used was summed over all agents, rounds, and runs and the proportion of each strategy was calculated. The first round (when all agents were required to use the Random strategy) was not included in the calculation. payoff, regardless of how resources are distributed in the environment. This differs from the findings of earlier studies that used the same network topologies and payoff distributions examined by us Mason et al. 2008). Those studies reported that groups are better at finding obscure global peaks when their information-sharing networks have high levels of local connectivity: i.e., clusters of individuals that are well connected with each other but weakly connected to members of other clusters. This clustering is argued to enhance exploration by dividing the group into relatively independent subsets that more effectively search the space of possible solutions. That is, each subset has time to find distinct solutions rather than being rapidly converted to the first local peak that is found. Thus, according to these studies, the fully connected network performs best for the unimodal payoff, in which the single peak can be easily found with relatively little exploration. The more clustered small world network does best for the more challenging trimodal distribution, whereas the highly clustered lattice network does best for the needle distribution, where the hard-to-find global maximum places a premium on thorough exploration. Our simulations did not replicate the pattern seen in these previous studies Mason et al. 2008). Instead we found that the fully connected network, on average, performed as well as or better than the other networks for all distributions. We saw a similar pattern to the earlier studies for the unimodal case, but Figure 5. Frequency distribution of performance (i.e., proportion of agents at the global maximum) when noise was absent (A) or present (B). When noise was absent, both the fully connected and small world networks showed leftskewed frequency distributions, though the patterns were different (v 2 9 ¼ 244.0, P < 0.01). When the noise was present, however, the fully connected network showed a bimodal distribution, with peaks at very high and very low performance. In contrast, the small world network showed a single peak at moderately high performance (v 2 9 ¼ 211.0, P < 0.01). Figure 6. 
Performance of the 4 network structures over 15 rounds of search in a noisy environment, for 3 different payoff distributions: (A) unimodal, (B) trimodal, and (C) needle. For the unimodal and trimodal distributions, the fully connected network initially performed better, but the small world network eventually caught up. In the trimodal distribution, however, the small world network showed less variation in performance than did the full network. For the needle distribution, all networks performed similarly, and declined in performance over time. Boxes indicate the lower and upper quartiles, and horizontal lines within boxes indicate the median. Brackets indicate the range, except for outliers (omitted for clarity). a very different outcome for the needle distribution, where all network types performed at a similarly low level. For the trimodal case, we saw some advantage for the small world network, but different from that seen in the previous work, which found that the small world network rose in performance more rapidly in early rounds. In our simulations, the median performance of the small world network did not exceed that of the fully connected network at any point. Instead, we found that it achieved a lower variance in performance, consistently achieving a moderately good outcome without either of the extremes that were common for the fully connected network. In short, the fully connected network achieved the best average performance for all distributions, but the small world network showed lower variance in performance for a more challenging payoff distribution (trimodal). We attribute the difference between our results and those of Goldstone et al. (2008) and Mason et al. (2008) to their use of different distributions of local and global maxima for different networks. Specifically, they placed the global maximum for the small world network in the middle of 2 local maxima and relatively close to them. Therefore, when agents reached the local maxima, they could easily move on to the global maximum. In contrast, the peak for the fully connected network was far from the local maxima. Agents were therefore more likely to get stuck at the isolated local peak. Because we used the same payoff distribution for all networks, our results did not confound network effects with distribution effects. Despite the difference between our results and those of the earlier studies Mason et al. 2008), our findings also support some advantage of greater clustering in environments that reward exploration. When local maxima were present, the fully connected network performed very badly a significant proportion of the time. This can be interpreted as too-rapid propagation of the discovery of a local peak, cutting short the group's search and preventing discovery of the best solution (Lazer and Friedman 2007). This effect was most obvious for the trimodal distribution. An even more pronounced effect might have been expected for the needle distribution, with its better-hidden global maximum. This was not the case, but this may have been due to the extremely low performance of all networks for this distribution, making it difficult to distinguish relative performance. Besides the interaction between payoff distribution and network structure, our other major finding was the importance of assessment noise. In the absence of noise, the small world network showed clearly inferior performance, meaning that groups gained no advantage from the more thorough exploration afforded by highly local connections. 
High locality comes at the cost of slower propagation, because each agent has limited connectivity with agents outside its local group, and thus cannot rapidly learn if an outsider finds the best solution. When assessments are not obscured by noise, groups do better to rapidly share information in a fully connected network, regardless of payoff distribution. Our finding of a strong influence of assessment noise implies that animal groups face context-dependent trade-offs in the best way to share information. When assessment noise is low, thorough information sharing over a dense network may be best. When noise is high and getting trapped on a suboptimal local maximum is a danger, then a less-connected, small world network may be better rewarded. The latter may be especially the case when poor outcomes are disproportionately costly, making it better to reduce variance of outcomes, even at the cost of sometimes falling short of the very best performance (Kacelnik and El Mouden 2013). If the best network structure depends on environmental context, then we predict that animal groups may adaptively change their behavior to achieve different structures according to their current circumstances. Several species show evidence of different network structures across years or seasons (Smith et al. 2010;de Silva et al. 2011;Brent et al. 2013;Godfrey et al. 2013). It is not clear whether these changes have anything to do with information sharing, but there is evidence that an individual's place within a social network can influence its ability to acquire new information about its environment (Lusseau 2007;Aplin et al. 2012;Brent 2015). Our results suggest that future research would benefit from considering how network structure as a whole influences information gathering, and whether this structure varies adaptively according to environmental predictability. Funding This work was supported by the National Science Foundation (Grant 1012029).
2017-12-31T19:07:21.869Z
2016-04-28T00:00:00.000
{ "year": 2016, "sha1": "293719edd730793d305886f73992148acb669130", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/cz/article-pdf/62/3/207/23653168/zow052.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "293719edd730793d305886f73992148acb669130", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
203349327
pes2o/s2orc
v3-fos-license
Diffuse Idiopathic Pulmonary Neuroendocrine Cell Hyperplasia: Review of the Literature and a Single-center Experience Diffuse idiopathic pulmonary neuroendocrine cell hyperplasia (DIPNECH) is a rare disorder that is commonly underdiagnosed. In 2015, it was recognized by the World Health Organization (WHO) classification of lung tumors as a premalignant lesion. DIPNECH syndrome is characterized by cough, exertional dyspnea, wheezing, and, less frequently, hemoptysis. We report the clinical and histological features and imaging findings in four cases of DIPNECH from our institution (Torrejon University Hospital, Madrid, Spain) between the years 2012 and 2019. DIPNECH represents a rare and poorly understood pulmonary disorder. Our limited single-center experience shows the slow and stable evolution of the disease. However, some exceptional cases may progress poorly if distant metastases occur. Introduction Diffuse idiopathic pulmonary neuroendocrine cell hyperplasia (DIPNECH) is a rare disorder that is commonly underdiagnosed. In 2015, it was recognized by the World Health Organization (WHO) classification of lung tumors as a premalignant lesion. Aguayo et al. [1] were the first to describe this entity, showing hyperplasia confined to the respiratory epithelium layer without penetration of the basement membrane. It occurs predominantly in females (ratio 10:1) with a median age of 58 years at diagnosis, and it is not associated with smoking [2]. DIPNECH syndrome is characterized by cough, exertional dyspnea, wheezing, and, less frequently, hemoptysis. Histologically, it manifests as a generalized proliferation of scattered neuroendocrine cells, tiny nodular aggregates, or a linear proliferation of neuroendocrine cells [3]. We report the clinical evolution, histological features, and imaging findings of four cases of DIPNECH diagnosed in our institution (Hospital Universitario de Torrejón, Madrid, Spain) between the years 2012 and 2019. Case 1 A 48-year-old female patient was admitted in October 2013 to the digestive surgery department of our hospital due to one year of proctalgia and perianal tumor growth. She was a smoker of 40 cigarettes/day and presented non-productive cough and great effort dyspnea. The extension computed tomography (CT) study showed diffuse interstitial pulmonary involvement with multiple bilateral pulmonary nodules, the largest being 1.1 cm in the lower left lobe (LLL) ( Figure 1A). Positron emission tomography (PET)-CT showed pathological uptake with a standardized uptake value (SUV) of 2.2 in the nodular lesion of LLL. The others of the subcentimeter nodules did not show glucose uptake. An Indio (111 In) pentetreotide scintigraphy (Octreoscan) was performed, with a negative result. Bronchoscopy did not reveal any endobronchial finding. Chromogranin A (CgA) serum level and 24-hour urine analysis for 5-hydroxyl-indole-acetic (u 5-HIAA) were in the normal range. She was diagnosed with stage I perianal squamous cell carcinoma, positive human papilloma virus-positive, and negative human immunodeficiency virus-negative. She underwent wide local excision of the anal carcinoma with negative margins. A CT-guided thick-needle biopsy of the LLL lesion was performed and a low-grade neuroendocrine tumor (Ki 67 <1%) was observed. Functional respiratory tests were in the normal range (forced expiratory volume in one second (FEV1) 60%, forced vital capacity (FVC) 54%, and single-breath carbon monoxide diffusing capacity (DLCO) 61%) and did not contraindicate surgery. 
The patient underwent left lower lobectomy and the dissection of levels 9, 11, 7, and 5 lymph nodes due to the suspicion of DIPNECH with a carcinoid tumor in LLL. The histological sections showed at the level of the lobar bronchus, as well as in the peripheral lesion of 9 mm, a neoplastic proliferation of epithelial character constituted by a monomorphic population of cells of neuroendocrine habit with monomorphic nuclei of finely granular chromatin and amphiphilic cytoplasm of undefined edges. There were arranged delimiting nests and trabeculae, without necrosis, mitosis, or significant atypia. The bronchial lesion respected the mucosal surface of the bronchus and was 1 cm from the surgical margin of resection. Lymphatic vascular invasion was identified. The rest of the parenchyma presented multiple nodular lesions of 1 mm to 3 mm in diameter constituted by the same cellularity of neuroendocrine habit. Both the tumor nodules described and the hyperplasia foci show positivity for TTF1 and CD56, as well as positivity for chromogranin, synaptophysin, and proliferative activity with Ki67 of 0%. Up to eight lobar and segmental nodes were isolated, three of them with neuroendocrine tumor metastasis. With the diagnosis of DIPNECH with a typical multifocal pulmonary carcinoid tumor with R0 resection in January 2014 (mpT1N1 pathological staging), clinical-radiological follow-up was initiated. The last revision was made in June 2019 with the stabilization of interstitial lung involvement. Cg A and u 5-HIAA were within the range of normality. The patient continues with persistent, irritative dry cough with moderate effort dyspnea (basal oxygen saturation of 96%). Case 2 A 55-year-old female was referred in January 2012 to our pneumology department in the context of a dry, irritative cough that did not improve with drugs and without seasonal variation. She was a former smoker for three years of 20 cigarettes/day and suffered chronic obstructive pulmonary disease (COPD). There were alterations in the pulmonary function test (FEV1 42%, FVC 48%, and DLCO 32%) and in the physical examination (global hypoventilation and roncus). Chest CT showed a bilateral mosaic pattern and subcentimeter bilateral lung nodules (less than 6 mm) and calcified linear tracts in the pulmonary vertices and the apical segment of the lower left lobe ( Figure 1B). Bronchoscopy did not reveal any endobronchial finding. CgA, u 5-HIAA, and Octreoscan were normal. A surgical biopsy was performed. The histological surgical specimen showed pulmonary parenchyma with the presence of a carcinoid tumor, tumorlet, and hyperplasia of neuroendocrine cells. Immunohistochemistry was positive for CD56, TTF1, and CK7 and negative for CK20, HMB45, and EMA; and the proliferative index Ki67 (MID1) was 3%. In November 2014, she received treatment with radiosurgery for venous hemangioma of the left cavernous sinus and in 2015, was diagnosed with Hashimoto's thyroiditis. The last revision was made in February 2019, with stabilization of interstitial lung involvement and normal levels of Cg A and u 5-HIAA. She continues with chronic cough without dyspnea. Case 3 A 60-year-old female was referred in May 2014 to our pneumology department in the context of a history of more than 30 years of evolution with dry cough and exertional dyspnea for two years. She was a nonsmoker. The chest radiograph showed a bilateral micronodular pattern with a paracardiac nodular image in the right lower lobe (RLL). She is the sister of the patient in Case 2. 
The pulmonary function test was altered (FEV1 27%, FVC 52%, and DLCO 49%). CT showed bilateral apical pleural thickening and opacity with loss of volume in the middle lobe (ML), with multiple pulmonary nodules in both lobes up to 11 mm (Figure 1C). Bronchoscopy did not show any finding, but a transbronchial biopsy was performed on the RLL and ML. The definitive pathology results are summarized in Figure 2 (panel D: neoplastic cells stained with synaptophysin, 200x). Histologic sections showed a small nodule (<0.5 cm) composed of a relatively uniform population of cells with oval or spindle nuclei. The tumor cells were arranged in nests and cords with finely granular chromatin, inconspicuous nucleoli, and moderate amounts of clear cytoplasm. Mitotic activity and necrosis were absent. There was no evidence of airway inflammation, interstitial fibrosis, or remodeling of vascular structures in the remaining lung tissue. Neuroendocrine markers, such as chromogranin A, synaptophysin, neuron-specific enolase (NSE), and CD56, were strongly positive in these tumoral cells. We diagnosed it as a carcinoid tumorlet. The study was completed with a PET-CT (pathological uptake in the ML lesion with SUVmax of 3.87 and uptake of the rest of the bilateral pulmonary nodules with SUVmax of 3.52), Octreoscan (with pathologic uptake in the ML, Figure 3), plasma CgA (normal), and u-5-HIAA (high levels of 25 micrograms in 24 hours). Despite the elevated levels of u-5-HIAA, the patient did not have clinical signs compatible with carcinoid syndrome. The study was extended with an echocardiogram, which was normal.

FIGURE 3: Slight metabolic activity located in the basal medial segment of the right inferior pulmonary lobe (red arrows), probably related to a viable tumor lesion expressing (mild grade) somatostatin receptors (rSS-2 and rSS-5).

In the context of cough uncontrolled with the usual medication, a positive Octreoscan, and elevated levels of u-5-HIAA, it was decided to start treatment with lanreotide depot 120 mg subcutaneously every 28 days. During the first three months, improvement of the cough and a decrease of u-5-HIAA to 14 micrograms in 24 hours were achieved. Grade 2 diarrhea, abdominal pain, and headache presented as side effects of the lanreotide treatment, so the dose was lowered to 90 mg every 28 days with a slight improvement in tolerability. After nine months of treatment, the lanreotide was discontinued at the request of the patient. There was no improvement in the pulmonary interstitial involvement on subsequent CT scans. The Octreoscan was repeated one year after the end of the lanreotide, showing that the pulmonary uptake had become negative. She continued with chronic cough and moderate-effort dyspnea. The CT scan in April 2019 showed stabilization of the interstitial lung involvement. CgA was negative and u-5-HIAA was 12 micrograms in 24 hours.

Case 4
A 67-year-old female was evaluated in November 2015 in the pneumology department for dyspnea of years of evolution and chest tightness. She was a non-smoker, had obesity and osteoporosis, and had undergone surgical resection for hyperparathyroidism. Thoracic CT showed linear and reticular opacities, some of them tending to coalesce and consolidate, with a predominantly peripheral distribution (Figure 1D). Bronchoscopy provided no significant information, but a transbronchial biopsy was performed on the RLL and ML.
The pathology results showed multifocal, micronodular and linear cell proliferation, positive for synaptophysin on immunohistochemistry and morphologically suggestive of neuroendocrine hyperplasia, limited to the bronchiolar epithelium and adjacent peripheral pulmonary parenchyma and respecting the basement membrane of the submitted respiratory epithelium. There were alterations in the pulmonary function test (FEV1 44%, FVC 65%, and DLCO 29%). CgA, u-5-HIAA, and Octreoscan were normal. About three months later, the patient developed pyrosis, postprandial pain, anorexia, and weight loss. An endoscopy and a new body CT revealed a stenosing pyloric neoplasm. Biopsies confirmed the diagnosis of HER2-negative gastric adenocarcinoma. The staging was uT3 N1 Mx. She started neoadjuvant chemotherapy treatment with the EOX scheme (epirubicin, oxaliplatin, and capecitabine). She was admitted to the intensive care unit two weeks later (February 2016) due to influenza A virus bilateral pneumonia and died days later due to respiratory distress syndrome.

Discussion
DIPNECH is an entity that is reported more commonly in the literature every year [4]. Most patients are females between the fifth and sixth decades, non-smokers, with obstructive lung symptoms and peripheral lesions [5]. The demographics and presentation of disease in our cases support the current literature, with the exception of two patients (50%) who were smokers/former smokers (Table 1). The classical histopathological features of DIPNECH include a widespread proliferation of small, bland, uniform cells within the epithelium, showing a lack of mitoses or necrosis, accompanied by neuroendocrine features, such as the fine soft chromatin pattern and immunoreactivity to neuroendocrine markers [6]. A pathology-based approach by Marchevsky et al. [7] aimed at distinguishing DIPNECH from reactive neuroendocrine cell hyperplasia (NECH) suggested that the presence of multifocal NECH associated with ≥ three tumorlets could represent a pathological criterion for the diagnosis of DIPNECH. Although in our series there is a family association of two sisters affected by DIPNECH, there is no clear evidence of family aggregation/germline genetic alterations in this entity. The possible association between DIPNECH and pulmonary adenocarcinoma has been described [8], but not with other types of tumors. However, two of our patients (50%) had a metachronous tumor (perianal squamous cell carcinoma and gastric adenocarcinoma). DIPNECH is usually diagnosed as an incidental finding in asymptomatic patients without radiological abnormalities [3]. However, patients usually show attenuated symptoms of years of evolution, such as exertional dyspnea, wheezing, and dry cough. All of our patients had clinical symptoms (cough and dyspnea) of years of evolution. Careful integration of clinical, functional, and imaging data, along with the histological demonstration of constrictive bronchiolitis akin to neuroendocrine cell proliferation, is mandatory to establish a diagnosis of DIPNECH [2]. Treatment and prognosis depend on the severity of constrictive obliterative bronchiolitis. In our series, 100% of cases (4/4) were symptomatic, mainly due to cough and exertional dyspnea. Long-term follow-up is recommended to exclude nodular growth and the development of carcinoid tumors or invasive carcinoma. Two patients had a concomitant carcinoid tumor.
Three patients kept the disease stable during follow-up (median follow-up of 74 months) while one patient died six months after diagnosis due to metachronous gastric adenocarcinoma. In addition, the disorder can progress to severe airway obstruction, and several deaths have been reported from the progressive decline of pulmonary function associated with pulmonary fibrosis [9]. Limited data are available regarding the use of somatostatin analogs (SSA) in DIPNECH. Gorshtein et al., in their review of 11 DIPNECH patients, suggested the affirmative role of SSA in the symptom management of DIPNECH [5]. In the American single-center experience, most of their patients responded to treatment with SSA and had significant improvement in their presenting symptoms [10]. In one patient, we currently use lanreotide depot 120 mg deep subcutaneous injection every 28 days. There is no evidence that SSA will retard growth or transformation to carcinoid; however, treatment can be lifelong if patients respond favorably without adverse side effects to SSA. Although significant cough improvement was obtained, dyspnea remained stable and mild side effects appeared (diarrhea and headache), so the treatment was interrupted. Conclusions In summary, DIPNECH represents a rare and poorly understood pulmonary disorder that includes incidental cases of neuroendocrine cell proliferation observed in the context of various pulmonary disease forms associated with carcinoid tumors. It affects females over 60 years of age, and dry cough is the most common presenting symptom. Our limited single-center experience shows the slow and stable evolution of the disease. SSA can be useful for the management of symptoms. However, some exceptional cases may progress poorly if distant metastases or progressive decline of pulmonary function occur.
2019-09-19T09:08:39.001Z
2019-09-01T00:00:00.000
{ "year": 2019, "sha1": "f5c18794ca7d1a5afbffe9d9e2ec571ddedcdd11", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/22186-diffuse-idiopathic-pulmonary-neuroendocrine-cell-hyperplasia-review-of-the-literature-and-a-single-center-experience.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3de27745499f19822c00011f05de20b170e456a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139512112
pes2o/s2orc
v3-fos-license
The Influence of Heat Input to Mechanical Properties and Microstructures of API 5L-X65 Steel Using Submerged Arc Welding Process API 5L-X65 steel is a type of high strength low alloy (HSLA) steel widely used in the manufacture of pipe. Submerged arc welding (SAW) is widely used for the fabrication of pipe; its widespread use is due to the fact that it can be carried out automatically and with high reliability. The welding process leads to differences and changes in the microstructure of the heat affected zone (HAZ) and the weld metal that affect the resulting mechanical properties, so obtaining good welding results requires appropriate selection of the welding parameters. The heat input used during welding has a very important influence on the mechanical properties and microstructure of the weld. The purpose of this study is to determine the effect of heat input on the microstructure, hardness and toughness of welds made by submerged arc welding. The welding currents used were 200, 300, 400 and 500 Ampere, with voltages of 25, 27 and 30 Volt. The results showed that a higher heat input results in a wider HAZ region and an increased grain size. The highest hardness value, 244.69 HVN, was obtained at the lowest heat input, caused by the rapid cooling rate of the weld area. The highest toughness values were obtained at the highest heat input, where the microstructure was dominated by the acicular ferrite phase.

Introduction
At present, welding is one of the most widely used joining techniques in piping, for example in the construction of pipe fittings, oil pipes and gas pipelines. API 5L-X65 steel is an HSLA steel that is widely used in pipe-making applications. A welded product will have good quality when the resulting weld area provides perfect continuity between the connected parts at every part of the joint, so that the joint and the parent metal do not show any obvious difference. Therefore, there are conditions that must be met in the welding process, among them the control of heat input. HSLA steel is developed with a special chemical composition to achieve higher mechanical properties. HSLA steels are manufactured to produce good mechanical properties and have greater corrosion resistance than conventional carbon steels. HSLA steel has a low carbon content (0.05%-0.25% C), so it can be formed and welded more readily than mild steel, and it has a manganese content of up to 2% [1]. API 5L-X65 steel falls under the HSLA steel standards; it is a type of steel widely applied to pipes designed specifically for subsea service. The 5L specification is a standard specification for the manufacture and distribution of line pipe for oil, water and gas under exceptional usage conditions, and grade X65 indicates a specified minimum yield strength of 65,000 psi (448 MPa); this steel is widely used in the structures of oil and gas platforms. Submerged arc welding is one type of electric welding in which the welded material is joined by heating and melting the parent metal and the electrode with an electric arc located between the parent metal and the electrode. The molten metal and the arc are covered with granular flux over the welded area. Submerged arc welding does not require pressure; the filler metal is supplied mechanically and continuously into the arc formed between the tip of the filler electrode and the parent metal, beneath the deposited flux [2].
In arc welding, the energy source is electricity, which is converted into heat energy. This heat energy is in effect the combined result of the welding current, the arc voltage and the welding speed. The third parameter, the welding speed, also affects the welding energy because the heat source is not stationary but moves at a certain speed. The relationship between the three parameters that produce the welding energy is usually called the heat input, and can be written in the following equation [3]:

HI = (60 × f i × E × I) / v    (1)

where: HI = heat input (joule/mm); f i = weld heat efficiency (SAW ≈ 0.9-1.0); E = arc voltage (volt); I = welding current (ampere); v = welding speed (mm/min). (A short worked example of this relation is given below, after the experimental description.)

During welding, the weld metal and the heat affected zone (HAZ) undergo a series of thermal cycles, i.e. heating to a maximum temperature followed by cooling to room temperature. The thermal cycle affects the microstructure of the weld metal and the HAZ: the weld metal undergoes a series of phase transformations during cooling, from molten metal to δ-ferrite, then γ-austenite, and ultimately α-ferrite or bainite depending on its cooling rate. According to Abson and Pargeter [5], the microstructure of weld metal comprises combinations of two or more of the following phases, arranged according to their formation temperatures:
1. Grain boundary ferrite, formed between 1000-650 °C along the austenite grain boundaries.
2. Widmanstätten ferrite, formed between 750-650 °C.
3. Acicular ferrite, formed at around 650 °C.
4. Bainite, formed between 400-500 °C.
5. Martensite, formed when the cooling rate is very fast.

Experimental
A submerged arc welding process was conducted to study the influence of heat input on the mechanical properties and microstructure of API 5L-X65 steel. The base metal used in the present investigation was API 5L-X65 plate with dimensions of 150 mm x 75 mm x 11 mm, and the filler was CHW-S11 solid electrode of 3.2 mm diameter. Table 1 shows the chemical composition of the base metal and the filler used. Here, a double V-groove was used so that welding could be accomplished with full penetration. Before the welding process, all plate edges were thoroughly cleaned to avoid any source of contamination that could result in weld defects. Bead-on-plate submerged arc welding was performed along the centre line of the plate. The submerged arc welding process was conducted by varying the current and voltage, i.e. 200, 300, 400 and 500 Ampere and 25, 27 and 30 Volt, respectively, while the welding speed was kept constant, so that the heat input values were obtained as the independent variable. The samples were denoted A and B as shown in Fig. 4. Sample A was polished with various grit papers and, after cloth polishing, microhardness was measured perpendicularly across the weld pool, HAZ region and base metal. Sample B was used for Charpy impact testing at different testing temperatures. In order to observe the changes of microstructure that take place during welding for each heat input combination, specimens were machined from the weld pads. Standard polishing procedures were used for general observation of the microstructure. The microstructures of the different zones, i.e. weld metal, HAZ, and fusion boundary, under the different heat input combinations were viewed and captured with an optical microscope (Zeiss Axiolab) coupled with image analysis software. The machined specimens for the different heat input conditions were used for measuring the micro-hardness of the different zones of the weldment.
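As a small illustration of Eq. (1), the sketch below computes the heat input for some of the current/voltage combinations used in this study. The travel speed of 360 mm/min and the arc efficiency of 0.9 are assumptions (the constant travel speed is not stated numerically in this excerpt); they are chosen because they reproduce the reported extreme values of 0.75 and 2.25 kJ/mm.

```python
def heat_input_j_per_mm(current_a, voltage_v, speed_mm_per_min, efficiency=0.9):
    """Heat input according to Eq. (1): HI = 60 * f_i * E * I / v, in J/mm."""
    return 60.0 * efficiency * voltage_v * current_a / speed_mm_per_min

# Illustrative current/voltage combinations drawn from the ranges used in the study.
for current, voltage in [(200, 25), (300, 27), (400, 30), (500, 30)]:
    hi = heat_input_j_per_mm(current, voltage, speed_mm_per_min=360.0)
    print(f"{current} A, {voltage} V -> {hi / 1000:.3f} kJ/mm")
```

Under these assumptions the four combinations give 0.75, 1.215, 1.8 and 2.25 kJ/mm, matching several of the heat input values discussed in the results.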
The 200 gf Vickers hardness testing machine was used. Charpy impact testing was used to measure the toughness and the ductile-brittle transition temperature; this testing was conducted at different temperatures.

Influence of Heat Input on Microstructures
Microstructure is a major factor in determining the mechanical properties of a welded material, so it has an important role in analysing the results of welds. In this study, the microstructure of the weld metal area for all variations of heat input from 0.75 kJ/mm to 2.25 kJ/mm can be seen in Fig. 5. At the lowest heat input of 0.75 kJ/mm the microstructure formed is lower bainite, because the low heat input causes a faster cooling rate; along with the increase in heat input, an upper bainite structure is formed (Fig. 5b-5d), because the cooling rate is slower than at the 0.75 kJ/mm heat input. With a further increase in heat input, from 1.215 kJ/mm up to 2.25 kJ/mm, the dominant constituent becomes acicular ferrite, although grain boundary ferrite and Widmanstätten ferrite are also present; this is due to the slower cooling rate, and these conditions are shown in Fig. 5e-5l. The percentage of the acicular ferrite structure increases while the percentages of the Widmanstätten ferrite and grain boundary ferrite structures decrease; this condition is caused by the fact that a larger welding current increases the heat input, thus slowing the cooling rate to one suitable for acicular ferrite formation.

Influence of Heat Input on Hardness
The effect of heat input on the average hardness of the weld area is shown in Fig. 6 below. The micro-hardness distribution data in Fig. 7 show a uniform tendency for each heat input: the hardness value increases from the parent metal to the fine-grained HAZ, then to the coarse-grained HAZ and the weld metal, after which it decreases again through the fine-grained HAZ to the parent metal, as a result of the cooling rate. This is consistent with the microstructure formed, in that the coarse-grained HAZ has a bainitic structure. In addition, the hardness values do not follow a linear trend within a given area. This is because the identified microstructure is not always the same, even within one region. In the weld metal areas, for example, which show fairly scattered data, the identified microstructure may be acicular ferrite, Widmanstätten ferrite or grain boundary ferrite.

Influence of Heat Input on Toughness
The following section illustrates the influence of heat input on the toughness of API 5L-X65 and determines the transition temperature for each heat input, as shown in Fig. 8.

Fig. 8. Toughness at different Heat Input

The transition temperature is the temperature at which the toughness of the metal changes from brittle to ductile. The lower the transition temperature, the better the toughness. The transition temperature of the weld metal when using heat inputs of 0.75 kJ/mm, 0.9 kJ/mm, 1.215 kJ/mm, 1.35 kJ/mm and 2.25 kJ/mm lies in the range (-20 °C) to 0 °C, while for welding with heat inputs of 1.125 kJ/mm, 1.8 kJ/mm and 1.875 kJ/mm it occurs in the temperature range (-20 °C) to 10 °C, whereas at heat inputs of 0.81 kJ/mm and 1.62 kJ/mm it lies in the range 0 °C to 10 °C. These data indicate that the lowest transition temperature occurs in the range (-20 °C) to 0 °C, i.e. at a temperature of -10 °C, occurring at heat inputs of 0.75 kJ/mm, 0.9 kJ/mm, 1.215 kJ/mm, 1.35 kJ/mm and 2.25 kJ/mm.
The following section explains the influence of the heat input on the toughness value of the weld metal at a temperature of -20 °C, which determines the suitability of the electrode used as the weld filler metal. The energy standard at -20 °C for the CHW-S11 electrode is 100 Joule, as shown in Table 1. The toughness value at -20 °C tends to increase as the heat input increases, but the increase fluctuates, because some samples contain incomplete penetration defects and micro-inhomogeneity of the microstructure in the weld metal, where there are some locally reheated zones. The increase in toughness occurs because the microstructure formed is dominated by acicular ferrite; as described in [6], acicular ferrite phases form intragranularly as short ferrite needles in random directions, so that when an external force is applied, the resulting crack does not propagate quickly because of the interlocking mechanism.

Fig. 9. Influence of Heat Input on Toughness at -20 °C

In Fig. 9 it can be seen that the heat inputs meeting the impact test standard at -20 °C are 1.62 kJ/mm, 1.875 kJ/mm, 2.025 kJ/mm and 2.25 kJ/mm, with impact energies of 110 joules, 148 joules, 112 joules and 160 joules respectively; these have toughness values above 100 joules and therefore comply with the applicable standard for API 5L-X65 steel welded joints made with CHW-S11 filler metal. The highest toughness value occurs at the 2.25 kJ/mm heat input, which corresponds to the dominant microstructure formed in the weld metal being the acicular ferrite phase.
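The transition temperatures above are read off the impact energy versus temperature curves in Fig. 8. As a hedged illustration of how such a ductile-brittle transition temperature can be estimated numerically, the sketch below fits a hyperbolic-tangent curve to hypothetical Charpy data; the energy values are invented for illustration and are not the measured data of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Charpy absorbed energy (J) versus test temperature (deg C)
# for one heat input; the actual data are only shown graphically in Fig. 8.
temps = np.array([-40.0, -20.0, -10.0, 0.0, 10.0, 25.0])
energy = np.array([20.0, 55.0, 90.0, 130.0, 150.0, 158.0])

def ductile_brittle_curve(t, lower, upper, t_transition, width):
    # Standard hyperbolic-tangent form for a Charpy transition curve.
    return lower + (upper - lower) * 0.5 * (1.0 + np.tanh((t - t_transition) / width))

params, _ = curve_fit(ductile_brittle_curve, temps, energy,
                      p0=[10.0, 160.0, -10.0, 10.0])
lower, upper, t_transition, width = params
print(f"Estimated transition temperature: {t_transition:.1f} deg C")
print(f"Lower/upper shelf energies: {lower:.0f} J / {upper:.0f} J")
```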
2019-04-30T13:09:14.706Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "ef477674fcbea11602353cc3f1f41af680f93b78", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/18/matecconf_iiw18_01009.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "dd7f91cdcec0ece8591cd17e9623bc5dbed632cd", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
259863657
pes2o/s2orc
v3-fos-license
A microfluidic approach for early prediction of thrombosis in patients with cancer Li and colleagues have made a notable advancement in predicting cancer-associated thrombosis with a microfluidic device that monitors circulating platelet activity.1 This tool could improve the management of thrombotic events in patients with cancer, guiding timely treatment and potentially reducing mortality. Patients with cancer face a significantly higher risk of thrombosis-including venous thromboembolism (VTE) and arterial events-than the general population, with VTE occurring in approximately 15% of cancer cases, contributing to substantial mortality. 2 Thrombotic events not only increase mortality risk but also promote cancer progression. Because platelet activation plays a crucial role in the correlation between cancer and thrombosis, a reliable method to monitor tumor-associated platelet activity would help to predict and prevent thrombotic events in patients with cancer. In the current issue of Cell Reports Methods, Li et al. present a device aimed at addressing this need. The device developed by Li and colleagues is a microfluidic-based ''chromatograph'' designed to analyze circulating platelets. It leverages the interaction between platelets and a fibrin matrix, a crucial component of blood clots, to assess platelet activation. The device incorporates microchannels filled with a stationary phase matrix. When platelets flow through these microchannels, their interaction with the fibrin matrix is tracked via high-sensitivity fluorescence microscopy. Notably, Li and colleagues have strived to ensure their model's physiological relevance. Unlike many other existing techniques, this microfluidic device aims to recapitulate the bio-rheological conditions that platelets face within the living vessels, accounting for variables such as pressure, flow, and shear stress. By doing so, it enhances the physiological applicability of the obtained data. Li and colleagues demonstrated the effectiveness of the microfluidic device by detecting changes in platelet activity in tumor-bearing mice and patients with cancer. Their findings revealed a strong correlation between platelet activation status and tumor progression, with an increased risk of thrombosis observed in lung, breast, and liver cancer as the disease advanced. Follow-up studies conducted over 6 months on patients with advanced lung, breast, and liver cancer further supported the link between platelet activity level and thrombus occurrence rate, highlighting the device's predictive potential. Interestingly, the device's predictive capacity appeared to surpass that of conventional blood coagulation parameters, such as activated partial thromboplastin time (APTT), thrombin time (TT), and prothrombin time (PT). This suggests that the device could serve as a more reliable prognostic marker for cancer-associated thrombosis. The study conducted by Li and colleagues represents a significant advancement in the field of cancer-associated thrombosis. Their innovative device offers valuable insights into the dynamic changes in platelet activity associated with tumor progression. However, several important questions still need to be addressed to fully comprehend its implications. For instance, the reproducibility of the device's performance across diverse patient populations and its integration with existing diagnostic protocols are essential considerations. 
Furthermore, identifying the specific platelet activation markers that the device is sensitive to could provide deeper insights into the molecular mechanisms underlying cancer-associated thrombosis. Answering these critical questions will be pivotal in harnessing the device's full potential. Microfluidic, or lab-on-chip, technologies have rapidly advanced across various disciplines, including chemistry, physics, biology, medicine, and clinical fields. Although certain applications have gained commercial success, the potential to accelerate cancer-associated thrombosis research has been largely overlooked. However, several studies, like Li et al., suggest emerging development in this area. For instance, Zhao et al. presented the CVS-on-a-chip (Vein-Chip), which mimics the venous geometry of patients with cerebral venous sinus thrombosis (CVST). 3 By reconstructing patient-specific 3D vascular geometries, the Vein-Chip provides a personalized and cost-effective platform, bridging the gap between cerebrovascular imaging diagnostics and patient-specific blood clot testing. This approach has the potential to enhance diagnostic platforms, facilitate cardiovascular patient monitoring, and advance personalized antithrombotic therapies. Other recent developments have allowed for the coculture of multiple cell types and the construction of in vitro blood vessels/tissues, such as the 3D spheroid-microvasculature-on-a-chip This model offers a controlled hydrodynamic microenvironment and physiologically relevant parameters to study the mechanobiology interplay between tumor spheroids and the endothelium during metastasis. 4 Such techniques provide a valuable platform for investigating cancer-associated interventions with procoagulant properties. Individual cancer patients exhibit complex and distinct abnormalities in components of Virchow's triad (blood flow stasis, endothelial injury, and hypercoagulability) as well as their hypercoagulable or prothrombotic state. Microfluidics offers accessible modification and testing of these factors. Platforms such as droplet blood tests, 5,6 image-based vessel reconstruction models, 3 and endothelialized microchips 4,7 simplify and personalize testing procedures and have the potential for translation into user-friendly products. The success of point-of-care testing (POCT) in the commercialization of COVID-19 rapid antigen testing demonstrates the impact of POCT on healthcare delivery. Microfluidic-based POCT for cancer-associated thrombosis diagnosis could significantly benefit patients, as anticoagulant therapy is a cornerstone of treatment and necessitates real-time diagnosis and rapid management during critical illnesses. Building off of the Li et al. design, key areas of development should focus on identification and validation of specific biomarkers that can enable rapid and accurate detection of thrombotic events at the point of care, the optimization of risk assessment that includes the current gold standards of bleeding risk assessment, the discovery of the next-generation anticoagulant drug, and biomedical engineering translation. Integration of multiple assays targeting various aspects of the disease, such as coagulation function and cancer-specific biomarkers associated with thrombosis, could provide a comprehensive assessment of thrombotic risk, facilitating personalized treatment decisions. 
Translation into miniaturized, portable devices would allow for bedside or outpatient testing, enabling real-time monitoring and prompt intervention. We hope to see a future for cancer-associated thrombosis POCT that revolutionizes patient care by improving early diagnosis, risk assessment, and management strategies (Figure 1).

Figure 1: Lab-on-chip platforms can perform factor-separated investigations and assessment of patient-specific Virchow's triad for a better understanding of cancer-associated thrombosis, potentially developing from organ/human-on-chip to standardized models and intelligent healthcare in the future. Created with BioRender.com.
2023-07-15T15:10:49.979Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "797237ac6fa0df59801e2f1756ad13bed119ca08", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.crmeth.2023.100536", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b1e2539d8e5d42f4e5afb085e99c17344e8420f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4027124
pes2o/s2orc
v3-fos-license
Extracted metabolite from Streptomyces Levis ABRIINW111 altered the gene expression in colon cancer.

Aim: In this study we attempt to demonstrate the anti-carcinogenic influence of ether-extracted metabolites of Streptomyces Levis sp. on gene expression in colon cancer.
Background: Colon cancer is one of the most prevalent cancers worldwide. In recent decades, researchers have been seeking treatments for cancer. Natural products are valuable compounds with fewer side effects in comparison to chemotherapy drugs.
Methods: Secondary metabolites were extracted after inoculation of the bacterial sample in Mueller Hinton Broth. An MTT assay was done to evaluate the cytotoxic effect of the metabolites on SW480 cells. qRT-PCR was performed to observe the effects of the metabolites on the expression of the Bcl-2, P53, SOX2, KLF4, β-catenin, SMAD4, K-ras and BRAF genes in colon cancer.
Results: The metabolites exhibited cytotoxic effects on colon cancer cells in a dose- and time-dependent manner (P < 0.001). After 48 h of treatment, the fold expression of the Bcl-2, SOX2, β-catenin, K-ras and BRAF genes was decreased, whereas that of the P53, KLF4 and SMAD4 genes was increased in treated cells (P < 0.001).
Conclusion: These findings indicate that ether-extracted metabolites of Streptomyces Levis ABRIINW111 have anti-carcinogenic effects on colon cancer.

Genetic and epigenetic alterations in the normal colonic epithelium lead to colon adenocarcinoma (4). A benign adenomatous polyp is the first step; polyps then develop into a malignant adenoma with high-grade dysplasia, which subsequently transforms into invasive cancer (5). Changes in gene expression are a significant prognostic factor for the initiation and progression of colon cancer and could serve as biomarkers for targeting colon cancer. Colon cancer is most commonly initiated by changes in the Wingless/Wnt signaling pathway. Inactivation of tumor suppressor genes and activation of oncogenes such as Bcl-2, P53, SMAD4, BRAF, K-ras, β-catenin, SOX2, and KLF4 lead to the development of colon cancer. Some of the predominant alterations that have been demonstrated to play an important role in the initiation of colon cancer include K-ras and P53; TGFBR2 and SMAD4, as elements of the TGF-β signaling pathway, are involved too. The Bcl-2 family controls the integrity of the mitochondrial membrane. The anti-apoptotic proteins, including Bcl-2, BAG, Bcl-x, Bcl-XS, Bcl-XL and Bcl-w, and the pro-apoptotic proteins, such as Bax, Bid, Bak, Bad, NOXA, and PUMA, are members of this family (6)(7)(8)(9)(10). Bcl-2 acts as an anti-apoptotic member and controls apoptosis by several mechanisms, such as the release of ions into the cytoplasm through altering the permeability of the intracellular membranes (11,12). P53, a transcription factor, has tumor suppressor activity and is mutated in 50% of primary colon cancers (13). The expression of several pro-apoptotic genes such as Bax, NOXA, and PUMA is controlled by P53 (14)(15)(16). Sox2, a member of the Sox gene family, belongs to the SOX B1 subgroup. It acts to preserve developmental potential and encodes a transcription factor with a single HMG DNA-binding domain (17). The Kruppel-like factor (KLF) family of genes regulates a wide range of cellular processes such as differentiation, migration, apoptosis, proliferation, tumor formation and inflammation. KLF4 is highly expressed and has been observed in gastrointestinal epithelial cells, the skin, and endothelial cells of the vascular system (18)(19)(20)(21).
KLF4 as a regulator of cell proliferation, induces cell cycle arrest at G1 to S phase in a p53-dependent manner by activation of p21WAF/Cip1 gene as the negative cellcycle-regulatory cyclin-dependent kinase inhibitor (22,23). ß-catenin is one of the elements of the APC/ßcatenin/TCF/Lef pathway and its expression is increased by activation of the Wnt signaling pathway. It plays a main role in cancers such as melanoma, and gastric cancer (24)(25)(26). SMAD4, mutated mostly in colon cancers, belongs to the SMAD family of genes and acts as a tumor suppressor gene. In the transforming growth factor-p (TGF-p) signaling pathway, SMAD4 codes cytoplasmic mediators (27,28). K-ras is one of the important elements in the Ras/MAPK signaling pathway. This signaling pathway, by inducing the synthesis of cyclin D1, plays a key role in apoptosis, differentiation and cell proliferation. Mutation of the K-ras as a proto-oncogene activates this pathway, which is found in 36% of colorectal cancers (29)(30)(31)(32)(33). Three RAF genes that are regulated by binding to RAS, mediate the RAS-induced cellular response to growth signals by encoding cytoplasmic serine-threonine kinases. BRAF is one of the three known RAF genes that have resulted from gene duplication (30). The SW480 cell line is obtained from the colon adenocarcinoma with moderate level of differentiation. Previous studies have illustrated that SW480 cell line displays most of the genetic changes which are seen in aggressive colon cancers, including a K-ras mutation (34), p53 mutation (35), loss of the DCC gene on chromosome 18 (36). Streptomyces sp as the largest genus among actinomycets, produces a wide range of important secondary metabolites, including antimicrobial and anticancer (37). For example, Rapamycinisolated from the soil bacteria Streptomyces hygroscopicus -has revealed anticancer activity (38)(39)(40). Recent studies are focused on microbial natural products as the most promising source for developing better antibiotics (41). In our screening program for producing bioactive compounds, the diethyl ether extracted from Streptomyces Levis ABRIINW111 has shown strong activity against colon cancer cells (unpublished data). One of best methods for cancer therapy is using natural products. They act as anti-cancer agents without serious side effects. They can induce apoptosis and change genes expression in cancer cells (42). Because of these advantages, metabolites as natural products can be a good choice for cancer therapy. In this study we evaluated the Streptomyces Levis ABRIINW111 metabolites effect on the pro-apoptic, anti-apoptotic and several oncogenes to understand how these metabolites could be effective products in cancer therapy. Methods Streptomyces Levis ABRIINW111 was purchased from the Department of Microbial Biotechnology, AREEO, Tabriz, Iran. Metabolites were extracted as described, bacteria was cultured in Nutrient agar medium (Sigma /70148) at 29 °C for 7 days. loop full of bacteria was inoculated into 25 ml of Mueller Hinton Broth (Sigma /70192) and incubated while agitating on shaker incubator set at 70 rpm at 29 °C for 36 h (43). As previously described, we used spectrophotometrical reading and chose turbidity 620 nm, 0.08 O.D, as an appropriate concentration for inoculation (43). After fermentation time, 1 ml of pre-culture was used to inoculate 1,000-ml Erlenmeyer flasks; each contained 150 ml of fresh Mueller Hinton Broth medium. 
The fermentation was carried out at 29 °C for 7 days on shaker incubator set at 70 rpm, centrifuged at 4000 rpm for 20 minutes. The Cell free filtrate was mixed with equal volume of Diethyl ether (1:1 V/V) shaken for 1 h at 175 rpm, extracted by Diethyl ether (100921/ Merck), using separating funnel. Finally, the obtained organic extract was concentrated at room temperature until 0.01 gr reddish brown crude extract obtained; the resulting extract was kept at 44 °C until used (43). Also, Streptomyces Levis ABRIINW111 metabolites fractions were analyzed by HPLC method (44). Metabolites were dissolved in final concentrations of 100, 500, 1000, 2000, 5000 ng/ml in DMSO (43,45). Cell culture and MTT assay SW480, a human colon cancer cell line, was obtained from Pasteur Institute (Tehran, Iran). Cells were cultured in RPMI 1640 medium supplemented with 10% FBS, 1% penicillin and streptomycin in 5% CO2 at 37 ˚C˙. Real time PCR The 1×10 6 cells were cultured in RPMI 1640 medium supplemented with 10% FBS, 1% penicillin and streptomycin in 5% CO2 at 37 ˚C. After 24 h, supernatant was removed and cells were treated in 1000 ng/ml of metabolites and incubated in 5% CO2 at 37 °C for 48 h. Thereafter, the cells were harvested by using Trypsin-EDTA solution (Sigma, T4049) and collected via centrifugation in 1000 g for 5 minutes. RNA extraction from the harvested cells was performed using RNX plus kit (RN7713C, Sina clon, IRAN). Briefly, 1 ml ice cold RNXTM -PLUS solution was added to the harvested cells in 2ml microtubes. Samples were vortexed for 10 seconds and incubated 10 minutes at room temperature (RN7713C, Sina clon, IRAN). 200μl chloroform was added to the samples and resuspended, the samples were then incubated on ice for 5 min. Samples were centrifuged at 12000 rpm at 4 ˚C for 15 min. The aqueous phase was transferred to new RNase-free 1.5 ml tube, and an equal volume of Isopropanol was added to the solution, gently mixed and incubated on ice for 15 min. The mixture was centrifuged at 12000 rpm at 4 ˚C for 15 min. Supernatant was discarded and 1 ml of 75% Ethanol was added to the mix, briefly vortexed to dislodge the pellet and then centrifuged at 4 ˚C for 8 min at 7500 rpm. The supernatant was discarded and the pellet was allowed to dry at room temperature for a few minutes. Pellet was dissolved in 30 μl of DEPC treated water. To help dissolve the pellet, the tube was placed in a 55-60 ˚C water bath for 10 min. A NanoDrop 2000c spectrophotometer was employed for concentration and OD measurements. Samples with acceptable OD 260/280 and 260/230 values (~1.8 -2) were subjected to cDNA synthesis. The single stranded cDNA was synthesized by using cDNA synthesis kit (K-2261-6, Bioneer, Korea) according to manufacturer's instructions. Briefly, 5μg of RNA was added to cDNA synthesis tube in a final volume of 20 μl DEPC-treated water. The cDNA synthesis tube was placed in a 60 ˚C water bath for 1 h and finally, it was placed in a 95 ˚water bath for 5 min. qRT-PCR was performed by using the SYBR Green master mix real-time PCR kit (75675 500 RXN ebioscience, USA) according to the manufacturer's instructions. Briefly, 7 μl of SYBR Green Master Mix PCR, 0.35 μl forward and reverse primers from a 4μmol stoke, 0.7 μl of diluted cDNA template and 5.95 μl of DEPC treated water were added to tube. qRT-PCR was done as follows: initial denaturation at 95 °C for 3 min, 40 cycles of denaturation at 95 °C for 15 sec, annealing at 60 °C for 60 sec and elongation at 72 °C for 5 min. 
The GAPDH (endogenous housekeeping gene) gene was used as an internal control. Quantitative real-time PCR was performed with a Rotor-Gene 6000 (version 1.7) to determine CT values, and the threshold was adjusted to 0.1 (inside the exponential phase). Delta CT values were calculated in relation to the GAPDH CT values by the 2^-ΔΔCt method, in which ΔCt represents the difference between the CT value of the target gene and the CT value of GAPDH (46).

Streptomyces Levis ABRIINW111 killed SW480 colon cancer cells. The MTT assay was performed for evaluating the cytotoxicity and cell viability of SW480 colon cancer cells following incubation with the extracted metabolites. KLF4 (tumor suppressor) gene expression was significantly increased to 4.5-fold in treated cells (Figure 2C), and SOX2 (oncogene) gene expression was significantly decreased to 0.4-fold in treated cells (Figure 2D). SMAD4 (tumor suppressor) gene expression was significantly increased to 5-fold in treated cells (Figure 2E), and β-catenin (proto-oncogene) gene expression was significantly decreased to 0.6-fold in treated cells (Figure 2F). BRAF (proto-oncogene) gene expression was significantly decreased to 0.7-fold in treated cells (Figure 2G), and K-ras (the most commonly mutated gene in the ras family) gene expression was significantly decreased to 0.4-fold in treated cells (Figure 2H).

Discussion
Actinomycetes, especially Streptomyces sp., the most important source of bioactive compounds, are gram-positive bacteria found in fresh water, on plant surfaces, and in marine and terrestrial environments. The exploration of new bioactive compounds has led to the discovery of new strains which can produce novel, useful bioactive compounds (38,47,48). In this study, we focused on the anti-cancer activity of the diethyl ether-extracted compounds of Streptomyces Levis ABRIINW111 on colon cancer. We showed that the diethyl ether-extracted compounds have an effect on the expression of the Bcl-2, P53, SOX2, KLF4, β-catenin, Smad4, K-ras and BRAF genes. P53 is a tumor suppressor; by controlling cell cycle progression and apoptosis and by inhibiting angiogenesis, it is able to maintain genomic stability. Also, studies have revealed that the Bcl-2 family controls apoptosis through the activation of Bax or the inhibition of Bcl-2, and that P53 expression can inhibit Bcl-2 and Bcl-XL expression (13,(49)(50)(51). Our results showed that overexpression of P53 in colon cancer cells treated with the extracted metabolites could downregulate the anti-apoptotic gene Bcl-2, so it could induce apoptosis through P53-dependent pathways. SOX2, a member of the SOX gene family, is expressed in human colon cancer. High expression of SOX2 is correlated with a poor prognosis, relapse, and lower survival of patients with colon cancer (52,53). On the other hand, studies have reported that KLF4, as a tumor suppressor, plays key roles during differentiation, proliferation and apoptosis (54)(55)(56)(57)(58)(59). There is strong evidence that in colonic adenomas and carcinomas a reduction of the protein and mRNA levels of KLF4 is observed in comparison with normal colonic tissues (60). Our results showed that after 48 h, the extracted metabolites could decrease the expression of SOX2, whereas the fold expression of KLF4 was increased. All of the tumors exhibited increased β-catenin protein compared with normal tissues. It was demonstrated that with the stimulation of epithelial cells through epidermal growth factor (EGF), β- and γ-catenin become tyrosine-phosphorylated.
Additionally, a direct association of β-catenin with the EGF-receptor (EGF-R) was shown in vitro (61). The TGF-β signaling pathway transmits growth-inhibitory signals from the cell surface to the nucleus, and Smad4/Dpc4 is a key element of the TGF-β signaling pathway. Mutations in SMAD4 have been reported in human pancreatic and colorectal tumors (27,28,62). In this study, after 48 h of treatment, the fold expression of β-catenin was significantly decreased and the fold expression of SMAD4 was significantly increased. K-ras, a member of the RAS family of genes, is one of the most notable proto-oncogenes in colon cancer. Activated K-ras activates BRAF as a primary downstream target protein. BRAF, a serine-threonine protein kinase, acts as a mediator of the K-ras signal toward downstream effectors such as mitogen-activated protein (MAP) kinase to increase cell proliferation. Thus, alteration of K-ras seems to promote colon-cancer formation (63,64). Here we showed that the K-ras and BRAF fold expression were decreased in colon cancer cells treated with the extracted metabolites. These findings show that the crude extracted metabolites have antiproliferative activity and can inhibit cancer cell proliferation. In summary, we have demonstrated that the diethyl ether-extracted metabolites of Streptomyces Levis ABRIINW111 have anti-carcinogenic effects on colon cancer and can alter the expression of anti-apoptotic genes, pro-apoptotic genes and oncogenes in treated cells. Also, the extracted metabolites, as natural products, can be a good choice for cancer therapy, but more studies are required to characterize the exact structure of the metabolites and validate the clinical significance of our findings.
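As a brief illustration of the relative-quantification arithmetic described in the Methods (the 2^-ΔΔCt calculation normalized to GAPDH), the sketch below computes a fold change from hypothetical Ct values; the numbers are invented for illustration and are not data from this study.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one target gene normalized to GAPDH.
fc = fold_change_ddct(ct_target_treated=24.9, ct_ref_treated=18.0,
                      ct_target_control=22.7, ct_ref_control=18.0)
print(f"Fold change relative to untreated control: {fc:.2f}")
```

A result below 1 corresponds to decreased expression in treated cells and a result above 1 to increased expression, which is how the fold-expression values reported in the Results should be read.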
2018-04-03T00:37:46.097Z
2018-02-06T00:00:00.000
{ "year": 2018, "sha1": "74e4c9731ee438ffd40dfa4c1332e3602c573c26", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "74e4c9731ee438ffd40dfa4c1332e3602c573c26", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
145090880
pes2o/s2orc
v3-fos-license
Using Applied Theatre as a Tool to Address Netizenship This paper charts the ways in which a researcher uses applied theatre practice as a tool to address netizenship issues in the advancement of digital age by documenting a workshop he co-facilitated with graduate students at the University of Porto during the Future Places conference in 2013. The workshop used applied theatre both to catalyze intellectual discussion on netizenship and to create creative performance that embodied the concepts discussed. Applied Theatre approach is used to exploring issues of netizenship because, similar to social media, it blurs the lines between performer and audience. The results and analysis of the workshop demonstrate that applied theatre was a successful tool to address netizenship issues. Introduction Traditionally speaking, citizenship implies a legal status within a nation/state, and political, social and civil rights as defined by that nation's laws.An array of neighborhoods, political parties and communal/social clubs were established to bring together affiliated citizens face to face to form NGOs and civic interaction that would enhance/foster connecting citizens in local communities.The spread of digital media, however, and internet invite citizens to reconsider their citizenship behaviours (Bimber, 2006;Hardy & Scheufele, 2007).Citizens start to assert their legal rights and build their social connections through social media platforms and thus forming a new form of citizenship that will be addressed in this paper as netizenship.A netizen is defined generally as a user/citizen of the internet (Omotoyinbo, 2014).Nowadays, cyberspace becomes a site for forming new identities, new social connections, and new meanings for belonging-a site for A "new social institution" (M.Hauben & R. Hauben, 1998).With social media being the epitome of communication and connection to others and their lives (Castro, 2008), we start to find netizens coming together through their media connection at times to replace and at other times to complement lively face-to-face participation with a digital one. 
While the notion of netizen is being developed and practiced, theatre in general-and applied theatre in particular -can be used as a creative instrument that can help us build a narrative for this phenomena and allow us to question its effects.Applied theatre offers a site that can ask, embody, and stimulate dialogue about an issue.Although the two forms seem drastically different, applied theatre shares core characteristics of operating within digital world.First, both deal with the realm of representation, as people present a persona to their public in both applied theatre and the internet.Also, both merge the performance site with the spectators' site.In applied theatre, more than in conventional theatre, the lines between performer and spectator are blurred, which is why the influential proponent Boal (1985) called participants "spect/actors".Similarly, interdisciplinary scholar Kathleen Irwin (2011) states that the internet offers a new site for representation that blurs the distinction between the watched and the watcher.These reasons suggest why using applied theatre as an instrument might contribute to the growing discourse on netizenship.This article traces a project, a workshop in a festival, which asks the following questions: Can applied theatre be helpful in initiating a discussion about the effects of internet on social lives?In what ways can applied theatre interweave art and netizenship, or expressive communication and netizenship? Applied Theatre Applied Theatre is broadly defined as the use of theatre techniques among a specific community to encourage the members of that community to rethink collectively about, and embody, a specific issue.Applied Theatre researcher Taylor (2003) proposes that Applied Theatre helps in facilitating a dialogue, healing an emotional wound, or processing a specific issue of significant importance within a community.Taylor argues that applied theatre "operates from a central transformative principle that shares much with other participatory and community theatre movements, where a central emphasis is on the applications of theatre to help people reflect more critically on the kind of society in which they want to live" (Taylor, 2003, p. 1).As a teaching method, the form can therefore be useful for stimulating thinking about netizenship and community.In applied theatre, even though there might be a final product at the end, the focus is still centered on the process and on the journey of learning and discovery.The pedagogical approach emphasizes asking the learners questions, rather than presenting them with answers, as happens more in traditional didactic approaches to teaching.Most scholars and practitioners who write about applied theatre agree that the function of Applied Theatre is not necessarily to create a performance at the end of the work, but rather to create dialogue and opportunities for the community to reflect upon the issue or concern being discussed through that performance.It serves as a means of giving voice where people are not heard and to make them more articulate and expressive about their problems (Taylor, 2003).In the words of Taylor, "Applied Theatre offers a space where "individuals connect with and support one another and where opportunities are provided for groups to voice who they are and what they aspire to become" (Taylor, 2003, p. xviii). 
Future Places (Note 1) An applied theatre workshop took place in Future Places, a festival that has recurred every year since 2008 in Porto, Portugal (Note 2), which explored the relation of Digital Media to culture, and the evolution and development of this relationship.The festival is a meeting point for widely diverse participants, drawn from all disciplines and fields of knowledge and experience, which showcases their artistic talents and encourages artistic performances that reflect perspectives on the conference theme.Performances aim to transform both the approach of the performance makers and the attending audience on certain issues.In the 2013 Future Days Festival (Note 3), for instance, the theme was citizenship and how it emerges digitally and socially, as well as how the two interplay to yield certain emotions and perspectives.Future Places is a journey, showcasing the journeys of others by bringing in an entire new understanding that will lead to future places.As an Applied Theatre practitioner, I presented a workshop at the Festival with a co-facilitator, Dr. Susan Todd, a lecturer in St. Edwards University in Austin, Texas, and the Director of Weird Sisters Women Collective Theater in Austin. Key Concepts and Literature Review The following two concepts are used to lay the foundational themes of the workshop, and complement the concepts participants discussed during the workshops: Participatory Theatre The Progressive use of theatre for social purposes has been theorized mainly by the artists or community organizers who use the theatre techniques in their practice, although they have always been influenced by social movements, activism, and social theory.One key model is Augusto Boal's (1985) Theatre of the Oppressed technique in which theatre can be used as a tool to address social issues.In Theatre of the Oppressed, Boal challenges some theatre theories of the past, especially the relationship between the spectator and the stage.Boal shows how ruling classes employed traditional theatre to take control out of the hands of ordinary citizens, using the example of Greek drama.He then presents Brechtian and Marxian theatre and explains how they reverse this trend.Theatre of the Oppressed is divided into two parts: the first part is theoretical where he critiques the theories of Western Drama based on Aristotle and lays out his own theory and concepts; the second part is an account of Boal's work with the People's Theater in Peru, and his experiments in the Arena Theater of Sao Paulo, Brazil.Boal regards his Theatre of the Oppressed techniques as stimuli for change: "The theatre is not revolutionary itself, but it is surely a rehearsal for the revolution" (Boal, 1985, p. 122).In Boal's work, the community usually is facing a socio/political challenge, and theatre becomes a tool to allow citizens to civically dialogue about this challenge.Similar to Boal's work is Rohd's (1998), which offers an outline for creating an applied theatre program.Rohd believes that everyone can participate in this kind of work, and provides a model that can be adapted across range of subject matter.According to Rohd, theatre training is usually done in three parts: warm up, bridge-building activities and then activating material.The focus of this kind of theatre is on the community, using theatre as a tool to create social bonds and belonging; all of these themes were examined in the Future Places workshop within the framework of social media, and the advancement of the internet. 
Netizenship
Netizenship can be described as the practices (social, political, etc.) that internet users engage in on the internet. Film scholar Mark Poster defines the netizen by analogy with the citizen of the nation: where the citizen is the subject of the nation-state, the netizen is the subject of the cyber state. Media scholar Mackinnon (2012) claims that it is "no longer sufficient for people to assert their rights and responsibilities as citizens of nation-states" (p. 201). She calls for people to assert themselves as netizens. Her call came as an acknowledgment of the growth of the internet and its effects on people's social and political behavior. Social studies scholar Chachage (2010) also acknowledges netizenship practice as an empowerment tool for civil society organizations. Education scholar Özlem (2014) goes further by connecting netizenship to civic virtues. He argues, for example, that social studies teachers should be role models as both citizens and netizens. Netizenship practice is not restricted to political obligations or education; there are many channels netizens can use to affect public policies, such as blogs, online campaigns, and others. Also, because netizens form communities using social media platforms such as Facebook and Twitter, such communities are international, since they do not, for the most part, limit themselves to geographical locations; netizens also use social media platforms to meet new people and expand their social circles. The most recent site of social connection is the virtual, where netizens build social networks, maintain social ties and develop social empathy for each other, all within the virtual realm. The question is: to what extent has netizenship practice been able to replace the traditional, live, face-to-face social capital practices developed by citizens?
Methodology
The two facilitators in this workshop adopted the work of both Boal and Rohd to create a systematic method for helping participants approach their questions: first, by getting them engaged in physical activities that force them to focus on human interaction instead of electronic devices; then by asking them to immerse themselves in actual chat sessions in real time during the workshop, which were used for constructing stories based on their chats; and finally, by embodying the stories in performances. The workshop was divided into two parts: the first part lasted two days and was dedicated to brainstorming and framing concepts and themes. The second part stretched over three days, during which participants acted out their ideas as stories, to collectively create a performance based on the ideas generated in the first two days. There were about 20 participants on the first day and 10 who participated in the performance. Most of the participants were recruited through the conference organizers and ranged from graduate students (MA and Ph.D. students in the Multimedia Design program at the University of Porto) to theatre artists from the Porto community. Expertise and professionalism were not criteria, as variety and diversity were crucially needed to reflect as many layers of society as possible. Workshop facilitation was divided between the author, who focused on leading the sessions for generating ideas, and Dr.
Todd, who helped in writing the final script of the stories produced by participants. Both facilitators collaborated with participants to direct the performance. While leading the workshop, the facilitators asked the participants to engage in real, live "chats" during the workshop, meaning that participants were allowed to chat on social media platforms during the workshop while they were building a story for their performance. These live chats were later incorporated into the story. During the workshop, while participants were engaged in these real-time chats, the facilitators noted the participants' involvement in what Charnet and Veyrier (2008) call "parallel activities", where participants checked their emails and browsed websites while executing the facilitators' instructions.
Personality Signature Improvisation
Theatre workshops begin with warm-up exercises, which resemble group games. For the first warm-up of this workshop, the object of the game is for each person to move her body in a distinct and unusual way while saying her name. Participants stand in a circle, and each performs her name and movement, and all other participants repeat her sound and movement. This simple exercise usually creates laughter among the participants and in turn generates familiarity, collectivity and bonding. The game forces participants to engage in a bodily experience that moves them away from their electronic devices. It allows participants to temporarily drift away from their habitual virtual "parallel" activities online, which usually include checking their emails and social media updates.
Join Me If
In the next exercise, another common warm-up, the entire group stands in one line on one side of the room, side by side, facing the opposite wall. Anyone in the group crosses the room and says "Join me if you... like me!". The blank could be any statement, simple ("Join me if you chat for more than three hours every day like me!") or complex ("Join me if you believe that the meaning of citizenship will be changed because of the advancement of new technology like me"). Others follow the initiator across the room. Each participant then explains their position. The exercise is important because it brings people together based on certain similarities among a widely diverse group of participants. In this instance, participants raised and discussed topics about netizenship and virtual media: what makes chatting in the virtual world different from a live conversation, the definition of citizenship, and what defines active citizenship. Unlike the first exercise, which forces participants to go back to their inner selves to find a way to present their bodies, this exercise is meant to bring them together and form a sense of community among the group. They are invited to think about what connects them as a small group, and to find common threads that create a bond between them.
Frozen Image In the next exercise, a group of participants would assume a collective position and freeze their bodies to express an idea, a concept, or a story.For example, three people might arrange themselves in postures that together express an image of "……".This activity will start as a general idea, a concept, or a story that can be built upon from the previous activity's discussion.Other participants observe each small group's collective pose and, with facilitator guidance, articulate what they observe, whether concrete (two people are blocking a third from moving forward) or abstract (it shows oppression).Such image work is always a safe start for participants who are not actors.It allows them to get into their bodies and be expressive in presenting/representing their ideas.Observing images and talking about them is the core of the game since each participant interprets the image in a way that is consistent with her own beliefs and not necessarily according to what the creator of the image intended.The facilitator's role is to keep guiding participants toward focused, objective observation, asking, "Is this really what you see?" Or, "are you projecting your ideas on what you see?" Participants are required to use frozen images as the backbone for their scene work later.This exercise is a step further in building a stronger bond among participants.They are not only instructed to build a frozen image, but they are also asked to construct a meaning for it, and they hear how clearly that meaning was conveyed to the rest of the participants who observe and interpret it.In this way, they are led to achieve the goal of the exercise in a collective way that furthers their social connections. Dramatizing the Stories After the warm-ups, the workshop advances to the next phase, which differs somewhat according to the emphasis of the event?In this case, the second part turned to themes related to digital media.Though in more ordinary applied theatre workshops, participants are urged to leave their devices off and aside, here devices were part of the exercises.The facilitators divided participants into three groups, asking each group to actually get involved in an online chat on one of the social media platforms with their friends/families.The mood of the group instantly shifted; the minute participants were given instructions to start chatting, each one went to their own electronic devices and got engaged in their virtual social interaction outside the room.The two facilitators started checking on participants and speaking to them one by one ("side coaching"), for example by asking the participants to divert the chat into asking their chat partners about whether this chat session seemed similar to their live interaction. Leaving the creative work up to participants themselves, facilitators ask participants to use the earlier theatre games, especially the frozen image one, to help in dramatizing their scenes.Each group was responsible for developing a scene. 
First Scene The final performance consisted of three major scenes: The first scene was about David, who decides to create a Facebook profile for himself, but with a different character-a profile that resembles what he wants to be in real life.In this scene, David, makes an online connection through his profile with a 17-year-old girl (Wanda).During the scene, we see the character building his Facebook profile with all the characteristics that he is not.He decides to meet her and give her a book as a gift.At the last minute, he decides not to meet her and instead leaves the book at a coffee shop.Not knowing that the coffee shop is owned by her mother, he goes there and strikes an instant connection with her mother.This scene introduces us to a netizen who steps into the real world.As digital scholar Poster (2002) explains, the netizen status is temporary because no one lives in the digital world permanently; eventually, any netizen will leave the virtual world and go back to their "normal" social status-and it is that encounter where the scene starts.We witness a virtual connection in the making between a man and a teenage girl, and a life connection is in process between the same man and the mother of the girl.The scene presents the participants' view of how social interactions works in both virtual life and reality: While it takes a long time and effort to develop the connection between the man and the girl in virtual life, the connection in real life was instant and more natural.Also, virtual life in the scene was characterized as being superficial and thus misrepresenting the self intentionally. Second Scene In the second scene, we meet an international student who just moved to Europe, where we see her trying to talk to people in public spaces, in the street, in coffee shops, etc.However, she only faces disappointment and disregard from community members and decides to resort to online dating and strikes up a connection with a man.When she goes to see him, he does not appear.The scene then evolves in a random way where the international student gets to know a local friend who starts introducing her to the community and teaching her how to interact in the host community.The themes this scene poses are that of exclusion and inclusion; the character resorts to the digital world to find a solution for her exclusion in real life.Her main issue in the scene is that of assimilating herself in a live community that marginalizes her for being 'different'.Real life in this scene emerges as harsher than that on the internet.Unlike the first scene, where the character is dissembling as another persona online to be connected in the virtual world, the character in this scene uses her own personality in both the net and real life. Third Scene The third scene is about a person with a neutral mask.The character's name is simply 'person', and this person does not have a gender.Although he/she is in a perfectly healthy shape, he/she is sick from something.When he/she finally sees a doctor, the doctor diagnoses a "need to belong to someone" to cure him/herself.In the third scene, it's determined by a doctor that the problem 'person' is facing is a problem of belonging, and a problem of deciding what he or she needs to define him/her self now.A man?A woman? Christian?Muslim?etc. 
Discussion This project, in addition to its investigation on issues of netizenship, presents an innovative methodology to research a new form of interaction that is governing modern contemporary life.This interaction, based on virtual social connections, has the potential to divert citizens from their daily face-to-face connections towards online connections.While leading the workshop, facilitators invited participants to alternate between immersing themselves in a physical embodiment experience using applied theatre and theatre of the oppressed exercises (personality signature improvisations, and frozen image), and jumping back into the virtual world by asking them to engage and practice their netizenship during the workshop time. In a modern life where everything is marked by "being pressured", social media becomes a platform where individuals have less social communication pressure.On such a platform, netizens are encouraged to open up, such as in the case of David in the first scene, and boost their self-confidence to form a new community.The dramatization of the stories of this workshop demonstrates that social media can foster virtual community belonging and the yearning to become part of a virtual community.Being immersed in social media indeed helps participants to keep up with old friends, build a social capital, and use online as a tool to get information; yet, human interaction was still the most powerful and the most efficacious, according to the participants' stories.This was particularly evident in the first story where the character was engaged for the whole time with a netizen online; however, the minute he met someone real, he forgot about his virtual connection. The second story focuses on the issue of social inclusion, as a foreign student uses social media to both connect with her roots in her home and to break into her social life in her host community.The story was dramatized by the foreign student herself who acknowledged the many occasions that the workshop allowed her to create new social connections and thus build her social capital during the workshop, offering her a space for a personal transformation. The third story presented a bleak image for someone who is fully immersed in the virtual.In that story, netizen's question of belonging becomes very evident where the character is not able to connect, the human desire to be belonging to something/someone was stronger than the virtual belonging that was presented in the story.This same metaphor of belonging was very evident in the performance where selected audience members were invited to see a presentation of the three scenes. 
Facilitators work with a firm belief that art has always been a tool for change and in a world driven by digital world, the transformative effect of theater remains an authentic one.One of the participants said right after the performance ended that it is through physical embodiment, eye-to-eye contact, non-verbal communication that the performances came to life as the connections developed between everyone present that day (participants and audience members).Participants' embodiment and their stories created a heartwarming effect that left the facilitators, participants and audience in awe, and created what performance scholar Dolan (2001) calls utopian performative.In Dolan's description of utopian performative, she argues that theatre in general, whether applied or presented in a traditional form, offers tremendous communicative value.Theatre invites audience/participants to converse together and to engage in meaningful dialogue.It also allows audiences and participants to experience the unity that they share by being in the same place.Jill Dolan calls this communal feeling the utopian performative.According to Dolan, the utopian performative is a "feeling during performance that provides us with experiences of an ideal society" (Dolan, 2001, p. 455).When experienced in a community gathering, the utopian performative also enhances the feeling of belonging to a group; and according to Dolan, it is the combination of the liveliness, gathering, participating in the event and the immediacy of performance that create a utopian performative moment.Participants in this workshop acknowledge utopian performative moments that unite them as a group during theatre exercises and performance; such moments, help them build their social bond and belonging to the workshop community.Beyond the utopian performative and the transcendent experience of theatre, there is beauty of the real moment in time and space where people come together in a reality free from electronic devices.In a reflection session, participants were keen on stating that applied theatre and theatre of the oppressed techniques proved to be successful vehicles to investigate changing notions of netizenship and to challenge participants to rethink the role of social media in a rapidly changed world As the doctor diagnoses the person in the last scene, it was clear for everyone who participated in the workshop that the question of belonging is the determinant issue of netizenship.We strive to belong and if we fail to belong in real life, we go to the virtual reality in order to belong.As the group expressed in the workshop, and in stories, the digital world has not yet shifted the focus of community members from real life to virtual life.As stated by workshop participants, at its best, the digital world is being used as a new way of connecting people and as a vast storage of information.In a final reflection session, participants expressed that applied theatre was an excellent tool for them to communicate their ideas about citizenship, belonging and social media.Most of the participants did not have theatre experience and yet that did not hinder them from participating in theatre activities.They attributed that to the encouraging and accepting environment that the facilitators created.One participant, an international student from Cambodia, had been having challenges assimilating within her surroundings and was feeling alienated in her new environment.She reported that the workshop helped her build social bonds and establishes meaningful 
connections with her colleagues that continue to be part of her new reality. This student's story is an example of what might lead to a transformative and positive change in the life of a person who feels excluded and who is looking for opportunities to move from being a netizen to real-life interaction.
Randomized double blind clinical trial evaluating the Ellagic acid effects on insulin resistance, oxidative stress and sex hormones levels in women with polycystic ovarian syndrome
Objective: Given reports of the antioxidant properties of ellagic acid (EA), this study was designed to evaluate its effects on insulin resistance (IR), oxidative stress and sex hormone levels in women with polycystic ovarian syndrome (PCOS).
Methods: In this randomized, double-blind, placebo-controlled clinical trial, 60 patients were recruited. Patients were randomly allocated to consume a capsule containing 200 mg of EA per day (n = 30) or placebo (n = 30) for 8 weeks. Fasting blood sugar (FBS), insulin, IR, total cholesterol (TC), triglycerides (TG), low density lipoprotein (LDL), high density lipoprotein (HDL), total antioxidant capacity (TAC), malondialdehyde (MDA), C-reactive protein (CRP), tumor necrosis factor-alpha (TNF-α), sex hormones and anti-mullerian hormone (AMH) were measured at the beginning and end of the study.
Results: At the end of the study, the means of FBS, insulin, IR, TC, TG, LDL, MDA, CRP, TNF-α, total testosterone, prolactin and AMH were significantly decreased in the intervention group compared to the placebo group (P < 0.05). There was also a significant increase in mean TAC after supplementation with EA (P < 0.05). At the end of the study, no significant changes were observed in mean anthropometric factors, physical activity or food intake (P > 0.05).
Conclusion: EA supplementation can be helpful as a dietary supplement in women with PCOS through improvement in insulin resistance. This supplement may be used to reduce metabolic disorders in women.
Trial registration: This study was retrospectively registered (07-07-2019) in the Iranian website (www.irct.ir) for registration of clinical trials (IRCT20141025019669N12).
Introduction
Polycystic ovary syndrome (PCOS) is a common endocrine disorder that affects about 5-10% of women before menopause [1]. The prevalence of this disorder varies from 2 to 26% in different countries, which can be due to differences in the populations under study, the variety of criteria used to define it, inconsistent cut-off points, and the method used to apply each criterion [2]. In the Rotterdam area of the Netherlands, the prevalence is estimated to be up to 20%. The prevalence of this disease in Iran, according to the Rotterdam criteria, is 15.2% [3]. Lack of ovulation, or limited ovulation with elevated biological testosterone levels and increased production of ovarian androgens, are symptoms of this disorder [4]. In this condition, the patient is more likely to develop insulin resistance (IR), obesity and an increased risk of type 2 diabetes [5]. According to scientific studies, IR can cause oxidative stress in these patients. Oxidative stress contributes to increased androgen production, disruption of the development of ovarian follicles, and damage to ovarian tissue in patients with polycystic ovaries [6]. Oxidative stress indices in patients with PCOS are increased and the total antioxidant capacity of the blood is decreased [7]. Increased pro-inflammatory cytokines also play an important role in causing systemic IR and thereby worsening the syndrome [8]. The use of antioxidant compounds to reduce IR and chronic inflammation, and consequently to better manage PCOS, has been of particular interest in recent research [9].
Numerous studies today have demonstrated the impact of non-pharmacological treatments by modifying lifestyle on reproductive performance improvement and reducing cardiovascular metabolic risk factors [10]. Polyphenols as secondary plant metabolites are found in vegetables and fruits. Scientific evidence confirms the beneficial effects of polyphenols in reducing the complications of metabolic diseases. As potent antioxidants, they have protective and therapeutic effects in managing the effects of oxidative stress by regulating inflammatory cytokines and enzymes, enhancing antioxidant defense and suppressing inflammatory pathways and their cellular signaling mechanisms [11]. Ellagic acid is one of the types of polyphenols in which the strong hydrogen bonding network acts as an electron acceptor, which in turn enables EA to participate in a number of reactions. This polyphenol is naturally found in numerous fruits and vegetables, including strawberries, red raspberries, pomegranates and grapes [12]. Ellagic acid can reduce the symptoms of chronic diseases such as dyslipidemia, IR in type 2 diabetes, and nonalcoholic fatty liver disease [13]. Despite advances in information about EA, the mechanism of its activity has not yet been discovered, which may be due to the complexity of its metabolism and depends on various factors. Due to the antioxidant and inflammatory effects that have been reported about EA and the lack of human studies of this polyphenol supplementation in PCOS, the present study aimed to investigate the effect of EA on blood glucose, IR, lipid profile, oxidative stress status, inflammatory factors, sex hormone levels and anti-mullerian hormone in women with PCOS. Participants This randomized, double-blinded, placebo-controlled clinical trial was done on 60 subjects, aged 18-45 years old. In this study, women with PCOS are referred to the Qazvin Kosar Hospital Specialist Center (from 2019-07-15 to 2019-10-20) with study clinical consultant (Gynecologist) meeting the inclusion criteria, the research topic, goals and method of the study were explained, then they received informed consent form if they wish to participate in this study. Women entered the study with at least two of the three Rotterdam criteria [14] to diagnose the syndrome as well as having a Body Mass Index (BMI) of less than 30 kg / m 2 . Patients with a history of abdominal surgery, as well as pregnant and lactating women and those who have been taking supplements in the last three months were not included in the study. Also having underlying illnesses like diabetes, severe psychiatric and behavioral disorders and usage of aspirin, warfarin, heparin and anti-inflammatory drugs (including non-steroids, steroids, antihistamines, and mast cell stabilizers) have been other exclusion criteria. The protocol of the study after approving with the ethic committee of Qazvin University of Medical Sciences (ethic code: IR.QUMS.REC.1398.033), Qazvin, Iran, was registered in the Iranian Registry of Clinical Trials website by the IRCT20141025019669N12 code. Design All patients met the inclusion criteria were randomly allocated consumed a capsule containing 200 mg of EA per day (n = 30) or placebo (n = 30) for 8 weeks. The shape, color and size of placebo were similar to the supplement capsule. Supplement was purchased from Supplement Spot and the placebo was made by School of Pharmacy, Tabriz University of Medical Sciences. 
Keywords: Ellagic acid, Insulin resistance, Oxidative stress, Anti-mullerian hormone, Polycystic ovarian syndrome
It should be noted that the effective dose selected for EA supplementation was taken from Falsaperela M et al. [15]. Since oral supplementation with EA has been shown to reduce inflammation (one of the main goals of this research project), this dose was chosen for this study. Recruitment of patients was done by simple random sampling using a table of random numbers. Based on BMI criteria, participants were divided into two groups using random blocks. In keeping with the double-blind design, the patient, the researcher and the specialist physician were blinded to the contents of the containers with respect to supplement and placebo. Questionnaires were completed covering basic demographic information and the clinical records of each individual, and the participants were also measured for height and weight. BMI was calculated by dividing the weight in kilograms by the height in meters squared. To examine the effect of EA supplementation more closely, all patients were advised not to alter their diet and physical activity habits during the study and to avoid foods high in EA; these foods were given to participants as a list. To control for the confounding effects of diet and physical activity, patients were interviewed at the beginning and at the end of the study using a 3-day dietary recall questionnaire, and only subjects with a moderate physical activity level were enrolled. Relevant questionnaires were filled out to control for diet and physical activity as confounding factors. The three-day food recall questionnaire and the Nutritionist IV program (San Bruno, CA), modified for Iranian food composition, were used to calculate food intake and dietary intake, respectively. The International Physical Activity Questionnaire (IPAQ) was also completed to estimate physical activity. Raw IPAQ data were converted to metabolic equivalent-minutes/week using existing guidelines [16]. Patients were followed up by telephone once every 7 days to monitor their consumption of the capsules and to prevent dropout. To fully monitor supplement use, participants were asked to hand over the supplement bottle to the researcher at the end of the study, and anyone found to have used the capsules improperly would be excluded from the study.
Laboratory methods
After 10-12 h of overnight fasting, blood samples were collected from patients, two to three days after the capsules were taken. Each sample contained 10 cc of blood. Serum was frozen at − 20 °C and then stored at − 80 °C for later laboratory evaluation. Fasting blood glucose (FBS) concentration was measured by the enzymatic method using an Abbott Alcyon 300 (USA) auto analyzer with a Pars-Azmone kit (Tehran, Iran). Plasma insulin was measured using a chemiluminescent immunoassay method (LIAISON analyzer (310,360), DiaSorin S.p.A., Vercelli, Italy). HOMA-IR was calculated according to the following formula: HOMA-IR = (fasting insulin (U/ml) × FBS (mg/dl))/405 [17] (a small worked example is sketched below).
Reproductive hormones assay
Serum testosterone and prolactin were assayed using commercial radioimmunoassay kits (Kavoshyar Co., Tehran, Iran). These commercial kits had previously been used with inter-assay and intra-assay variations of less than 10%. The reference ranges for testosterone and prolactin (PRL) are 10 to 35 nmol/l.
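As a small illustration of the HOMA-IR computation quoted in the Laboratory methods above, the following sketch simply applies that formula; the function name and the example insulin and glucose values are illustrative only and are not data from the trial.

```python
def homa_ir(fasting_insulin_uU_per_ml: float, fasting_glucose_mg_per_dl: float) -> float:
    """HOMA-IR = (fasting insulin (uU/ml) x fasting glucose (mg/dl)) / 405,
    as given in the Laboratory methods."""
    return (fasting_insulin_uU_per_ml * fasting_glucose_mg_per_dl) / 405.0


# Illustrative values only, not measurements from the study.
print(round(homa_ir(12.0, 95.0), 2))  # (12 * 95) / 405 -> 2.81
```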
Luteinizing hormone (LH) was measured by immunochemiluminometric assay, in which intra-assay and interassay coefficients of variation were 3.4% and 3.8%, respectively. The normal LH range is 1.5 to 9.3 IU/l. Follicle-stimulating hormone (FSH) was also measured using immunochemiluminometric assay with an inter-assay and intra-assay coefficient of variation of 3.2% and 6.7%, respectively. The normal FSH range is 1.4 to18.1 IU/l. Serum levels of the Anti-mullerian hormone (AMH) were measured by ELISA (Beckman's kit). The normal range for serum levels of the AMH is 0.08-16 ng / ml. The mean coefficient of inter-assay and intra-assay for this method was 5.4 and 5.6 percent, respectively. Sample size calculation The level of the Malondialdehyde factor was used to calculate sample size before and after the administration of pomegranate extract in the study of Hosseini B et al. [18] using the following formula: Where a (type 1 error) is 0.05, b (type 2 error) is 0.2, S1 and S2 are the variances of MDA, and ∆ represent the difference means of MDA. (MDA before supplementation: 3.3 ± 1, MDA after supplementation: 2.1 ± 0.7). Thus, the power for detecting differences between the 2 groups for various outcomes in the present study was 80%. The sample size was obtained 18 in each group. Considering the drop out in participants during the study, 30 people were considered for each group. Statistical Analysis Statistical analyses were conducted using SPSS version 20. All data were presented as mean ± SD and were checked for normality by the Kolmogorov-Smirnov test. Due to the normal distribution of variables, the paired sample t-test and the independent sample t-test were applied to analyze differences in variables within and between groups, respectively. The p < 0.05 was considered statistically significant. Results Among 70 volunteered patients, ten women dropped out because they did not meet the requirements. A total of 60 people were included in study, and 30 were equally involved in the intervention and placebo groups. During this investigation, two patients of the placebo group and one of the intervention group did not complete the research process and dropped out of the study for personal reasons (Fig. 1). Patient compliance in this study was 95%. The final analysis was done on the subjects who finished the study. The characteristics of the participants are shown in Table 1. There was no statistically significant difference in the baseline characteristics of the participants between the two groups. The mean age of participants in intervention and placebo groups were 25.74 ± 1.19 and 26.09 ± 1.53 years old, respectively (P > 0.05). Also, there was no significant difference between the two groups in terms of anthropometric factors in the first study. The mean and standard deviation of weight (70.63 ± 4.15 vs 69.71 ± 5.11 (kg)), Height (162.09 ± 8.33 vs 160.71 ± 9.28 cm) and BMI (26.88 ± 0.59 vs 26.99 ± 0.61 (K g/m 2 )) were in the intervention and placebo groups, respectively. Also, there was no significant difference in the amount of physical activity (37 ± 3.29 vs 36.01 ± 3.5 (met-h/week)) between groups at the beginning of the study (p > 0.05; Table 1). It is also noteworthy that at the end of the study, there was no difference in terms of weight, BMI and physical activity between the two groups as well as within the group Table 1). The mean of energy and macronutrient intake at baseline and the end of the study were shown in Table 2. 
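The sample-size description earlier in this section lists the quantities entering the calculation (the type 1 error a, the type 2 error b, the MDA variances S1 and S2, and the mean difference Δ) but does not show the formula itself. Those quantities correspond to the conventional formula for comparing two means, reproduced below as a hedged reconstruction rather than a quotation from the study:

```latex
n \;=\; \frac{\bigl(Z_{1-\alpha/2} + Z_{1-\beta}\bigr)^{2}\,\bigl(S_{1}^{2} + S_{2}^{2}\bigr)}{\Delta^{2}},
\qquad Z_{1-\alpha/2} \approx 1.96 \ (\alpha = 0.05), \quad Z_{1-\beta} \approx 0.84 \ (\beta = 0.2).
```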
As shown, there were no statistically significant difference between the groups in terms of average daily intake the energy, protein, fat, saturated fatty acids, unsaturated fatty acids and some micronutrients at the beginning and the end of the study (P > 0.05). The effect of EA supplementation on insulin resistance in women with PCOS have been presented in Table 3. As shown in the table, there were no significant differences between these factors at the beginning of the study. In the end of the study, EA reduced the FBS, Insulin and IR, significantly compared the beginning the study (P < 0.05). In the placebo group, mean changes in FBS and IR at the end of the study were not significant compared to the beginning study (P > 0.05, Table 3). Also the effect of EA supplementation on lipid profile in PCO patients have been presented in Table 4. As shown in this table, there were no significant differences between blood fat components at the beginning of the study. In the end of the study, EA reduced the TC, TG and LDL significantly compared the beginning the study (P < 0.05). However, changes in mean HDL at the end of study compared to the first of it, were not significant in the intervention group (P > 0.05). The effects of EA oral supplement on stress oxidative status and inflammatory factors in patients were summarized in Table 5. Reduction of MDA, CRP and TNF-α levels in intervention group after supplementation was significant (P < 0.05). Also, TAC levels were significantly increased in group that received EA (P < 0.05). These differences were not significant in placebo group at the end of the study (P > 0.05). As the results of the study show, changes in stress oxidative and inflammatory factors in the intervention group were statistically significant compared to the placebo group at the end of the study (P < 0.05, Table 5). Pre-and post-study data on serum sex hormones levels in the two intervention groups and placebo can be seen in Table 6. There were no significant differences in baseline levels of LH, FSH, PRL, total testosterone and AMH between the two groups. In the intervention group, EA supplementation at the end of the study resulted in a statistically significant decrease in total testosterone, PRL and AMH hormone levels compared with the beginning of the study (P < 0.05). Changes in the mean of the FSH and LH levels at the end of the study were not significant compared to the beginning of the study (P > 0.05). These differences were not significant in placebo group at the end of the study (P > 0.05, Table 6). Safety and Adverse Events No side effects were observed due to the oral administration of EA in any participants. EA resulted in no clinically significant changes in vital signs, urinalyses, serum chemical values or hematological values. Discussion Changing lifestyle and dietary pattern towards sedentary lifestyle and poor nutrition can lead to insulin resistance. Compared to people with normal physiological condition, it is perhaps the most important aspect of IR, the worsening of the disease condition, and a significant increase in mortality [19]. One of the disorders in which IR is a key part of its pathological mechanism is PCOS. The mechanism and main cause of this disorder have not been explained in general, but considering the results of scientific studies, IR, oxidative stress and inflammation are among the first-degree defendants [20]. 
The aim of this investigation, was to evaluate the EA effects on blood glucose, insulin resistance, lipid profile, oxidative stress status, inflammatory factors, sex hormone levels and anti-mullerian hormone in women with PCOS. After 8 weeks, supplementation with EA, significantly decreased the FBS, IR. Also, at the end of the study, reduction of TC, TG and LDL changes in the intervention group was significant. Some scientific studies have reported an increase in insulin due to an increase in androgens, and some scientific sources have assumed the exact opposite of this Eq. [21]. However, a decrease in insulin levels and, consequently, a decrease in IR has reduced androgens and better ovarian function [4]. Elevated insulin levels usually cause hyperlipidemia in women with this syndrome. Continuity of these conditions and lack of improvement in biochemical factors can lead to cardiovascular disease [3]. According to the World Health Organization, women with PCOS are more likely to develop myocardial infarction [5]. It seems that the core of all these disorders is IR [22]. The function of genes involved in the secretion and modulation of insulin role, such as genes associated with Sirtuin1 and glucose transporter 2 (Glut2), as well as their effect on insulin signaling, such as glucose transporter 4 (Glut4) in muscle and peroxisome proliferator-activated receptor-gamma (PPARγ) in fat cells, is mainly significantly influenced by dietary polyphenols [23]. EA as a polyphenol and strong antioxidant, has not been studied as a dietary supplement in women with PCOS (according to a search on a scientific database), but its helpful effects on glycemic status have been shown in other metabolic disorders. The clinical trial study of Babaeian et al. [24], that conducted on patients with type 2 diabetes, intervention group drank 240 ml unsweetened pomegranate juice daily. The results of the study showed a significant decrease in insulin resistance at the end of the study, whereas no significant changes were found for serum glucose in this group. Low dose of EA in pomegranate juice or short study time for this dose, may be reasons for the lack of significant effect on glycemic indexes. Esmaeilinezhad et al. [25] investigated the effect of pomegranate juice on cardiovascular risk factors in women with PCOS. Participants received daily pomegranate juice or placebo beverage. Daily consumption of pomegranate juice improved the metabolic outcomes of TG, LDL, HDL and TC, in patients. The possible mechanism of EA that lowers blood cholesterol may be due to its effect on reducing absorption and increasing cholesterol excretion through the feces. The effect of this polyphenol on important and key enzymes in cholesterol metabolism, including hydroxy-methyl-glutaryl-CoA (HMG-COA) reductase and Acyltransferase, has also been reported in laboratory and clinical studies. EA, on the other hand, increases the persistence of beneficial bacteria in the gastrointestinal tract by reducing oxidative stress products, and thus reducing the excess plasma fat by beneficial bacteria can be helpful [26,27]. On the other hand, the condition that worsens IR, exacerbation of oxidative stress status and increased inflammation in these patients. After glycation reactions and formation of advanced glycation end products (AGEs), production of ROS occurs rapidly increased. This reaction can damage insulin-secreting cells in the pancreas [28]. 
According to the results of studies, receiving polyphenols can increase the prescription of PPAR-γ and in this way, they can help reduce the chronic complications of PCOS [29]. By summarizing the cellular, experimental, and clinical studies, it can be concluded that relationship between IR and oxidative stress is mutual. In the meantime, inflammation can make both sides of the equation worse [30]. The results of our study indicated that EA significantly improved the stress oxidative index and decreased the inflammatory factors. Abnormalities in oxidative stress index in women with PCOS were reported in the meta-analysis study of Murri et al. [31]. Also, the results of many studies showed high biomarker such as MDA and low indicators of antioxidant system such as TAC in these patients [32]. Goudarzi et al. [33]. investigated the protective effect of EA on sodium arsenic-induced neurotoxicity in rats. They observed that EA administration significantly increased MDA levels, IL1β levels and TNF α levels in the brain compared to the control group. EA administration also increased TAC levels compared to the control group. DNA damage and subsequent harmful genetic changes occur as a result of free radical attacks on DNA. This can lead to DNA methylation and silence of tumor suppressor genes. Therefore, oxidative stress can be a factor in worsening PCOS and even increasing the risk of other metabolic diseases such as cancer in women with this syndrome. One of the preinflammatory mediators is nitric oxide (NO), which can cause damage and inflammation due to overproduction. Increased synthesis of the NO synthase enzyme, which is present in macrophages, is increased in PCOS, which can lead to inflammation and increased insulin resistance [34]. In cellular and animal studies, EA has been reported to reduce NO production [35]. Also, one of the enzymes produced by pre-inflammatory cytokines is the Cyclooxygenase 2 (COX-2) enzyme, which in itself accelerates cascading reactions and releases large amounts of Prostaglandin E2 (PGE2) into the inflamed tissue. However, by inhibiting COX-2 production, inflammation can be reduced [36]. The inhibitory effects of EA on PGE2 production have been reported. Pomegranate-derived polyphenols have also reduced COX-2 production from macrophage cells [37]. Also, EA supplementation resulted in a statistically significant decrease in total testosterone, PRL and AMH hormone levels compared with the beginning of the study. Changes in the mean of the FSH and LH levels at the end of the study were not significant. Hyperandrogenism women usually occur after an increase in insulin [22]. Excessive increase in sex hormones impairs ovulation and increases AMH. Together, these biochemical symptoms will be a prognosis for PCOS [38]. Taking into account the above mechanism and the results of animal and clinical studies, the normalization of the ovulation cycle can be achieved by increasing insulin sensitivity and ultimately reducing sex hormones and lowering AMH [39]. One of the most effective factors in reducing IR is the use of plant polyphenols. So far, no clinical studies have examined the effects of EA polyphenol on mentioned factors, but some micronutrients have been studied with antioxidant properties. The results of Shokrpour et al. [40]. study indicated that receiving CoQ10 in women with PCOS significantly decreased the level of AMH. In this clinical trial, 30 women with PCOS consumed CoQ10 pills 100 mg daily for 3 months. Also, in AbdulameerYahya et al. [41]. 
study, taking the vitamin D and CoQ10 oral supplement in PCOS patients ameliorated the hormonal profile, oxidative marker, and ovulation outcome. Their results showed that these antioxidants significantly decreased the LH and AMH after eight weeks. Also, studies that have examined the effects of IR-reducing drugs such as metformin in women with this syndrome have also reported a significant reduction in sex hormones such as LH, testosterone and AMH at the end of the study [42]. This clinical study, like other studies, can have strengths and weaknesses. One of the strengths of this study is that for the first time the effect of pure supplement of EA was investigated in women with PCOS. Also, the design of this study as a double-blind randomized clinical trial that had parallel groups, making the results of this study remarkable. It is also important to control confounder factors such as weight, physical activity, and food intake in studies that conducted on metabolic diseases, which was done in this research. However, due to the low budget and the limited number of participants and the duration of the intervention, the results of this study have been statistically analyzed, it should be noted that in order to draw clinical conclusions and examine the clinical effects, it is necessary to conduct studies with a larger number of participants and intervention period. Conclusion In conclusion, the results of this study indicated that 8 weeks of supplementation with EA, 200 mg/day, reduced the levels of blood sugar, blood lipids and IR in PCOS patients. Also, with the ameliorating in the status of oxidative stress and inflammatory status, at the end of the study, we saw a significant decrease in the amount of AMH in these patients. These results provide evidence to support the view that polyphenol antioxidant group with reducing the biochemical factors, can play an important role in helping to control the condition of this syndrome. Nevertheless, further studies are needed to provide additional evidences.
Emotionally Informed Hate Speech Detection: A Multi-target Perspective Hate Speech and harassment are widespread in online communication, due to users' freedom and anonymity and the lack of regulation provided by social media platforms. Hate speech is topically focused (misogyny, sexism, racism, xenophobia, homophobia, etc.), and each specific manifestation of hate speech targets different vulnerable groups based on characteristics such as gender (misogyny, sexism), ethnicity, race, religion (xenophobia, racism, Islamophobia), sexual orientation (homophobia), and so on. Most automatic hate speech detection approaches cast the problem into a binary classification task without addressing either the topical focus or the target-oriented nature of hate speech. In this paper, we propose to tackle, for the first time, hate speech detection from a multi-target perspective. We leverage manually annotated datasets, to investigate the problem of transferring knowledge from different datasets with different topical focuses and targets. Our contribution is threefold: (1) we explore the ability of hate speech detection models to capture common properties from topic-generic datasets and transfer this knowledge to recognize specific manifestations of hate speech; (2) we experiment with the development of models to detect both topics (racism, xenophobia, sexism, misogyny) and hate speech targets, going beyond standard binary classification, to investigate how to detect hate speech at a finer level of granularity and how to transfer knowledge across different topics and targets; and (3) we study the impact of affective knowledge encoded in sentic computing resources (SenticNet, EmoSenticNet) and in semantically structured hate lexicons (HurtLex) in determining specific manifestations of hate speech. We experimented with different neural models including multitask approaches. Our study shows that: (1) training a model on a combination of several (training sets from several) topic-specific datasets is more effective than training a model on a topic-generic dataset; (2) the multi-task approach outperforms a single-task model when detecting both the hatefulness of a tweet and its topical focus in the context of a multi-label classification approach; and (3) the models incorporating EmoSenticNet emotions, the first level emotions of SenticNet, a blend of SenticNet and EmoSenticNet emotions or affective features based on Hurtlex, obtained the best results. Our results demonstrate that multi-target hate speech detection from existing datasets is feasible, which is a first step towards hate speech detection for a specific topic/target when dedicated annotated data are missing. Moreover, we prove that domain-independent affective knowledge, injected into our models, helps finer-grained hate speech detection. Introduction Nowadays, people increasingly use social networking sites, not only as their main source of information, but also as media to post content, sharing their feelings and opinions. Social media is convenient, as sites allow users to reach people worldwide, which could potentially facilitate a positive and constructive conversation between users. However, this phenomenon has a downside, as there are more and more episodes of hate speech (HS hereafter) and harassment in online communication [10]. This is due especially to the freedom and anonymity given to users and to the lack of effective regulations provided by the social network platforms. 
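The abstract above contrasts the usual binary framing of hate speech detection with a multi-task set-up that jointly predicts whether a tweet is hateful and what its topical focus is, with lexicon-based affective features (for example HurtLex or SenticNet categories) injected as extra inputs. The PyTorch sketch below is only a schematic illustration of that kind of architecture; the layer sizes, variable names and the lexicon feature vector are assumptions made for the example and do not reproduce the models evaluated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTargetHateModel(nn.Module):
    """Shared text encoder with two heads: hate vs. non-hate (main task) and
    topical focus (auxiliary task). Affective lexicon features are concatenated
    with the sentence representation before both heads. All sizes are illustrative."""

    def __init__(self, vocab_size=20000, emb_dim=128, hid_dim=64, n_topics=4, lex_dim=17):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        shared_dim = 2 * hid_dim + lex_dim
        self.hate_head = nn.Linear(shared_dim, 2)           # hateful vs. not hateful
        self.topic_head = nn.Linear(shared_dim, n_topics)   # e.g. sexism, misogyny, racism, xenophobia

    def forward(self, token_ids, lex_feats):
        emb = self.embedding(token_ids)                     # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.encoder(emb)                     # h_n: (2, batch, hid_dim)
        sentence = torch.cat([h_n[0], h_n[1]], dim=-1)      # (batch, 2 * hid_dim)
        shared = torch.cat([sentence, lex_feats], dim=-1)   # append affective features
        return self.hate_head(shared), self.topic_head(shared)

# Joint training objective: main-task loss plus a weighted auxiliary loss.
model = MultiTargetHateModel()
tokens = torch.randint(1, 20000, (8, 30))    # a toy batch of 8 "tweets"
lex = torch.rand(8, 17)                      # toy affective feature vectors
hate_logits, topic_logits = model(tokens, lex)
loss = F.cross_entropy(hate_logits, torch.randint(0, 2, (8,))) \
       + 0.5 * F.cross_entropy(topic_logits, torch.randint(0, 4, (8,)))
loss.backward()
```

A single shared encoder trained with such a joint loss is one straightforward way to let the topic/target signal inform the binary hate decision, which is the intuition behind the multi-task results summarized in the abstract.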
There has been a growing interest in using artificial intelligence and Natural Language Processing (NLP) to address social and ethical issues. Let us mention the latest trends on AI for social good [40,41], where the emphasis is on developing applications to maximize "good" social impacts while minimizing the likelihood of harm and disparagement to those belonging to vulnerable categories. See, for example, the literature on suicidal ideation detection, devoted to early intervention [48]. There are also recent works on the prevention of sexual harassment [68], sexual discrimination [67], cyberbullying and trolling [81], devoted to contrasting different kinds of abusive behavior targeting different groups and preventing unfair discrimination. In spite of there being no universally accepted definition of HS, this study employs the most common one. HS is defined here as any type of communication that is abusive, insulting, intimidating, and/or that incites violence or discrimination, and that disparages a person or a vulnerable group based on characteristics such as ethnicity, gender, sexual orientation and religion [33]. Accordingly, HS may have different topical focuses: misogyny, sexism, racism, xenophobia and homophobia or Islamophobia, which we refer to as topics. For each topic, hateful content is directed towards specific targets that represent the community (individuals or groups) receiving the hatred. For example, black people and white people are possible targets when the topical focus is racism [117], while women are the targets when the topical focus is misogyny or sexism [78]. HS is thus, by definition, target-oriented, as shown in the following tweets taken from [5,25,133], where the targets are underlined. These examples also show that different targets involve different ways of linguistically expressing hateful content such as references to racial or sexist stereotypes, the use of negative and positive emotions, swearing terms, and the presence of other phenomena such as envy and ugliness. 1 (1) Women who are feminist are the ugly bitches who cant find a man for themselves (2) Islam is 1000 years of contributing nothing to mankind but murder and hatred. Given the vast amount of social media data produced every minute 2 , manually monitoring social media content is impossible. It is, instead, necessary to detect HS automatically. To this end, many studies in the field exploit supervised approaches generally casting HS detection as a binary classification problem (i.e., abusive/hateful vs. not abusive/not hateful) [43,64,115] relying on several manually annotated datasets that can be grouped into one of these categories: -Topic-generic datasets, with a broad range of HS without limiting it to specific targets [21,44,52]. For example, [21] consider aggressive and bullying in their annotation scheme, while [44] looks, in addition, for other expressions of online abuse such as offensive, abusive and hateful speech. -Topic-specific datasets, where the HS category (racism, sexism, etc.) is known in advance (i.e., drives the data gathering process) and is often labeled. The HS targets, either person-directed or group-directed 3 , can be considered as oriented, containing, as they do, hateful content towards groups of targets or specific targets. For example, in [132] scholars sampled data for multiple targets, that is racism and sexism for, respectively, religious/ethnic minorities HS and sexual/gender (male and female) HS. 
Others focus on single targets including, for instance, sampling for the misogyny topic, targeting women [23,38,39]. Similarly, for the xenophobia and racism topics the target are groups discriminated against on the grounds of ethnicity (e.g., immigrants [5], ethnic minorities [125,133], religious communities [128], Jewish communities [145], etc.). Independently from the datasets that are used, all existing systems share two common characteristics. First, they are trained to predict the presence of general, target-independent HS, without addressing the problem of the variety of aspects related to both the topical focus and target-oriented nature of HS. Second, systems are built, optimized, and evaluated based on a single dataset, one that is either topic-generic or topic-specific. In order to address this issue and in order to improve the performance of the models, recent studies propose cross-domain classification, where the domain is used synonymously with dataset [65,99,134,137]. The idea consists in using a one-to-one configuration by training a system on a given dataset and testing the system on another one, using domain adaptation techniques. Most existing works map between fine-grained schemes (that are specific for each dataset) and a unified set of tags, usually composed of a positive and negative label to account for the heterogeneity of labels across datasets. Again, this binarization fails to discriminate among the multiple HS targets. Thus, it has become difficult to measure the generalization power of such systems and, more specifically, their ability to adapt their predictions in the presence of novel or different topics and targets [126]. An immediate but rather expensive solution for handling a new specific target is that of building new target-oriented datasets from scratch; as has been done in previous studies [61]. In this paper, we propose instead a novel multi-target HS detection approach by leveraging existing manually annotated datasets. These will enable the model to transfer knowledge from different datasets with different topics and targets. In the context of offensive content moderation, identifying the topical focus and the targeted community of hateful contents would be of great interest for two important reasons. First, it will allow us to detect HS for specific topics/targets when dedicated data are missing. Second, it will prevent widespread stereotypes and help to develop social policies for protecting victims, especially in response to trigger events [69]. For example, with the recent outbreak of COVID-19, a spike in racist and xenophobic messages targeting Asians in Western countries was observed. A system specifically designed to detect HS that targets migrants in a pre-COVID-19 context would most likely have failed at picking out this post-COVID-19 HS. Indeed, most of the messages would not have been moderated as the type of language learned during training was for other groups, the most frequent targets of HS in pre-COVID times. In this paper, we consider different manifestations of HS with different topical focuses, including sexism, misogyny, racism, and xenophobia. Each specific instance targets different vulnerable groups based on characteristics such as gender (sexism and misogyny), ethnicity, religion and race (xenophobia and racism). The focus on gendered and ethnicity-based HS is due, in part, to the wide availability of English corpora developed by the computational linguistics community for those targets. 
But it also depends on the fact that most monitoring exercises by institutions countering online HS in different countries and territories (e.g., European Commission [34]) report ethnic-based hatred (including anti-migrant hatred) and gender-based hatred as the most common type of online HS [22]. We propose to undertake the following challenges: 1. Explore the ability of HS detection models to capture common properties from generic HS datasets and to transfer this knowledge to recognize specific manifestations of hate. We propose several deep learning models and experiment with binary classification using two generic corpora. We evaluate their ability to detect HS in four topically focused datasets: sexism, misogyny, racism, and xenophobia. Our results show that training on topic-generic datasets generally fails to account for topic-specific linguistic properties. 2. Experiment with the development of models for detecting both the topics (racism, xenophobia, sexism, misogyny) and the targets (gender, ethnicity) of HS going beyond standard binary classification. We aim to investigate (a) how to detect HS at a finer level of granularity and (b) how to transfer knowledge across different types of HS. We rely on multiple topicspecific datasets and develop, in addition to the deep learning models designed to address the first challenge, a multitask architecture that has been shown to be quite effective in cross-domain sentiment analysis [12,146]. We consider several experimental scenarios: first, ones where the topics/targets that will be classified in a multilabel fashion are present in the training data; and second, in cross-topic/target scenarios, where we try to predict a specific target/topic, training on data where that particular topic/target is unseen. Our results demonstrate that learning HS classification (main task) and the topic/target of HS (auxiliary task) simultaneously achieves very good results. This result is an encouraging first step, demonstrating that multi-target HS detection from existing datasets is feasible. This is true even in the absence of target-specific data towards a given target, something which can be of crucial importance when annotated data about the target are missing. 3. Study the impact of affective semantic resources in determining specific manifestations of HS. Affects and emotions were proven to be useful in many NLP tasks such as irony and sarcasm detection [57,98,120], stance classification [71,72], information credibility assessment [49,50], and also sentiment analysis [20,76] in general. In this work, we also want to explore the affective characteristics of the language used in HS, continuing the very recent work by [109], which suggests a strong relationship between abusive behavior and the emotional state of the speaker. We experiment with three affect resources as extra-features on top of several deep learning architectures: sentic computing [14] resources (SenticNet [18], EmoSenticNet [106]) and semantically structured hate lexicons (HurtLex [6]). SenticNet has not, to the best of our knowledge, been used in HS detection. For each resource, we propose a systematic evaluation of the emotional categories that are the most productive for our tasks. Our results show that injecting domain-independent affective knowledge into our models helps finer-grained HS detection. Related Work We present the related work in four parts. 
First, we briefly introduce the affective computing and sentiment analysis research field, in order to provide readers with a broader context for NLP literature related to the analysis and to the recognition of affective states and emotions in texts. Second, relevant prior works specifically related to HS detection are presented. Third, we review the domain adaptation study in sentiment analysis and abusive language detection, something particularly important in bringing out the novelty of our contribution. Finally, we provide an overview of the few attempts to exploit affective information in improving abusive language detection. Affective Computing and Sentiment Analysis Affective computing, a development of the last decades, is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects: i.e., the experience of feelings or emotions. Today, identifying affective states from text is regarded as being fundamental for several domains, from human-computer interaction to artificial intelligence, from the social sciences to software engineering [13]. The wide popularity of social media, which facilitates users publishing and sharing contents-providing accessible ways for expressing feelings and opinions about anything, anytime-also gave a major boost to this research area. This was especially true within the NLP field. Here, the abundance of data allowed the research community to tackle more in-depth, longstanding questions such as understanding, measuring and monitoring the sentiment of users towards certain topics or events, expressed in mere texts or through visual and vocal modalities [107]. Indeed, robust and effective approaches are made possible by the rapid progress in supervised learning technologies and the huge amount of user-generated content available online. Such techniques are typically motivated by the need to extract user opinions on a given product or, say, in surveying political views and they often exploit knowledge encoded in affective resources, such as sentiment and emotion lexicons and ontologies. The interest in lexical knowledge about the multi-faceted and the fine-grained facets of affect encoded in such resources is, by no means, limited to sentiment analysis. The use of such affective resources has also recently been explored in other related tasks, such as personality [80,86] and irony detection [35,120] or author profiling [100]. Concerning abusive language detection, which is the specific task of interest here, there are attempts at exploiting emotion signals to improve the detection of this kind of phenomena (cf. Affective Information in Abusive Language Detection Tasks). No one has investigated the impact of emotion features on HS detection, which is one of the challenges tackled in our paper. Supervised and Semi-Supervised Learning for Social Data Analysis The field has recently been surveyed in [7,142]. The vast majority of the analyzed papers describe approaches to sentiment analysis based on supervised learning, where there is a text classification task at the sentence or message level, focused mostly on detecting from text valence or sentiment, either using a binary value or with a strength/intensity component coupled with the sentiment [123]. In particular, deep learning-based methods are becoming very popular due to their high performance, and they have been increasingly applied in sentiment analysis [82,142]. 
Furthermore, there is an ever-increasing awareness of the need to take a holistic approach to sentiment analysis [17] by handling the many finer-grained tasks involved in extracting meaning, polarity and specific emotions from texts. This includes the detection of irony and sarcasm [57,66,120]. Due to a large amount of available (but unlabeled) data, many studies have recently highlighted the importance of exploring unsupervised and semi-supervised machine learning techniques for sentiment analysis tasks. For example in [60], the authors exploited both labeled and unlabeled commonsense data. Their proposed affective reasoning architecture is based on Support Vector Machines (SVM) and the merged use of random projection scaling in a vector space model and was exploited for emotion recognition tasks. Emotion Categorization Models and Affective Resources Still, despite the maturity of the field, choosing the right model for operationalizing affective states is not a trivial task. Research in sensing sentiment from texts has put the major emphasis on recognizing polarities (positive, negative, neutral orientation). However, comments and opinions are usually directed toward a specific target or aspect of interest, and as such, finer-grained tasks can be envisioned. For instance, aspect-based sentiment analysis identifies the aspects of given target entities and the sentiment expressed for each aspect [105]. At the same time, the stance detection emerging task focuses on detecting what particular stance a user takes toward a specific target, something that is particularly interesting in political debates [89]. Moreover, given the wide variety of affective states, recent studies advocate a finer-grained investigation of the role of emotions, as well as the importance of other affect dimensions such as emotional intensity or activation. Depending on the specific research goals addressed, one might be interested in issuing a discrete label describing the affective state expressed (frustration, anger, joy, etc.) in accordance with different contexts of interaction and tasks. Emotions are transient and typically episodic, in the sense that, over time, they can come and go. This depends, of course, on all sorts of factors, factors which researchers might be interested in understanding and modeling according to a domain or task-specific research objectives. Both basic emotion theories, in the Plutchik-Ekman tradition [32,104], and dimensional models of emotions [112] provide a precious theoretical grounding for the development of lexical resources and computational models for affect extraction. Sentiment-related information is, indeed, often encoded in lexical resources, such as affective lists and corpora, where different nuances of affect are captured, such as sentiment polarity, emotional categories, and emotional dimensions [18,90,106]. These kinds of lexicons are usually lists of words to which a positive or negative or/and an emotion-related label (or score) is associated. Besides flat lists of affective words, lexical taxonomies have also been proposed, enriched with sentiment and/or emotion information [3,106]. However, there is a general tendency to go towards richer, finer-grained models. These will very possibly include complex emotions. This is especially the case in the context of data-driven and task-driven approaches, where restricting automatic detection to only a small set of basic emotions is too limited, not least in terms of actionable affective knowledge. 
This general tendency is also reflected in the development of semantically richer resources. These include and model semantic, conceptual, and affective information associated with multi-word natural language expressions, by enabling the concept-level analysis of sentiment and emotions conveyed in texts, like the ones belonging to the Sentic-Net family [15,18]. Moreover, when the task addressed is related to a specific portion of the affective space, domain-specific affective resources and lexicons can be envisioned. This is the case with abusive language detection, where the use of lexicons of hateful words [6] can lead to interesting results. Word Intensity and Polarity Disambiguation All such resources represent a rich and varied lexical knowledge about affect, under different perspectives, and virtually all sentiment analysis systems may incorporate lexical information derived from them 4 . However, many opinion keywords carry varying polarities in different contexts, posing huge challenges for sentiment analysis research. Contextual polarity ambiguity is an important still little studied problem in sentiment analysis. This has recently been addressed in [140], where a Bayesian model is proposed that uses opinion-level features to solve the polarity problem of sentiment-ambiguous words: intra-opinion features (i.e., the information that helps in thoroughly conveying the opinion); and inter-opinion features (i.e., the information connecting two or more opinions). The intra-opinion features resolve the polarity of most sentiment words. The inter-opinion features usually play a secondary role, either by improving the confidence of a good prediction or by assisting in calculations when some of the features are missing. Another interesting challenge for the field is related to the possibility of measuring sentiment and emotion intensity, which is of paramount importance in analyzing the finerlevel details of emotions and sentiments [85] in real-world applications. A novel solution to this problem is proposed in [2], where, in order to leverage the various advantages of different supervised systems, a Multi-Layer Perceptron (MLP)-based ensemble framework for predicting the intensity of sentiments (in financial microblog messages and news headlines) and emotions (in tweets) is proposed. The ensemble model combines the output of three deep learning models (Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU)) and a feature-based Support Vector Regression (SVR) model. The SVR model utilizes word and character TF-IDF, TF-IDF weighted word vectors, and a diverse set of lexicon features, such as the positive and negative word count (extracted from MPQA [135] and Bing Liu [29]), the positive, negative, and aggregate scores of each word extracted from NRC Hashtag Sentiment and NRC Sentiment140 [88], as well as the sum of the positive, negative and aggregate scores of each word computed from SentiWordNet [3]. For emotion intensity prediction, the authors also include: the word count of each of the emotions from NRC Word-Emotion Association lexicon [87]; the sum of association scores for the words with the emotions extracted from NRC Hashtag Emotion [84]; the aggregate of positive and negative word scores computed from AFINN [94]; and the sentiment score of each sentence returned by VADER [51]. The proposed framework shows good results with comparatively better performance over state-of-the-art systems. 
Hate Speech Detection in Online Communication
The automatic detection of online HS is not a simple task, especially because of the thin line between abusive language and freedom of speech. For example, the use of swear words could become an issue in HS detection [96,122], where their presence might lead to false positives: for instance, when they are used in a non-abusive way in humor, emphasis, catharsis, and when conveying informality. But they could also become a strong signal for spotting HS, when they are used in an abusive context. A fair amount of work on HS detection has come from teams that participated in recent shared tasks such as HatEval [5], Automatic Misogyny Identification (AMI) [38,39], and Hate Speech and Offensive Content Identification (HASOC) [77]. HatEval was introduced at SemEval 2019 and focused on the detection of hateful messages on Twitter directed towards two specific targets: immigrants and women. This was done from a multilingual 7 perspective (English and Spanish). The best-performing system in English HatEval [62] exploited a straightforward SVM with a Radial Basis Function (RBF) kernel that uses Google's Universal Sentence Encoder [19] feature representation. AMI, another shared task run in two different evaluation campaigns in 2018 (IberEval and Evalita 8 ), focuses on detecting HS that targets women. In English, the best results were achieved by traditional models for both AMI-IberEval (an SVM with several handcrafted features [97]) and AMI-Evalita (LR coupled with a vector representation that concatenates sentence embeddings, TF-IDF and average word embeddings [113]). Finally, HASOC, an HS and offensive language identification shared task at FIRE 2019, covers three languages: English, German, and Hindi. For English, the best performance was achieved by an LSTM network with ordered neurons and an attention mechanism [130]. All the aforementioned shared tasks provided datasets in languages other than English: i.e., Italian, Spanish, Hindi, and German. Other languages used in shared tasks include Italian (HasSpeeDe [8], which focuses on detecting HS towards immigrants) and German (GermEval [138], which focuses on offensive language identification). Most of the works listed here model their tasks as a binary classification, with the aim of predicting the abusiveness of a given utterance per se (i.e., without specifying either a topic or a target). In this work, we classify a message as hateful or not-hateful. But we go further. We also want to detect the HS topic and the target to whom the message is addressed. To the best of our knowledge, we are the first to address target-based computational HS detection, continuing recent corpus-based linguistic studies on categorizing HS and its associated targets [117].
Domain Adaptation in Abusive Language Detection
The study of HS detection is multifaceted, and available datasets feature different focuses and targets. Despite limitations, some works have tried to bridge this range by proposing a domain adaptation approach to transfer knowledge from one dataset to other datasets with different topical focuses.
5 https://fasttext.cc/ 6 https://code.google.com/archive/p/word2vec/ 7 In this case, "multilingual" refers to the fact that two datasets were made available as part of the competition. The submitted systems were trained and tested separately on each language. 8 For more details regarding the collection and annotation of the data, the reader is invited to refer to Datasets.
The first attempt to deal with this issue was reported in [134]. They used the multi-task learning (MTL) approach, arguing that it would be possible to share knowledge between two or more objective functions to leverage information encoded in one abusive language dataset to better-fit others. [65] proposed using a traditional machine learning approach for classifying abusive language in a cross-domain setting, in order to get better system interpretability. This work also explored the use of the frustratingly simple domain adaptation (FEDA) framework [24] to facilitate domain sharing between different datasets. The main finding of this work is that the model did not generalize well when applied to various domains, even when trained on a much bigger out-domain dataset. [111] adopted transfer learning as a domain adaptation approach by exploiting the LSTM network coupled with ELMo embeddings. LSTM has also been used by [99], who employed it with a list of abusive keywords from the Hurtlex lexicon [6], as a proxy for transferring knowledge across different datasets. Their main findings are: (i) that the model trained on more than one general abusive language dataset will produce more robust predictions; and (ii) that HurtLex is able to boost the system performance in the cross-domain setting. Bidirectional Encoder Representations from Transformers (BERT) [28] was also applied in cross-domain abusive language detection [122]. This work found that BERT can share knowledge between one domain dataset and other domains, in the context of transfer learning. They argue that the main difficulty in the cross-domain classification of abusive language is caused by dataset issues and their biases. It is consequently impossible for datasets to capture the phenomenon of abusive language in its entirety. [92] also investigated BERT by using new fine-tuning methods based on transfer learning, relying on Waseem [133] and Davidson [26] datasets in their experiments. Finally, Hat-Eval, a recently shared task [5], also provided an HS dataset that covers two different targets, women and immigrants. Therefore, participants are required to build a target-agnostic model able to detect HS with more than one target (cf. Hate Speech Detection in Online Communication). Cross-domain classification approaches in abusive language detection share three common characteristics: (1) Dataset labels are aligned to deal with the varieties of annotation schemes. Hence, all datasets (be they topic-generic or topic-specific) share the same coarse-grained characterization of HS (i.e., hateful vs. non-hateful). (2) Systems follow a one-to-one configuration (i.e., they are trained on one dataset and tested on another) in order to analyze their robustness in generalizing the different phenomena contained in each dataset. (3) Predictions are binary, ignoring the target/topic nature of HS. In this work, we intend to focus on the different topics/targets in several datasets by proposing a multi-target HS classification task. To this end, instead of using the typical one-to-one configuration, we propose to solve the problem using a many-to-many configuration capable of identifying a given topic/target when trained in topic-generic or topic-specific datasets. The many-tomany configuration has already been shown to be quite effective in cross-domain aspect-based sentiment analysis [12,46,53,74,102,146] and is used here for the first time in an HS detection task. 
Affective Information in Abusive Language Detection Tasks Recently, some works exploiting emotion signals to improve abusive language detection have been carried out. The study by [114] proposed an architecture that uses the Emotion-Aware Attention (EA) mechanism to quantify the importance of each word based on the emotion conveyed by the text. They used DeepMoji model [37] and NRC Emotion Lexicon [87] to extract emotion information from the given texts. Their analysis of the results shows the importance of affective information in augmenting system performance. Similar conclusions have been drawn in [96] who exploited the NRC Emotion Lexicon [87] and EmoSenticNet [106]. Finally, the most recent work by [109] came up with a joint model of emotion and abusive language detection in a MTL setting. This led to significant improvements in abuse detection performance when evaluated in both the OffensEval 2019 [144] and Waseem and Hovy datasets [133]. As far as we know, no previous work has explored the impact of emotion features in predicting HS targets in a multi-target setting. We propose to employ EmoSenticNet, HurtLex, and for the first time, SenticNet. For each resource, we identify the emotion categories that are the most suitable for predicting a given topic/ target of HS detection. Datasets We experiment with seven available HS corpora from previous studies among which two are topic-generic (Davidson [26] and Founta [44]), and four are topic-specific about four different topics: misogyny (the AMI dataset collection from both IberEval [39] and Evalita [38]), misogyny and xenophobia (the HatEval dataset [5]), and racism and sexism (the Waseem dataset [133]). Each of these topics target either gender (sexism and misogyny) and/or ethnicity, religion or race (xenophobia and racism). In this section, we first detail the characteristics of each of the seven datasets, then provide general statistics. Datasets Description -Davidson. The dataset has been built by [26] and contains 24,783 tweets 9 manually annotated with three labels including hate speech, offensive, and neither. These tweets were sampled from a collection of 85.4 million tweets gathered using the Twitter search API, focusing on tweets containing keywords from HateBase 10 . The dataset was manually labeled by using the CrowdFlower platforms 11 , where at least three annotators annotated each tweet. With an inter-annotator agreement of 92%, the final label for each instance was assigned according to a majority vote. Only 5.8% of the total tweets were labeled as hate speech (cf. (5)) and 77.4% as offensive (cf. (6)), while the remaining 16.8% were labeled as not offensive. (5) #DTLA is trash because of non-Europeans are allowed to live there (6) What would y'all lil ugly bald headed bitches do if they stop making make-up & weave? -Founta. The dataset consists of 80,000 tweets 12 annotated with four mutually exclusive labels including abusive, hateful, spam and normal [44]. The original corpus of 30 millions tweets was collected from 30 March 2017 to 9 April 2017 by using the Twitter Stream API. For each tweet, the authors also extracted the meta-information and linguistic features in order to facilitate the filtering and sampling process. Annotation was done by five crowdworkers and the final dataset was composed of 11% tweets labeled as abusive (cf. (7)), 7.5% as hateful (cf. (8)), 59% as normal, and 22.5% as spam (cf. (9)). -HatEval. The dataset consists of 13,000 tweets distributed across two different targets: immigrants (cf. 
(14)) and women (cf. (15)) [5]. Most of the tweets that target women were derived from the AMI corpora, while the remainder of the dataset was collected over a period of three months (from July to September 2018) by employing the same approaches as AMI. The dataset was annotated by using the Figure Eight crowdsourcing platform. In each instance, the annotators were asked to specify whether a tweet conveys HS or not towards any given targets. The annotators were also asked to indicate whether the author of the tweet was aggressive and to identify the target of the tweet (i.e., a specific individual or a group of people). Although the inter-annotator agreement obtained for each category (0.83, 0.73, and 0.70, respectively) was quite high, the final label was assigned based on a majority vote by adding two expert annotations to the crowd-annotated data. The final distribution of the dataset includes 13,000 tweets (6,500 for each target). Table 1 provides a general overview of the datasets, along with the labels used in their annotation schemes. We can observe that the classes are imbalanced in most datasets, where the majority class is the negative class (non-HS), except for the AMI collection (AMI-IberEval and AMI-Evalita) and Davidson. Datasets Statistics For our experiments, the corpora have been divided into train and test sets keeping the same tweet distribution as the original papers. This was done in order to make better comparisons with the state-of-the-art results 14 . Table 2 and Table 3 provide the distribution of instances in these two sets. As one of the research questions that we want to address involves the possibility of transferring knowledge from several topic-specific datasets into another topic-specific dataset where the topic is unseen, we decided to merge under the same topic (i.e., misogyny) both the AMI corpora and HatEval dataset 15 . In the next three sections, we show how these datasets have been used to develop models that are able to generalize HS across multiple datasets (cf. Generalizing Hate Speech Phenomena Across Multiple Datasets); transfer knowledge across topics and targets (cf. Multi-target Hate Speech Detection); and leverage emotions to improve multitarget HS detection (cf. Emotion-aware Multi-target Hate Speech Detection). The various forms of bias introduced when building these datasets are discussed in Discussions and Error Analysis, as they may have a strong impact on the multi-target experiments proposed in this paper. Methodology We aim to answer two main research questions: -Are models able to capture common properties of HS and transfer this knowledge from topic-generic datasets to topic-specific datasets? -How do these models compare with ones that are trained on topic-specific datasets? To this end, we propose the following two configurations: - These two configurations are cast as a binary classification task, where the system needs to predict whether a given tweet is hateful (1) or not (0). To this end, we experiment with several performing state of the art models for HS detection. This is a necessary first step in measuring to what extent existing models are capable of transferring knowledge across different HS datasets, be they topic-generic or topic-specific. Models Our models are as follows 17 : -Baseline. This model is straight-forward based on a linear support vector classifier (LSVC). The use of linear kernel is based on [63], who argue that the linear kernel has an advantage for text classification. 
They observe that text representation features are frequently linearly separable. Hence, the baseline is an LSVC with unigram, bigram, and trigram TF-IDF features.
-LSTM. This model uses an LSTM network [59] with an architecture consisting of several layers, starting with an embedding layer representing the input to the LSTM network (128 units), followed by a dense layer (64 units) with a ReLU activation function. The final layer consists of a dense layer with sigmoid activation producing the final prediction. In order to get the best possible results, we optimized the batch size (16, 32, 64, 128) and the number of epochs (1-5). We used as input either randomly initialized embeddings (LSTM) or FastText English word vectors with an embedding dimension of 300 [54] pre-trained on Wikipedia and Common Crawl (LSTM FastText ). LSTM, a type of Recurrent Neural Network, has already been proven to be a robust architecture in HS detection [4].
-CNN FastText . This model was inspired by [4,45]. It uses FastText English word vectors (with a dimension of 300) and three 1D convolutional layers, each one using 100 filters and a stride of 1, but with different window sizes (respectively, 2, 3, and 4) in order to capture different scales of correlation between words, with a ReLU activation function. We further downsample the output of these layers by a 1D max-pooling layer and feed its output into the final dense layer. All the experiments run for a maximum of 100 epochs, with a patience of 10 and a batch size of 32.
-ELMo. This model employs ELMo [103], a deep contextualized word representation, which shows a significant improvement in the study of HS [111]. Since we implement ELMo as a Keras layer, we were able to add more layers after the word embedding layer. The latter is followed by a dense layer (256 units) and a dropout rate of 0.1, before being passed to another dense layer (2 units) with a sigmoid activation function, which produces the final prediction. This architecture is fine-tuned based on the number of epochs (1-15) and batch size (16, 32, 64, and 128), and optimized by using the Adam optimizer.
-BERT. This model uses the pre-trained BERT model (BERT-Base, Cased) [28], on top of which we added an untrained layer of neurons. We then used HuggingFace's PyTorch implementation of BERT [139], which we trained for three epochs with a learning rate of 2e-5 and the AdamW optimizer. It is based on [122], where it achieved the best results for the task of abusive language detection.
Table 4 and Table 5 present our results when training, respectively, on Founta and Davidson. We provide our results in terms of accuracy (A), macro-averaged F-score ( F 1 ), precision (P) and recall (R), with the best results in terms of F 1 presented in bold.
Results for the Top G ⟶ Top S Configuration
We recall here that we focus on learning topic-generic HS properties and test how neural models are able to extrapolate this information in order to detect topic-specific HS. The results show that ELMo outperformed the other models on the Waseem dataset (Racism Waseem , Sexism Waseem ) when trained on Davidson. When trained on Founta, CNN FastText obtained the best results for Sexism Waseem and BERT for Racism Waseem . For most of the topic-specific testing datasets (the AMI corpora in particular), the results are comparable across the two general HS training datasets (Davidson and Founta), with higher disparities being observed in the Waseem results.
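For concreteness, the LSVC baseline against which all these models are compared can be assembled in a few lines with scikit-learn. The sketch below is a minimal illustration only: the load_split helper and the file paths are hypothetical placeholders, not part of the original experimental pipeline.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Hypothetical helper: each line of the file holds "label<TAB>tweet text",
# with label 1 for hateful and 0 for non-hateful.
def load_split(path):
    texts, labels = [], []
    with open(path, encoding="utf8") as f:
        for line in f:
            label, text = line.rstrip("\n").split("\t", 1)
            texts.append(text)
            labels.append(int(label))
    return texts, labels

train_texts, train_labels = load_split("train.tsv")  # e.g., combined topic-specific training sets
test_texts, test_labels = load_split("test.tsv")     # e.g., one topic-specific test set

# LSVC baseline: unigram-to-trigram TF-IDF features fed to a linear SVM.
baseline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3), lowercase=True)),
    ("svm", LinearSVC()),
])
baseline.fit(train_texts, train_labels)
predictions = baseline.predict(test_texts)

# Accuracy, per-class and macro-averaged precision, recall and F1.
print(classification_report(test_labels, predictions, digits=3))

As discussed below, this simple word n-gram model remains a surprisingly strong reference point for the neural architectures.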
Table 6 presents the results obtained when focusing on learning topic-specific HS properties by combining the training sets of all datasets. The overall picture of the results shows that our baseline (i.e., LSVC) performed quite well when compared to the other models: it trails the best-performing model for each specific topic by between 1% and 11% in terms of F1 score. For most topics, the best results were obtained by BERT, the only exception being the Misogyny HatEval dataset, where ELMo obtained the best results (with a difference of almost 2% in terms of F1 score). We note that Misogyny HatEval is the only dataset for which ELMo achieved good results. For all the other datasets, its results are low, even lower than the baseline. We also note that state-of-the-art models achieved good results for both topics in the Waseem dataset, whereas they attain lower results when tested on the xenophobia topic from the HatEval dataset. However, our results are similar to the ones obtained by state-of-the-art baselines for Waseem (F1=0.739 [133]) and HatEval (F1=0.451 [5]).
Results for the Top S ⟶ Top S Configuration
In order to assess whether training on topic-specific data improves the results beyond those achieved by training on topic-generic data, we compare our results with both the baselines and the best submitted systems in the shared task competitions where these data were used (only available for the AMI corpora). The comparison was made by training either on a topic-generic dataset (i.e., Top G ⟶ Top S ) or on all topic-specific datasets (i.e., Top S ⟶ Top S ), and testing on the test data provided by the organizers of AMI-IberEval and AMI-Evalita. Table 7 shows our results. When compared to the AMI Misogyny Evalita and Misogyny IberEval baselines provided in terms of accuracy (respectively, 0.605 and 0.783), we observe that with the topic-specific training approach BERT achieved more than a 10% increase for both datasets, while with the topic-generic training approach the only improvement (0.5%) is brought by BERT trained on the Davidson dataset (for Misogyny Evalita ). When comparing the results with the best submitted systems (0.704 and 0.913), we still observe a small improvement achieved by BERT trained on topic-specific data for the Misogyny Evalita task, though all the other system results were lower. These results confirm that a model trained with a combination of several datasets with different topical focuses is more robust than a model trained on a topic-generic dataset.
Methodology
Now that we have established that topic-generic datasets are not adequate for capturing specific instances of HS, we turn to multi-target HS detection. Let T be either a topic (Top) or a target (Tag). We propose the following configurations:
-T S ⟶ T S seen : We model the task as a multi-label classification problem with two sub-configurations:
(a) Top S ⟶ Top S seen : Detect the hatefulness of a given tweet and the topic to which the HS belongs. Each tweet is thus classified into eight different classes, representing the combination of the four topics (racism, sexism, misogyny, xenophobia) and two HS classes (hate speech vs. non-hate speech). As in the previous experiments (cf. Methodology), we combine all the training sets of the topic-specific datasets for training. Then, all the models are tested on the test set of each topic-specific dataset.
(b) Tag S ⟶ Tag S seen : It is similar to (a), except that it concerns the multi-label classification of targets.
Therefore, we merge the topic-specific train and test sets that share the same target (i.e., women: Sexism Waseem and Misogyny all ; ethnicity: Racism Waseem and Xenophobia HatEval ).
-T S ⟶ T S unseen : We model the task as a binary classification task to predict a topic/target not previously seen at training time. We also design two experiments here:
(c) Top S ⟶ Top S unseen : It uses three out of the four topic datasets for training and the remaining topic dataset for testing (i.e., the dataset left out at training time). For example, to detect the hatefulness of misogynistic messages, we train on the following topics: racism (Racism Waseem ), sexism (Sexism Waseem ) and xenophobia (Xenophobia HatEval ), then we test on the misogyny topic (i.e., comprising the AMI corpora and Misogyny HatEval ).
(d) Tag S ⟶ Tag S unseen : It is similar to (c), except that it concerns targets. For example, to detect hateful messages that target women, we train by using the datasets related to the target race (i.e., Racism Waseem and Xenophobia HatEval ) and test on the four datasets related to the target women (i.e., Sexism Waseem , the two AMI corpora and Misogyny HatEval ).
Both T S ⟶ T S seen (multi-label classification) and T S ⟶ T S unseen (binary classification) rely on the six models presented in Methodology (i.e., LSVC, LSTM, LSTM FastText , CNN FastText , ELMo, and BERT). In addition, for T S ⟶ T S seen we propose a multi-task setting that consists of two classifiers trained jointly by multi-task objectives. The first classifier predicts whether the tweet is hateful or not (0 or 1), while the second one predicts the topic of HS (racism (0), sexism (1), misogyny (2), and xenophobia (3)). The final label prediction is broken down into eight classes (cf. Table 8). The multi-task systems are compared to the previous six models used here as strong baselines. MTL has already been successfully applied in cross-domain aspect-based sentiment analysis (cf. Affective Computing and Sentiment Analysis and Domain Adaptation in Abusive Language Detection for related work in the field) and is used here for the first time in an HS detection task, making a parallel between the sentiment domain (e.g., restaurant, book, hotel, etc.) and the topic/target of HS. Indeed, the main problem in sentiment analysis is the large performance decline in the out-domain setting (when a system is trained and tested on different dataset domains) compared to the in-domain setting (when a system is trained and tested on a dataset within the same domain). Similar challenges also arise in the abusive language detection task, where a system struggles to obtain a robust performance when trained and tested on different datasets. These usually have different focuses on the phenomena they want to capture.
Models
We experiment with state-of-the-art models (i.e., LSVC, LSTM, LSTM FastText , CNN FastText , ELMo, and BERT, as described in Models) and extend them with a multi-task architecture, as described below:
-LSTM multi-task . First, we investigate successful approaches in multi-domain sentiment analysis, a research area that is more mature in dealing with multi-domain classification. For example, [74] used Bi-LSTM networks with adversarial training [46,53] for learning general representations from the data of all domains.
[102] proposed a co-training approach for jointly learning the representation from both domain-invariant and domain-specific representations, while [12,146] adopted a MTL approach. Among existing models, we decided to re-implement the system proposed in [12], as it has been shown to outperform existing models in one of the most used multi-domain sentiment classification benchmark dataset [73]. This system consists of two Bi-LSTM classifiers, each of them classifying the domain (domain classifier) and the sentiment (sentiment classifier) of the tweets at the same time, with the loss of both tasks being added up. The output of the Bi-LSTM domain classifier is concatenated to the word embedding layer of the sentiment classifier to acquire a domain-aware representation. Then, the output of average pooling (after Bi-LSTMs) of the domain classifier is also concatenated to the sentiment classifier to obtain domain-aware attention. We extend the architecture proposed in [12]. The first Bi-LSTM predicts whether a given tweet is hateful or not, while the second one predicts the topic/target of HS. In this way, we obtain both topic/target-aware representation and topic/ target-aware attention when predicting whether the tweet is hateful or not. For experiments, we fine-tune this model by varying the number of epochs (1-15) and batch-sizes (16, 32, 64, and 128) while keeping the same configurations as in [12]. The model input is either embeddings randomly initialized (LSTM multi-task ) or FastText pre-trained embeddings, (LSTM multi-task (FastText) ) 26 . -ELMo multi-task . We also modify our ELMo system (cf. Methodology) in order to be able to use it in multi-task setting. Therefore, we built two ELMo-based architectures to predict the hatefulness and topic/target of tweets. Each architecture starts with the ELMo embedding layer, followed by a dense layer with a ReLU activation function, before being passed into another dense layer with a sigmoid activation function to produce the final prediction. Since ELMo embeddings are not trainable, we could not get the topic/ target-aware representation as in the previous Bi-LSTMs model. We can only transfer knowledge by concatenating the output of the first dense layer of the topic/target classifier to the dense layer of the hateful classifier. In this way, we expect to get meaningful information about the topic/ target to classify the hatefulness of tweets. Again, we only tune the systems by optimizing the number of epochs and batch-sizes. -BERT multi-task . This model is similar to [75], where all tasks share and update the same low layers (i.e., BERT layers), except for the task-specific classification layer. In this architecture, after transferring the text to contextual embeddings in the shared layers and retrieving the first token hidden state of the shared BERT model, we apply a dropout of 0.1 and connect it to two different layers (corresponding to the two classification tasks: topic/target and hatefulness). To preserve individual task-specific loss functions and to perform training at the same time, we defined the losses for the two tasks separately and optimized them jointly (by backpropagating their sum through the model). This model was trained for three epochs with a learning rate of 2e-5 and AdamW optimizer. Table 9 and Table 10 present the results obtained in the Top S ⟶ Top S seen configuration in which the testing topic was previously seen during training. Table 9 presents the baseline results while Table 10 the multi-task results. 
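To make the shared-encoder design of BERT multi-task concrete, the sketch below shows one plausible way to wire the two task heads over a shared BERT encoder and to sum their losses, using PyTorch and the HuggingFace Transformers library. The batch fields, the topic label encoding and the training-step details are illustrative assumptions rather than a reproduction of the exact implementation.

import torch
from torch import nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    """Shared BERT encoder with two task-specific heads:
    hatefulness (binary) and HS topic (racism/sexism/misogyny/xenophobia)."""

    def __init__(self, num_topics: int = 4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-cased")
        self.dropout = nn.Dropout(0.1)
        hidden = self.bert.config.hidden_size
        self.hate_head = nn.Linear(hidden, 2)          # hateful vs. non-hateful
        self.topic_head = nn.Linear(hidden, num_topics)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = self.dropout(outputs.last_hidden_state[:, 0])  # first-token ([CLS]) state
        return self.hate_head(cls), self.topic_head(cls)

# Joint optimization: the two cross-entropy losses are summed and backpropagated together.
model = MultiTaskBert()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

def training_step(batch):
    # `batch` is a hypothetical dict with tokenized inputs and the two gold labels.
    hate_logits, topic_logits = model(batch["input_ids"], batch["attention_mask"])
    loss = loss_fn(hate_logits, batch["hate_label"]) + loss_fn(topic_logits, batch["topic_label"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

With this setup, every gradient update reflects both the hatefulness objective and the topic/target objective, which is what allows the topic/target signal to shape the shared representation.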
We can observe that multi-task models are the best, outperforming all the baselines, the best systems being LSTM multi-task (FastText) and BERT multi-task . The results obtained on the Waseem dataset surpass all the others, which could be a consequence of the higher number of instances in this particular dataset when compared to the others. Overall, the best performance for the multi-topic HS detection task is achieved by BERT multi-task , which attains the best result in eight out of nine test datasets. Table 11 presents the results obtained for the Tag S ⟶ Tag S seen experiments in which the testing target was previously seen during training. The best result for the target women was obtained by CNN FastText , while for the target race LSTM multi-task (FastText) outperformed all the other models. Our results confirm our assumption that the multi-task approach is capable of a robust performance in a multi-topic experiment, proving its ability in transferring knowledge between different topics, as reported in previous cross-domain sentiment analysis studies. Results for the T S ⟶ T S unseen Configuration We begin by presenting the results in the Top S ⟶ Top S unseen experiments in which the testing topic was unseen during training. As shown in Table 12, we observe that in the absence of data annotated for a specific type of HS, one can use (already existing) annotated data for different kinds of HS. As this experiment is cast as a binary classification task, we compare the results with the ones presented in Table 6 that concern Top S ⟶ Top S when training on Waseem, HatEval and AMI train sets and where topics are seen in the test sets. We noticed that CNN FastText was able to achieve a similar performance for the topic misogyny (0.655 in both Top S ⟶ Top S unseen and Top S ⟶ Top S ) , improving almost 2% for the target xenophobia (moving from 0.578 in Top S ⟶ Top S with BERT to 0.595 in terms of F 1 ). However, lower results were obtained for the Waseem dataset, where the drop in terms of F 1 is between 15% and 20%. The overall results also show that CNN-FastText was the best in predicting unseen topics for the four topics we experiment on. By capturing different scales of correlation between words (i.e., bigrams, trigrams, and unigrams), the CNN model can detect different patterns in the sentence, regardless of their position [116]. Finally, Table 13 presents the results obtained when the models are trained on all the available data belonging to a target and tested on all the available data belonging to a different target (i.e., Tag S ⟶ Tag S unseen ). In line with the previous experiment, the best results were achieved by CNN FastText . In order to better interpret these results, we conducted another experiment in which a model is trained only on data belonging to a target and tested on data belonging to a topical focus on a different target (e.g., training on the target women and testing on the topic xenophobia belonging to the target race). When comparing these results (cf . Table 14) with the ones presented in Table 12, one can observe the importance for the system of having learned some information regarding the target, even if the data belong to a different topical focus. In the absence of such information, a drop of anywhere in between 1% and 12% can be observed for the best-performing models. To conclude, the results confirm that the multi-task approach is able to achieve a robust performance, especially for the multitopic HS detection task. 
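Since CNN FastText turned out to be the most reliable model when the topic or target is unseen, a minimal sketch of its multi-window design is given below (Keras/TensorFlow). The vocabulary size, sequence length and the random placeholder embedding matrix are illustrative assumptions, and pooling each convolutional branch separately before concatenation is one plausible reading of the description given in Models.

import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.initializers import Constant

def build_cnn_fasttext(vocab_size, embedding_matrix, max_len=50):
    """Multi-window CNN over frozen 300-d FastText embeddings: three Conv1D
    branches with window sizes 2, 3 and 4 (100 filters each), merged by max-pooling."""
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, 300,
                         embeddings_initializer=Constant(embedding_matrix),
                         trainable=False)(inputs)
    branches = []
    for window in (2, 3, 4):
        conv = layers.Conv1D(filters=100, kernel_size=window, strides=1,
                             activation="relu")(x)
        branches.append(layers.GlobalMaxPooling1D()(conv))
    merged = layers.concatenate(branches)
    outputs = layers.Dense(1, activation="sigmoid")(merged)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# `embedding_matrix` would normally be built from the FastText vectors of the tweet
# vocabulary (a |V| x 300 NumPy array); a random placeholder is used here for illustration.
vocab_size = 20000
embedding_matrix = np.random.normal(size=(vocab_size, 300)).astype("float32")
model = build_cnn_fasttext(vocab_size, embedding_matrix)
model.summary()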
These results are encouraging as they can constitute the first step towards targeted HS detection. This would be especially true for languages that lack annotated data for a particular target or in the aftermath of a triggering event. Methodology In this section, we focus on investigating the following questions: -To what extent does injecting domain-independent affective knowledge encoded in sentic computing resources and in semantically structured hate lexicons improve the performance for the two finer-grained tasks (i.e., detecting the hatefulness of a tweet and its topical focus)? -Which emotional categories are the most productive? We experiment with several affective resources that have been proven useful for tasks related to sentiment analysis, including abusive language detection (cf. Affective Information in Abusive Language Detection Tasks). Psychological studies suggest that abusive language is often deeply linked to the emotional state of the speaker, and that this is reflected in the affective characteristics of the haters' language. Our intuition, then, was that it would be reasonable to inject knowledge about emotions into our models as a domain-independent signal that might help to detect HS at a finer-grained level of granularity across different topical focuses and targets. In particular, we rely on: -two concept-level resources from the sentic computing framework, where affective knowledge about basic and complex emotions is encoded, concerning different psychological models of emotions: SenticNet 27 [18] and EmoSenticNet 28 [106], where emotional labels are related to the Plutchik [104] and Ekman's [31] models of emotions. -a hate lexicon (Hurtlex), where lexical information is structured in different categories depending on the nature of the hate expressed, to see whether this multifaceted affective information, specifically related to the hate domain, helps multi-topic and multi-target detection. As discussed in Related Work, emotion features have already been used in several NLP tasks (e.g., sentiment analysis [95] and figurative language detection [35,120]). However, to the best of our knowledge, no one has investigated the impact of emotion features on HS detection. In particular, we make use of several affective resources (HurtLex and, for the first time, Sentic resources) and identify the emotion categories that are the most productive in detecting HS towards a given topic/target. To this end, we designed the following two experiments (we recall that T refers either to a topic (Top) or a target (Tag)): -(T S ⟶ T S seen ) Hurt and ( T S ⟶ T S seen ) Sentic where we, respectively, add features extracted from HurtLex and SenticNet (both from SenticNet and EmoSenticNet) on top of the models presented in Methodology and Methodology. -(Top S ⟶ Top S unseen ) Sentic where we explore the impact of general affect lexica on topically focused datasets. The models developed for each experiment are detailed below. Sentic-based Models SenticNet consists of a collection of commonly used concepts with polarity (i.e., commonsense concepts with relatively strong positive or negative polarity), where each concept is associated with emotion categorization values expressed in terms of the Hourglass of emotions model [16], which organizes and blends 24 emotional categories from Plutchik's model into four affective dimensions (pleasantness, attention, sensitivity, and aptitude). Each of these four dimensions is characterized by six sentic levels that measure the strength of an emotion. 
In this paper, we use SenticNet 5, which includes over 100,000 natural language concepts. EmoSenticNet is another concept-based lexical resource and was automatically built by merging WordNet-Affect [119] and SenticNet, with the main aim of having a complete resource containing not only quantitative polarity scores associated with each SenticNet concept but also qualitative affective labels [106]. In particular, it assigns WordNet-Affect emotion labels related to Ekman's six basic emotions (disgust, sadness, anger, joy, fear, and surprise) to SenticNet concepts. The whole list currently includes 13,189 annotated entries. Several approaches for representing the affective information included in these two resources were tested by creating feature vectors; the resulting feature configurations are summarized in the Results section below.
Hurtlex-based Models
HurtLex is a multilingual hate word lexicon, which includes a wide inventory of about 1,000 hate words (originally compiled in a manual fashion for Italian by the linguist Tullio De Mauro [27]) organized into 17 categories grouped into different macro-levels [6]. The lexicon has been translated into over 50 languages (English included) semi-automatically, by extracting all the senses of all the words from BabelNet [93]. We rely on the English version of HurtLex. Out of the 17 categories, the following were selected for the two vulnerable categories targeted in the four specific manifestations of hate that we address in this paper:
-misogyny and sexism: male genitalia; female genitalia; words related to prostitution; physical disabilities and diversity; cognitive disabilities and diversity;
-xenophobia and racism: animals; felonies and words related to crime and immoral behavior; ethnic slurs; moral and behavioral defects.
We included this specific selection of HurtLex category features since a preliminary manual inspection of hateful content targeting the two vulnerable groups suggests that different subsets of the HurtLex categories can be relevant in detecting hateful speech against those targets. Moreover, concerning misogyny, we already have some positive experimental evidence about this selection from previous exploitation of HurtLex for detecting HS targeting women [97,99]. We experimented with a number of representations of the selected features to train several classifiers:
-each of the selected HurtLex categories is used as an independent feature (binary or frequency);
-all the selected HurtLex categories (keeping in mind the choices made for the different targets) are combined in a single feature (i.e., at least one word from at least one of the categories is present) (binary or frequency).
Results
In the following, we present our results on injecting affective features into our models for all the configurations considered in Multi-target Hate Speech Detection (i.e., Top S ⟶ Top S seen , Tag S ⟶ Tag S seen and Top S ⟶ Top S unseen ). In all the tables below, the models whose results in terms of F 1 score outperform the models without affective features are presented in bold. Moreover, all the tables present an additional column Δ to highlight the improvements due to the inclusion of the affective features based on sentic computing resources and HurtLex (i.e., Δ is the F 1 of the model with affective features minus the F 1 of the same model without them).
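As an illustration of how these lexicon-based signals can be turned into the extra features described above, the sketch below computes per-category frequency counts, their binary version and the combined binary flag for a tokenized tweet. The category identifiers, the lexicon format and the toy entries are assumptions made for illustration; they are not HurtLex's actual category codes.

from collections import Counter

# Hypothetical selection of HurtLex categories per target, mirroring the choices above.
# The category names are illustrative identifiers, not HurtLex's own tags.
SELECTED_CATEGORIES = {
    "women": ["male_genitalia", "female_genitalia", "prostitution",
              "physical_disabilities", "cognitive_disabilities"],
    "ethnicity": ["animals", "crime_and_immoral_behavior", "ethnic_slurs",
                  "moral_and_behavioral_defects"],
}

def hurtlex_features(tokens, lexicon, target):
    """Return per-category frequency counts, their binary version, and a single
    combined binary flag, for the categories selected for `target`.
    `lexicon` is assumed to map a lowercased word to its HurtLex category."""
    counts = Counter(lexicon[tok.lower()] for tok in tokens if tok.lower() in lexicon)
    categories = SELECTED_CATEGORIES[target]
    freq = [counts.get(cat, 0) for cat in categories]
    binary = [1 if c > 0 else 0 for c in freq]
    combined = int(any(binary))
    return freq, binary, combined

# Toy example with a hypothetical two-entry lexicon.
toy_lexicon = {"slurword": "ethnic_slurs", "thug": "crime_and_immoral_behavior"}
print(hurtlex_features("send the thug back home".split(), toy_lexicon, "ethnicity"))

In the experiments, vectors of this kind are simply concatenated to the text representation used by each classifier.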
Table 15 presents the results obtained for the multi-label classification task by incorporating the sentic features (as described in the previous section and summarized below):
Results for Sentic computing emotion features
(1) Basic emotions extracted from SenticNet
(2) Basic emotions extracted from SenticNet only for the concepts present in HurtLex
(3) Second-level emotions extracted from SenticNet
(4) All SenticNet affective information (basic emotions + second-level emotions)
(5) Emotions extracted from EmoSenticNet
(6) Merging the affective information extracted from both SenticNet and EmoSenticNet
As to the different representation strategies and combinations of sentic resources, we observed that the best results were obtained when integrating either the EmoSenticNet emotions, the first-level emotions of SenticNet, or a merge of the SenticNet and EmoSenticNet emotions. In most cases, when including only the second-level emotions of SenticNet, we see a drop in the performance of the model. The last results for the sentic features, presented in Table 16, concern the (Top S ⟶ Top S unseen ) Sentic configuration. Table 17 reports the results achieved by the best-performing models for the Top S ⟶ Top S seen experiment (cf. Table 9), i.e., BERT multi-task and CNN FastText , when incorporating the following most productive HurtLex features:
Results for HurtLex features
(1) HurtLex categories used as binary independent features.
In Table 17, the models whose results in terms of F 1 surpassed the previous models are presented in bold. We observe that almost all the additional features were productive and outperformed the previous models. The improvements brought by CNN FastText+HurtLex were higher compared to BERT multi-task+HurtLex : ranging from 1% to 17% (respectively, Misogyny all and Racism + Xenophobia) vs. 1% to 5% (respectively, Misogyny HatEval and Racism Waseem ). (An additional experiment consisted in combining the best HurtLex feature representation with the best sentic feature representation for each of the datasets; however, the results did not improve.) The results of this experiment confirm our original assumption that including affective information and making use of specific lexicons leads to significant improvements in the Top S ⟶ Top S seen experiments.
Main Conclusions
The main findings of this paper are:
Conclusion 1: Training on topic-generic datasets generally fails to account for the linguistic properties specific to a given topic. First, we experimented with several HS datasets with different topical focuses in a binary classification setting. This was done in order to capture general HS properties regardless of the dataset type (i.e., topic-generic or topic-specific). We investigated two experimental scenarios: the first one in which a system was trained on a topic-generic dataset and tested on topic-specific data; and a second one in which a given system was trained on a combination of several topic-specific datasets and tested on topic-specific data. The results show that a system trained on a combination of (the training sets of) several topic-specific datasets outperforms a system trained on a single topic-generic dataset. This finding partially confirms the assumption made by [122] according to which merging several abusive language datasets could assist in the detection of abusive language in non-generalizable (unseen) problems.
Conclusion 2: Combining topically focused datasets enabled the detection of multi-target HS even if the topic and/or target are unseen.
Second, we proposed a classification setting which allows a given system to detect not only the hatefulness of a tweet, but also its topical focus in the context of a multi-label classification approach. Our findings show that a multi-task approach in which the model learns two or more tasks simultaneously, does better, in performance terms, than a single-task system, and the best model is the BERT multi-task . In the same way, we also proposed a cross-topic and cross-target experimental setting for the task of HS detection, where a system is trained on several sets of data with different topical focuses and targets and, then, tested on another dataset where its topical focus and target are unseen during training. Results show that CNN FastText outperformed all the other systems in all the experimental scenarios. We believe that this is an important finding, which will pave the way for targeted HS manifestations, stimulated by a triggering event and which will solve the problem of a lack of annotated data for a particular topic/target. Conclusion 3: Affective knowledge encoded in sentic computing resources and semantically structured hate lexicons improve finer-grained HS detection. Finally, when injecting domain-independent affective knowledge on top of deep learning architectures, multi-target HS detection improves in both settings where topic/target is seen and unseen at training time. The most useful group of features differ greatly on both topic/target and in terms of the model architectures. In most cases, the models incorporating EmoSenticNet emotions, the first level emotions of Sentic-Net, a blend of SenticNet and EmoSenticNet emotions or affective features based on Hurtlex, obtained the best results. However, when merging both the affective features based on Hurtlex and sentic computing resources, we observed a decline in the quality of the results. Impact of Bias in Multi-target Hate Speech Detection As observed in [127], HS datasets might contain systematic biases towards certain topics and targets. In the context of automatic content moderation, the danger posed by bias is considerable, as bias can unfairly penalize the groups that the automatic moderation systems were designed to protect. In line with previous works, we observed that bias has a strong impact on target-based HS detection. Based on the results obtained in the cross-topic (i.e., Top S ⟶ Top S unseen configuration, cf. Table 12), we noted a big performance drop in both Racism Waseem and Sexism Waseem when compared to the Top S ⟶ Top S seen classification setting, as presented in Table 6. One possible explanation for this drop is the bias problems characterizing the Waseem dataset. As shown in [136], the Waseem dataset contains both author and topic bias, mostly because of their approach to data sampling. The methodology adopted in [136] for studying this issue was also based on the experience of conducting cross-domain experiments (i.e., training on a dataset different from the one used for testing), in order to make the existing bias in abusive language datasets evident. Their results show that datasets that apply a biased sampling for corpus collection (instances matching query words that are likely to occur in abusive language) contain a high degree of implicit abuse. This might lead to a performance decrease due to the difficulty of learning lexical cues that convey implicit abuse. [136] illustrated how datasets with a high degree of implicit abuse could be more affected by data bias. 
They observed that when query words and biased words (i.e., the words having the highest Pointwise Mutual Information towards abusive messages) are removed, the performance is much poorer than originally reported. We draw the same observations in the Top G ⟶ Top S experiments (cf. Results for the Top G ⟶ Top S Configuration), where each model is trained on one of the two topic-generic datasets (i.e., Founta and Davidson) and tested on the topic-specific datasets. As previously mentioned, when comparing the results obtained in Table 4 and Table 5 with the ones presented in Table 6, the biggest performance drop is observed for the Waseem dataset. Again, the sampling biases characterizing that dataset may be a contributing factor. Finally, let us mention the peculiarity of the results that we obtained for the HatEval dataset, especially the xenophobia portion; this is the only dataset where we observed a definite increase when training on topic-generic datasets, concerning the performances from training on topic-specific data. This counter-trend outcome needs to be further investigated. If possible, it should be investigated in relation to data sampling strategies adopted for HatEval, where training and test data were collected in different time frames [42]. Error Analysis In this section, we provide an error analysis focusing on the instances for which the predictions of our best performing model (BERT multi-task ) and manual annotation differ. We observe that misclassification is affected by several factors, including the absence of context within the utterance and the use of irony, stereotypes, and metaphors. Another relevant factor is the contextual similarities between the topical focuses in those datasets where the vulnerable category target is basically the same, e.g., misogyny and sexism (see (16) and (17) below 33 ) and xenophobia and racism (see example (18)). In the examples provided below, we underlined some portions of the text in order to highlight the main source, in our view, of misclassification. Misogyny and sexism are closely related notions, and the way in which they are related has been the object of investigation in philosophical literature in the last years [78,110]. In order to take into account relatedness among those and other HS categories, we will consider, in the future, a strategy for putting fewer penalties for errors in predicting closely related topics. The use of irony is another important source of error. For example, in (19) the underlying stereotype, implying that there is no place for women as TV sportscasters, leads to the message being classified as non -sexist. (19) They have to concentrate in the 2nd half of this half". Wise words from our female commentator." (gold label: sexist, predicted: non-sexist) In both (20) and (21) the users express their religious views on Islam. The model is not able to correctly predict that these utterances are racist. Complex inference or logical reasoning is needed to understand their point of views. (20) The fact that I have a brain prevents me from accepting Islam. (gold label: racist, predicted: non-racist) (21) If you don't want to read a pedo, you have to stop reading the Quran. (gold label: racist, predicted: nonracist) Finally, although in (22) the user reports on a series of events, the model predicts the message as conveying hate towards immigrants, most probably because of the use of the word 'rapefugee'. This is a self-explanatory and derogatory term used for Muslim refugees entering Europe. 
(22) Westminster terror attack suspect named as 'Sudanese Rapefugee who drove around London looking for targets' before driving car into cyclists (gold label: not-hateful against immigrants, predicted: hateful against immigrants) Conclusion and Future Work This paper investigates, for the first time, HS detection from a multi-target perspective, leveraging existing manually annotated datasets with different topical focuses (including sexism, misogyny, racism, and xenophobia) and different targets (gender, ethnicity, religion, and race). Several neural models have been proposed for transferring specific manifestations of hate across topics and targets, while also exploring multi-task approaches and additional affective knowledge. Our results demonstrate that multi-task architectures are the best-performing models and that emotions encoded in sentic computing sources and hate lexicons are important features for multi-target HS detection. This paper thereby shows that multi-target HS detection from existing datasets is feasible. This is the first step towards HS detection for specific topics/ targets when dedicated annotated data are missing. However, there is still room for improvement in building a robust system able to generalize HS towards different topical focuses and targets. In further work, we want to explore other domain adaptation strategies, such as adversarial training. Adversarial training has been shown to be an effective method of learning representations in cross-domain classification in several tasks, including sentiment analysis and image classification [47,56,141]. Another path to explore is the impact of bias in multitarget HS detection. Bias in abusive language datasets is an open problem already observed by several previous studies [25,92,101,136], in which different variants of bias, such as topic bias, author bias, gender and racial bias were explored. As no further investigation on developing an approach in debiasing abusive language datasets has been offered, we also plan to examine this direction in the future in the interests of keeping HS detection fair and compliant. Concerning the role of affective knowledge in detecting hateful contents, we observed that feeding our multi-label classification models with structured knowledge included in a hate lexicon like Hurtlex, where hate words are categorized according to different semantic areas, boosts the performance of the classifiers. This also suggests possible lines of future work. According to the psychological literature, hate words and, in particular, gendered and racial slurs have evolved to the point that they are used, and perceived, to express negative emotions towards targets, therefore providing important information about the speaker's emotional state or his or her attitude toward the targeted entity [58], even when they refer to descriptive qualities. We, therefore, think that it could be interesting to investigate the link between hateful language and the negative portions of the multifaceted emotion spectrum covered in sentic computing resources. In particular, we plan to test the effectiveness of the new version of the Hourglass model [121], that provides a better understanding of neutral emotions and their association with other polar emotions and that includes some polar emotions that were previously missing (including self-conscious and moral emotions). The revisited Hourglass model calculates the polarity of a concept with higher accuracy. 
It also provides a new mechanism for classifying unknown concepts by finding the antithetic emotion of a missing concept and flipping its polarity. SenticNet 6 [15] now contains 200,000 words and multiword expressions. We believe it may prove a valuable resource for improving multi-topic and multi-target HS detection. Finally, though most of the available HS corpora are in English, the problem of hateful speech is not limited to one language. Given language diversity and the enormous amount of social media data produced in different regions of the world, detecting HS from a multilingual perspective is also a significant challenge. We therefore plan, in future work, to explore the possibility of developing language-agnostic models capable of identifying HS in online communication.
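As a closing illustration, and since the multi-task architecture was the best performer throughout these experiments, the sketch below shows a minimal shared-encoder model with one head for hatefulness and one for topical focus, trained with a joint loss. It is a schematic PyTorch sketch, not the BERT multi-task implementation used in the paper; the bag-of-embeddings encoder, the hidden size, the number of topic labels, and the equal loss weighting are assumptions made only for the example.

```python
import torch
import torch.nn as nn

class MultiTaskHateClassifier(nn.Module):
    """Shared text encoder with two heads: hatefulness (binary) and topical focus
    (multi-label, e.g., misogyny / sexism / xenophobia / racism)."""

    def __init__(self, encoder, hidden_size, n_topics):
        super().__init__()
        self.encoder = encoder                      # any module: token ids -> (batch, hidden_size)
        self.hate_head = nn.Linear(hidden_size, 1)
        self.topic_head = nn.Linear(hidden_size, n_topics)

    def forward(self, inputs):
        h = self.encoder(inputs)
        return self.hate_head(h), self.topic_head(h)

def multitask_loss(hate_logits, topic_logits, hate_y, topic_y, weight=0.5):
    """Joint objective: both tasks are learned simultaneously by the same encoder."""
    bce = nn.BCEWithLogitsLoss()
    return weight * bce(hate_logits, hate_y) + (1 - weight) * bce(topic_logits, topic_y)

# Toy usage with a bag-of-embeddings encoder standing in for a pre-trained BERT.
encoder = nn.Sequential(nn.EmbeddingBag(30522, 128), nn.ReLU())
model = MultiTaskHateClassifier(encoder, hidden_size=128, n_topics=4)
tokens = torch.randint(0, 30522, (8, 20))           # batch of 8 "tweets", 20 token ids each
hate_logits, topic_logits = model(tokens)
loss = multitask_loss(hate_logits, topic_logits,
                      torch.rand(8, 1).round(), torch.rand(8, 4).round())
```

In the cross-topic and cross-target settings, the same skeleton applies; only the composition of the training data and the set of topic labels seen at training time change.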
2023-01-12T15:05:33.110Z
2021-06-28T00:00:00.000
{ "year": 2021, "sha1": "961e7ec2b2978807d55b47718cdc5de3d84d611e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12559-021-09862-5.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "961e7ec2b2978807d55b47718cdc5de3d84d611e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
250689270
pes2o/s2orc
v3-fos-license
Atomistic simulation of the Rayleigh-Taylor instability Large-scale atomistic simulations are performed using both the molecular dynamics and direct simulation Monte Carlo algorithms. These simulations are used to investigate several aspects of turbulent behavior, focusing on the Rayleigh-Taylor instability, in which a heavy fluid lies on top of a light fluid in the presence of a gravitational field. The use of atomistic techniques allows us to capture various physical effects not resolved by more traditional continuum methods, such as the discontinuous breakup of flow features, and the effects of micro-scale fluctuations. In addition, we compare with both experiment and continuum simulations such properties as the initial growth spectrum of the interface, and the development in time of the mixing zone width. Introduction The Rayleigh-Taylor (RT) instability [1,2], in which a heavy fluid lies on top of a lighter fluid in the presence of a gravitational field, has wide applicability to problems in fields such as astrophysics, oceanography, inertial confinement fusion, etc. Furthermore, it serves as an archetype for more general instances of turbulent mixing. The RT instability has been studied extensively by theory, experiment, and simulation. Until recently, most simulations have been carried out using continuum methods. In this work, we perform simulations of the RT instability using molecular dynamics (MD) [3] and Direct Simulation Monte Carlo (DSMC) [4], a faster, stochastic atomistic algorithm. There are several advantages to the use of atomistic schemes. The traditional continuum models of fluids have had great success in describing fluid flow in the regimes with which we are most familiar. However, this success has tended to obscure the various approximations inherent in such models. In any extended parameter space for fluid behavior, which might include the Knudsen number (the ratio of particle mean free path to the length scale of the flow), distance from equilibrium, Mach number, etc, there are regions in which continuum models do not apply. The passage of a spacecraft through the rarified gas high in a planet's atmosphere, or the discontinuous breakup of nanojets [10] are two of many real world examples. More fundamental atomistic methods, however, should have applicability across the entire parametric spectrum. The major drawbacks of atomistic methods are the small length and time scales to which they are confined by their computationally-intensive nature. For example, the largest runs described in this work correspond to a few microns for about 140 nanoseconds. Nevertheless, advances in technology have allowed multi-billion-particle simulations to be performed [9], and computational capacity is rapidly expanding to the point where atomistic simulations on the length scale of millimeters (the size of several real-world experiments) will soon be possible. For these reasons, it is important to explore the use of atomistic methods in applications beyond those to which it has been traditionally confined. Computational Details This work incorporates data from MD simulations previously described in [8], as well as several new large-scale DSMC simulations. Both types of simulation were performed using the Scalable Parallel Short-ranged MD code (SPaSM) [11]. Two different domain geometries were considered: quasi-2D (or "thin-slab") geometry, and 3D. 
Thin-slab geometry was used in order to emulate a truly two-dimensional system while at the same time maintaining finite transport coefficients. A vertical gravitational field was maintained in all runs, with a magnitude of approximately 10^9 to 10^10 Earth gravities. Such a high gravity was necessary due to the small length and time scales to which we were limited. The MD simulations were run on up to 1,600 central processing units (CPUs) of the ASCI Q computer system at Los Alamos National Laboratory. The largest of these runs consisted of 100 million particles simulated for 250,000 integration time steps. The DSMC simulations were done using the BlueGene/L machine at Lawrence Livermore National Laboratory. The largest 2D DSMC run consisted of 500 million particles run for 210,000 time steps on 65,536 processors. The length and time scales represented by this run were approximately 5 microns and 140 nanoseconds, respectively. The 3D DSMC simulation consisted of 7 billion particles run for 30,000 time steps on 32,768 processors. Its length and time scales were approximately 1 micron and 20 nanoseconds. It should be noted that this latter simulation sets a new world record for the number of particles in an atomistically based production run. This was made possible by the fact that DSMC is at least 50 times as efficient as MD on a parallel architecture, due to the low level of communication between processors.

Homogeneous Turbulence Results

As a validation of the use of atomistic methods in a realm traditionally dominated by continuum algorithms, we performed an additional simulation of homogeneous turbulence in 2D. This simulation was done in 12 hours on a single processor, using DSMC with 38 million particles. Turbulence was generated using a linear forcing scheme as described in [12]. A thermostat was used to prevent the temperature from growing due to viscous effects. There is an extensive literature of theoretical results for 2D turbulence, derived largely from continuum models such as the Navier-Stokes equation. However, there was no a priori reason for these results to apply to particle-based simulations. We therefore chose to compare our results for the turbulent energy spectrum with those predicted by the Batchelor-Kraichnan (BK) theory of 2D turbulence, which follows largely from dimensional considerations and is therefore model independent [13,14]. Energy spectra for a few values of the Reynolds number Re = UL/ν are shown in Figure 1. It can be seen that there is good agreement with the general BK theory. At low Re, the flow is dominated by the so-called "enstrophy cascade" k^−3 power law. At high Re, the Kolmogorov "inertial range" k^−5/3 scaling is apparent. At intermediate Re, both regimes coexist, with a smooth cusp between the two. In addition to the k^−3 and k^−5/3 ranges predicted by the BK theory, there is also a k^1 range evident at large k. It should be noted that this is not a numerical artifact of the atomistic scheme used to model the fluid. Instead, it is a very real physical effect that results from the fact that, at small length scales, macroscopic fields such as velocity and density become random variables subject to thermal fluctuations. The k^1 scaling is indicative of two-dimensional spatial white noise.

Rayleigh-Taylor Instability Results

Several images from various stages in the development of the RT instability in 2D and 3D are shown in Figures 2 and 3, respectively.
These examples were generated using DSMC, and represent the typical behavior of the RT instability. Driven by buoyancy, the light fluid (blue) penetrates the heavy fluid (red) in large, round features known as "bubbles". At the same time, the heavy fluid moves downward in somewhat longer, thinner shapes known as "spikes". The fraction of the domain covered by the bubbles and spikes is known as the "mixing region". For small times, linear stability analysis predicts that each Fourier coefficient present in the interface will grow as e^{nt}, where the growth rate n is a function of wavenumber k. Results for n(k) computed from MD simulations are shown in Figure 4, along with Chandrasekhar's continuum prediction [5]. It can be seen that there is good agreement except at large k, where the MD data lies above the theoretical prediction of no growth. This occurs because high-k features lie closer to the length scale of particles. Thus, for these k values, the continuum hypothesis breaks down. Note that the "error bars" in Figure 4 are actually estimates of the fluctuation-induced standard deviations of the growth rates, which become random variables at these small scales. At larger t, the size of the mixing region grows to the point where linear stability no longer applies. In this regime, it has been observed that the depths h_B and h_S to which the bubbles and spikes penetrate the heavy and light fluids, respectively, grow as h_B = α_B A g t^2 and h_S = α_S A g t^2. (Here A = (ρ_h − ρ_l)/(ρ_h + ρ_l) is the Atwood number, where ρ_h and ρ_l are the heavy and light mass densities at the interface.) The bubble and spike α-values are of particular interest, and they have been extensively measured by both experiments and continuum simulations. Figure 5 shows the bubble and spike penetration depths as a function of t^2 for a typical DSMC simulation, along with the associated estimates for α_B and α_S. The linear scaling of h_B and h_S with t^2 for small-to-moderate t can clearly be seen, although (for the spikes especially) this behavior breaks down at large t due to the discontinuous breakup of spikes and bubbles. In order to compare experimental results with those of the various simulation methods, Figure 6 shows bubble and spike α values calculated via experiment [6], MD, and several continuum simulations in the literature [7]. It can be seen that the particle-based results compare at least as favorably with experiment as the continuum results.

Conclusion

Though the size of the systems that may be studied by atomistic methods is limited by computational capacity, the generality of these algorithms and the results of this work suggest that they can be applied to much broader sets of problems than have previously been contemplated. In particular, they are essential in the study of nanoscale fluid dynamics, high-Knudsen number flows, and other examples of non-continuum turbulence. Furthermore, particle-based simulations of truly macroscopic flows (i.e. ≥ 1 mm in size) will soon be possible. These can serve as a powerful alternative, first-principles method for predicting the properties of fluid flows.
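Two of the quantitative diagnostics discussed above lend themselves to compact worked examples: the shell-averaged kinetic-energy spectrum E(k) used to test the BK scalings, and the estimate of the bubble growth coefficient α_B from the quadratic law h_B = α_B A g t^2. The sketch below is illustrative only: the velocity field is random noise standing in for a field binned from particle data, and the densities, gravity, time range, and noise level of the synthetic h_B(t) are assumptions rather than values from any of the runs described here.

```python
import numpy as np

def energy_spectrum_2d(u, v, L=1.0):
    """Shell-averaged kinetic-energy spectrum E(k) of a 2D periodic velocity field.

    u, v: (n, n) velocity components on a uniform grid of physical size L.
    The log-log slope of E(k) can be compared with the k^-3 and k^-5/3 ranges.
    """
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)           # energy per Fourier mode

    kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kmag = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
    kbins = (2 * np.pi / L) * np.arange(0.5, n // 2)    # shell edges
    which = np.digitize(kmag.ravel(), kbins)
    E = np.bincount(which, weights=e.ravel(), minlength=len(kbins) + 1)[1:len(kbins)]
    k = 0.5 * (kbins[:-1] + kbins[1:])
    return k, E

def fit_alpha(t, h, A, g):
    """Least-squares estimate of alpha in h = alpha * A * g * t**2."""
    x = A * g * t**2
    return np.sum(x * h) / np.sum(x * x)

# --- illustrative use with synthetic data (not simulation output) ---
rng = np.random.default_rng(0)
u = rng.standard_normal((256, 256))
v = rng.standard_normal((256, 256))
k, E = energy_spectrum_2d(u, v)                       # pure white noise: expect E ~ k^1

rho_h, rho_l, g = 2.0, 1.0, 1.0e10                    # assumed densities and gravity
A = (rho_h - rho_l) / (rho_h + rho_l)                 # Atwood number
t = np.linspace(0.0, 1.0e-7, 50)                      # ~100 ns, roughly the run duration
h_B = 0.05 * A * g * t**2 + 1e-9 * rng.standard_normal(t.size)
alpha_B = fit_alpha(t, h_B, A, g)                     # should recover ~0.05
```

In an actual analysis, u and v would come from binning particle velocities onto a uniform grid, and h_B(t) would be measured from the position of the bubble front within the mixing region.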
2022-06-28T02:16:54.299Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "58344cc8bf7a59b09cd97be959093a8cfe60e937", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/46/1/008/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "58344cc8bf7a59b09cd97be959093a8cfe60e937", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54513447
pes2o/s2orc
v3-fos-license
Natural Compound-Generated Oxidative Stress: From Bench to Bedside Natural Compound-Generated Oxidative Stress: From Bench to Bedside Oxidants are constantly generated in a biological system as a result of physiological processes. However, an imbalance between oxidants and antioxidants can lead to a pathophysiological condition known as oxidative stress. Natural compounds as inducers of oxidative stress are able to modulate physiological functions of cancer cells leading to cell death or survival. This chapter aims at providing an overview of pro- and antioxidant activities of natural compounds related to cancer and related therapies. Natural compound anticancer agents In the search of improved cytotoxic agents against cancer, natural compounds possess advantages with regard to availability, low toxicity, and suitability for oral application and metabolite likeliness [1]. Moreover, new technologies of combinatorial chemistry and highthroughput screening are used to design different synthetic drugs with natural compounds that serve as templates for development of novel molecules with enhanced biological properties. In 1960, the National Cancer Institute (NCI) began a large-scale screening program for antitumor agents, and 35,000 plant species samples were tested primarily on mouse leukemia cells [2,3]. The most promising drug to emerge from this program was paclitaxel, a microtubule disruptive agent obtained from the bark of the Pacific yew Taxus brevifolia. This finding served as the springboard for further investigations with natural compounds, and in the late 1960s, vinblastine and vincristine were reported from Catharanthus roseus. Both drugs major-ly contributed to long-term remission and cures for childhood leukemia, Hodgkin's lymphoma, testicular teratoma, etc. Other anticancer agents to enter clinics, which are derived from natural sources, include etoposide, which has been proven as an effective treatment against testicular teratoma and small cell lung cancer, whereas teniposide was shown to be effective against acute lymphocytic leukemia (ALL) and neuroblastoma in children and non-Hodgkin's lymphoma [1]. A comprehensive study published on new medicines approved by US Food and Drug Administration between 1981 and 2010 revealed that 34% of those medicines based on small molecules were either natural products or a direct derivative which mainly included statins, immunosuppressant, and tubulin-binding anticancer drugs [4,5]. Natural compound constituents demonstrated anticancer activity according to a combination of epidemiological and experimental studies [6]. Mechanistic insights underlined that the chemotherapeutic potential of these agents may be a combination of antioxidant, antiinflammatory, immune-promoting, cytostatic, differentiating, and cytotoxic effects. Altogether, natural compounds efficiently prevent initiation, promotion, and progression of cancer development thus interfering with all 10 hallmarks and enabling characteristics of cancer [7][8][9][10]. Increasing technological advancements led to the development of better purification techniques with defined molecular assays, which can efficiently exclude "distracting molecules" such as tannins and saponins, thereby increasing the chances of identifying the critical agent with specific anticancer activity. The diverse bioactivity potential of natural compounds can be related to the huge structural diversity existing in nature. 
This compound repertoire is available for further modifications to improve the therapeutic potential of lead compounds. In addition, combinatorial biosynthesis further modulates the functional groups of lead compounds and can be complemented with high-throughput screening, computational chemistry, and bioinformatics to generate structural analogues with improved pharmacological activity and reduced toxicity [1] (Figure 1). Natural compounds as scavengers of free radicals Oxidants are constantly generated in a biological system as a result of physiological processes. However, an imbalance between oxidants and antioxidants can lead to a pathophysiological condition known as oxidative stress [11]. In light of this knowledge, oxidative stress has been defined as perturbations in redox homeostasis. Broadly, the cellular redox level is regulated by three different systems, two of which are dependent on glutathione that includes glutathione (GSH), glutathione reductase (GR), glutathione peroxidases (GPX), and glutathione Stransferases (GST) [12][13][14]. Glutathione undergoes oxidation to form glutathione disulfide (GSSG), thereby reducing the disulfide bonds of cytoplasmic proteins to cysteine and protects the cell against oxidative stress [15]. Under normal conditions, GSH exists in reduced form due to constitutive activity of GR. GSTs act as detoxifying enzymes that conjugate GSH to various electrophilic compounds [16]. Reactive oxygen species (ROSs) have been reported in both solid and hematopoietic cancers where they are associated with tumor development and progression [17,18]. However, cancer cells also express antioxidant proteins to detoxify ROS, suggesting that the fine-tuning of intracellular ROS signaling is critical for cancer. Therefore, understanding the susceptibility of cancer cells to oxidative signals could open new therapeutic window for rational design of new anticancer agents [19]. In addition to their well-characterized effects on cell division and viability, cytotoxic agents can induce oxidative stress by modulating levels of ROS such as the superoxide anion radical, hydrogen peroxide, and hydroxyl radicals. Eukaryotic cells have highly organized pathways to orchestrate the many extracellular stimuli received and convert them into specific physiological processes. This classical cascade also termed as signal transduction pathways includes a series of events occurring constitutively and initiated by interaction of a ligand with its receptor on the cell membrane. ROS in this cascade have been proposed as second messengers in the activation of signaling events that lead to survival or death [20]. Moreover, redox-sensitive cysteine residues are known to sense and transduce changes in cellular redox status caused by ROS production and the presence of oxidized thiols. Various dietary phytochemicals have been shown to exhibit beneficial effects including the prevention of cancer by modulating the cellular redox status by acting as either an antioxidant or pro-oxidant. They function as detoxifying enzyme inducers, which mainly include phenolic and sulfur-containing compounds. Phenolic compounds are classified as polyphenols or flavonoids, whereas sulfur-containing compounds may be classified into isothiocyanates and organosulfur compounds. 
Epigallocatechin-3-gallate (EGCG) from green tea, curcumin [21][22][23][24] from turmeric, and resveratrol [25,26] from grapes are the classical examples of polyphenols, whereas flavonoids include quercetin from citrus fruits [26][27][28] and genistein from soya. Isothiocyanates represent a group of compounds such as sulforaphane from broccoli and phenethyl isothiocyanate from turnips. Organosulfur compounds mainly include diallyltetrasulfide derived from garlic [29][30][31][32][33][34]. Cells respond to these phytochemicals by a non-classical receptor-sensing mechanism of electrophilic chemical stress characterized as "thiol-modulated cellular signaling" events leading to gene expression commending the pharmacological activity (Figure 2). Survival pathways activated by free radicals ROSs are tumorigenic as elevated levels of ROS-sensitive signaling pathways have been implicated in various cancers where they are involved in sustenance of cell growth, prolifer-ation, survival, migration, and by inducing DNA damage leading to formation of genetic lesions initiating tumorigenesis [35,36]. Low levels of hydrogen peroxide (H 2 O 2 ) stimulation have been shown to propagate cell proliferation in an array of cancer cell types. Role of hormones in endocrine cancers is well documented. In hormone-dependent breast cancer cells, one of the functions of estrogen is to translocate to mitochondria, thereby initiating mitochondrial ROS production that can be impaired by inhibition of mitochondrial uniporter, which prevents estrogen-induced cell proliferation [37,38]. Sodium arsenic in MCF-7 was shown to mimic the effect of estrogen and potentiated S phase progression and proliferation by inducing ROS production and ROS-related depolarization of the mitochondrial membrane [39]. Moreover, estrogen-induced cell proliferation of MCF-7 was strongly inhibited by antioxidants such as N-acetyl-L-cysteine (NAC) or mitochondrial blockers of protein synthesis such as chloramphenicol [40]. ROS generation was shown to augment G1/S transition by increasing the expression levels of cyclins D1, D3, E1, E2, and B2 [41]. In contingent to these finding, cytochrome P450B1-mediated conversion of estrogen to a putative carcinogenic metabolite 4hydroxyestradiol in human mammary epithelial cells MCF-10 leads to intracellular ROS production and neoplastic transformation. ROS overproduction was shown to activate IκB kinase (IKK) signaling with increased nuclear translocation and NF-κB activity [42]. Since deregulation of NF-κB is related to increased cell survival, proliferation, and development of drug resistance in different cancers, series of work conducted in this direction showed that NF-κB is a redox-regulated sensor for oxidative stress and is activated by low doses of H 2 O 2 [43,44]. In MCF-7 cells, interleukin (IL)-1β stimulation of NF-κB is partially regulated by H 2 O 2 -mediated activation of NF-κB inducing kinase (NIK)-mediated phosphorylation of IKKα [45]. Moreover, overexpression of manganese superoxide dismutase (MnSOD) in MCF-7 cells completely abolished tumor necrosis factor (TNF) α-mediated NF-κB activation, IκBα degradation, p65 nuclear translocation, and NF-κB-dependent reporter gene expression [40]. In other forms of cancer such as oral squamous carcinoma, a mild difference in endogenous ROS functions as a physiological signaling modulator of the NF-κB signaling cascades through its ability to activate NIK [46]. 
Besides solid tumors, redox regulation of NF-κB has also been implicated in hematopoietic cancers. Our group for the first time reported that in U937 cells, melatonin a pineal hormone might induce ROS generation, which ultimately is involved in transactivation of NF-κB-promoting survival of these cells [47][48][49][50]. Moreover, myeloid leukemia, which often maintains a high intracellular ROS level and uses redox signal for survival, is sensitive to NF-κB inhibition since NF-κB is involved in moderating the ROS level, which prevent activation of c-Jun N-terminal kinase (JNK) and cell death [51][52][53][54] (Figure 3). Apart from NF-κB, ROS-mediated regulation of tyrosine phosphatases, protein tyrosine kinases, and receptor tyrosine kinases, which is critical for cell survival and cancer such as mitogen-activated protein (MAP) kinase/extracellular-regulated kinase (Erk) cascade and phosphoinoside-3-kinase (PI3K)/Akt-regulated signaling cascade, is well documented in the literature [55,56]. Activation of MAPK/Erk1/2, which is mediated through growth factors, and K-ras is functionally linked to increased cell proliferation. Several studies have shown how ROS activate Erk1/2 pathway by modulating and activating its upstream target such as Ras. For instance, oxidative modification at its cysteine 118 residue leads to the inhibition of GDP/ GTP exchange [57]. Moreover, ROS activates p90 RSK that acts as an upstream kinase of Erk1/2 [58,59]. In ovarian cancer, sustained Erk1/2 activity was linked to increased concentration of endogenous ROS resulting from ubiquitination and loss of endogenous mitogen-activated protein kinase phosphatase 3 (MKP3), which negatively regulates Erk1/2 [58,59]. Oxidative stress regulation of PI3K/Akt pathway has been implicated in different cancers. In ovarian cancers, H 2 O 2 produced in response to epithelial growth factor signaling (EGF) activates Akt and p70 S6k1, a substrate of Akt involved in regulating protein synthesis [60]. In pancreatic cancer PANC-1 cells, NADPH oxidase (NOX)-4-mediated generation of intracellular ROS was related to survival of these cells, which undergo apoptosis in response to diphenylene iodonium (DPI), an inhibitor of NOX that inhibited superoxide production and impaired levels of phosphorylated Akt [61]. Moreover, benzo(a)pyrene (BaP), a known mammary carcinogen in rodents, increased cell proliferation in human mammary epithelial cells MCF-10A through H 2 O 2 generation and activation of epidermal growth factor receptor (EGFR), Akt, and ERK phosphorylation, which was strongly inhibited by NAC treatment [62]. Reactive oxygen species contribute in tumor progression Intracellular redox status aids tumor progression by modulating the processes of metastasis, angiogenesis, survival of cells under hypoxic conditions, and maintenance of cancer stem cell (CSC) subpopulation [63]. Decreased cell adhesion to extracellular matrix, anchorageindependent survival, and invasion of tumor cells are well documented to be influenced by ROS [64]. Perturbation of mitochondrial respiratory chain in breast cancer cells leads to generation of a cellular subpopulation with increased levels of ROS, which are highly metastatic and maintain increased invasive property in vivo [65]. ROS induction was shown to influence overexpression of chemokine CXCL14 through the activator protein (AP)-1-signaling pathway and promote cell motility through elevation of cytosolic Ca 2+ by binding to the inositol 1,4,5 triphosphate receptor on the endoplasmic reticulum [65]. 
DNA methylation and histone modification leading to epigenetic silencing of superoxide dismutase (SOD)-2 alter the expression of antioxidant enzyme MnSOD, which promotes invasion of breast cancers [66]. Moreover, a decreased MnSOD level was also associated with increased pancreatic tumor invasion [67]. Degradation of the extracellular matrix (ECM) and activated matrix metalloproteinases (MMPs) are a prerequisite of cancer cell migration and invasion. Binding of several integrins to the ECM results in increased expression of several MMP proteins. Since integrins signal by a vast array of kinases, phosphatases, GTPases, and transcription factors, it is likely that an elevated level of ROS has an effect on integrin-mediated signaling. Several studies reported the inactivation of critical phosphatases such as protein tyrosine phosphatase (PTP)-PEST (PTPN12), SHP-2 (Src homology 2 [SH2] domain-containing non-transmembrane PTP), and low molecular weight protein tyrosine phosphatases (LMW-PTPs) by oxidation [68]. Catalase, a H 2 O 2 scavenger, binds SHP-2 and growth factor receptor-bound protein-2 (Grb2) adapter protein upon integrin ligand binding and therefore protects them against H 2 O 2mediated oxidation [69]. In non-transformed intestinal epithelial cells, elevated ROS increased the expression of α2β1-integrin, which subsequently increased the levels of cyclooxygenase-2 (COX-2) and promoted cell migration [64]. These results also suggest a mechanism where ROSinduced modulation of ECM promotes cancer formation in intestinal epithelial cells. ROSs have also been implicated in promoting tumor progression by modulating the processes involved in epithelial mesenchymal transition (EMT). Several transcription factors, which promote metastasis such as AP-1, Ets, Smad, and Snail, are regulated by ROS, inducing an effect on upstream target molecules involved in activation of these transcription factors such as protein kinase (PK) C and PTPs [70]. In a given tumor mass, cancer cells often are exposed to an environment with reduced levels of tissue oxygen, a condition known as hypoxia. Prolonged limitation in oxygen supply can result in cell death. Therefore, cancer cells often undergo genetic and adaptive changes that contribute to a malignant phenotype and adopt characteristics of an aggressive tumor. Cancer cells mimic a phenomenon known as the "Warburg effect" that is to switch to anaerobic glycolysis when adequate oxygen supply is absent [71]. ROSs have been implicated to facilitate the tumor survival under hypoxic conditions by modulating different transcription factors involved. Hypoxia inducible transcription factor (HIF)-1 is most widely studied for its role in tumor promotion under hypoxic conditions. HIF-1 is a heterodimer that consists of hypoxic response factor HIF-1α and constitutively expressed aryl hydrocarbon receptor nuclear translocator (ARNT) also known as HIF-1β [72]. Under reduced oxygen levels, HIF-1 binds to hypoxia response elements, thereby activating hypoxia response genes such as the pro-angiogenic vascular endothelial growth factor (VEGF) [73]. Moreover, HIF-1 has been shown to regulate expression of all enzymes of the glycolysis pathway as well as glucose transporters GLUT1 and GLUT3 [74]. In human breast carcinoma, increased MnSOD activity is reported to inhibit HIF-1α along with suppression VEGF protein that impaired tumor metastasis [75]. 
Suppression of endogenous ROS by NADPH oxidase inhibitor DPI and mitochondrial electron chain inhibitor rotenone decreased HIF-1 induction and VEGF expression in ovarian and prostate cancer cells [75]. Moreover, growth factor such as epidermal growth factor (EGF)-induced ROS production may lead to activation of AKT/p70S6K1 pathway resulting in increased expression of VEGF stimulating tumor angiogenesis [60] (Figure 4). In any given tumor, subpopulations of cells have the ability to self-renew and drive tumorigenesis. This population of cells is termed as cancer stem cells (CSCs), which are isolated from most cancers such as hematopoietic, breast, lung, colon, etc. CSCs are characterized by the expression of specific stem cell markers and are of clinical relevance as they are highly drug resistant and mostly initiate recurrence after chemo-or radiotherapy [76]. Studies have shown that normal hematopoietic and epithelial stem cells maintained a lower level of ROS than mature progeny to prevent cellular differentiation and maintain long-term cellular self-renewable. Similarly, CSCs unlike cancer cells have reduced level of ROS. Moreover, compared to tumor cell counterparts, CSCs showed increased expression of enzymes, which are associated with ROS scavenging [76]. Particularly, glutathione synthetase that is involved in glutathione synthesis is upregulated along with Forkhead transcription factor (FOXO)-1 to confer resistance to oxidative stress in hematopoietic stem cells [77]. Also, activation of antioxidant response that is frequently reported in CSCs prevents DNA damage in these cells exposed to ionizing radiations, thereby protecting CSCs against irradiation-induced cell death [78]. Based on these findings, it is widely accepted that cancer recurrence in response to withdrawal of conventional therapies is majorly dependent on existence of a resistant CSC subpopulation within the patients. Therefore, further identification of key molecular drivers that regulate the redox balance in CSCs might provide a possibility to eliminate these cells, which may contribute in overcoming the limitations of cancer relapse in future. Cell death pathways activated by reactive oxygen species As mentioned above, cancer cells in particular generate increased ROS levels; now severe accumulation of cellular ROS in response to chemotherapy may induce cell cycle arrest, senescence, or lethal toxicity inducing apoptosis [79]. Electrons leaking from the respiratory complexes in mitochondria are a major source for ROS production [80]. For instance, As 2 O 3 which impair the function of respiratory chain increases the production of superoxide ions [65]. Alternatively, drugs, which act as redox cyclers such as anthracyclines daunorubicin and doxorubicin, react with cytochrome p450 reductase and NAD(P)H dehydrogenase [quinone] 1(NQO1) in the presence of reduced NADPH to generate superoxide in the presence of oxygen [81]. Apoptosis is linked to an increase in mitochondrial oxidative stress that causes a series of hallmark events such as release of cytochrome c followed by caspases activation ultimately leading to cell death. Sodium salicylate and non-steroidal anti-inflammatory drugs were reported to induce apoptosis in cancers such as colon, breast prostate, and leukemia through ROS production and activation of intrinsic cell death pathway measured by cleavage of caspase-9 and caspase-3 [82]. 
However, apoptosis was subsequently that a Rac1-NADPH oxidase-dependent pathway is activated in response to treatments that produce ROS and triggers apoptosis [82]. Mitochondrial release of H 2 O 2 has been associated with activation of different stress kinases such as c-Jun N-terminal kinase (JNK) and p38. In response to ROS production, JNK mediates phosphorylation and downregulation of anti-apoptotic proteins Bcell lymphoma (Bcl)-2 and Bcl-extra large (xL) [79]. Moreover, several studies reported that both Bcl-2 and Bcl-xL antagonize ROS generation and protect cells against apoptosis [44,83]. p38 MAPKs are also implicated in apoptosis induction in response to increased ROS production [84]. p38 is activated through apoptosis signal regulating kinase (Ask)-1. Activity of Ask-1 is dependent on a redox-regulated protein thioredoxin that in its reduced form binds to and conserves Ask-1 in an inactivated form. Increased ROS production uncouples thioredoxin from Ask-1 leading to its activation and phosphorylation of p38 required for TNFα-mediated apoptosis [84]. Studies conducted on L929 fibrosarcoma cells revealed that mitochondrial ROS play a key role in inducing TNFα cytotoxicity presumably by ROS-mediated caspase activation and cell death [85]. Moreover, TNF receptor associated factor 4 (TNFR4), a component of the TNF signaling chain, binds to NADPH and activates JNK suggesting different mechanisms by which death receptors induce ROS activation in cells [86]. Additionally, different studies have reported the significance of ROS-mediated signaling pathway regulated by protein kinase D1. PDK1 is activated by direct binding to Src and by phosphorylation, which promotes proliferation [35]. Inhibition of this pathway sensitizes cancer cells to ROS. Furthermore, beyond the conventional therapy to induce cytotoxicity to cancer cells and overcome the limitations associated with therapy resistance and risk of developing metastatic phenotype, recent advancement is made to explore the phenomenon of senescence, which inhibits the proliferation of cancer cells and restricts them in a dormant phase [87]. Senescence in cancer cells is mainly characterized by increased activity of β-galactosidase along with modulation of several cell cycle regulators such as cyclin-dependent kinases (CDKs), p16, and p27 [87]. Different polyphenolic compounds extracted from artichokes (Cynara cardunculus) or ginseng (Panax ginseng) were described to trigger ROS-dependent senescence. Pathological alterations triggered by free radicals Intracellular ROS generation may lead to damage of cellular macromolecules such as DNA, proteins, and lipid bilayer. Studies have indicated that H 2 O 2 is not very reactive towards DNA; however, the damage to DNA is mainly caused by hydroxyl ions that are generated by the Fenton reaction where transition metals such as iron or copper donate or accept free electrons during intracellular reactions [88]. H 2 O 2 acts as a catalyst in the reaction in the formation of free radicals. The generated hydroxyl ions are highly diffusible and lead to DNA damage like oxidation, single-, and double-strand breakage. Under normal physiological conditions, such DNA defects are repaired by base excision repair (BER) or nucleotide excision repair (NER). Cells unable to repair the DNA lesions undergo apoptosis to ensure that the mutations are not passed on during cell division. However, failure in either process of DNA repair or apoptosis may harbor the possibility of formation of cancerous growth. 
ROS-mediated damage of proteins is mainly associated with modifications in specific amino acid residues leading to altered function [89]. Beside, some ROS-mediated modifications of protein also includes increased protein carbonylation, nitration of tyrosine and phenylalanine residues or formation of cross-linked and glycated proteins [89]. The oxidized amino acid residues in proteins may influence their activity in a signal transduction pathway. For instance, oxidation of phosphatases within the catalytic sites impairs their enzymatic activity [90]. Moreover, ROSs react with polyunsaturated or polyunsaturated fatty acids to trigger lipid peroxidation that has also been used as a tumor biomarker in clinical studies [91]. For instance, in colorectal cancer patients, the presence of thiobarbituric acid reactivates has been linked to high levels of lipid peroxidation [63] (Figure 5). Natural compounds as pharmacological antioxidants It has been reported in several studies that dietary phytochemicals can interfere with every stage of cancer development. Therefore, antioxidant functions of phytonutrients have been investigated thoroughly for their role in pathophysiology associated with cancer. Dietary antioxidant compounds with significant anticancer activity mainly include anthocyanidins (and their glycosides termed anthocyanins) from berries [92], catechins from green tea, curcumin from turmeric, genistein from soy, resveratrol from grapes and red wine, all-trans lycopene from tomatoes [93], indole-3-carbinol from broccoli, sulforaphane from asparagus, quercetin from red onions and apples. Beside this, carotenoids, flavonoids, and isothiocyanates have also exhibited strong antioxidant properties. Epigallocatechin gallate (EGCG) is the most abundant catechin found in green tea and curcumin-induced anticancer activity promoting cell cycle arrest, polyamine synthesis, and affecting transglutaminase (TG) activity along with regulation of signaling pathways mediated by NF-κB, AP-1, and MAPKs [94]. In a recent study, EGCG was shown to inhibit cell proliferation of cervical carcinoma Hela cells by promoting depolymerization of cellular microtubule and disrupting tubulin-microtubule equilibrium. Spectroscopic analysis revealed that EGCG bound to the α-subunit of tubulin at the interphase of α-and β-heterodimers preventing colchicine binding to the colchicine-binding site [95]. Also, in osteosarcoma cells, EGCG treatment induced cell cycle arrest, promoted apoptosis, and inhibited growth of transplanted tumors in vivo by regulating miR1/c-MET interaction [96] (Figures 6 and 7). Eugenol (4-allyl-2 methoxyphenol) is a naturally occurring phenolic compound that exhibits antioxidant properties. The antioxidant activity of eugenol was evaluated by the extent of protection offered against free radical-mediated lipid peroxidation using both in vitro and in vivo studies [97]. The chemopreventive and anticancer role of eugenol was evaluated on Nmethyl-N'-nitro-N-nitrosoguanidine (MNNG)-induced gastric cancer in Wistar rats by analyzing the markers of apoptosis, invasion, and angiogenesis. Rats exposed to MNNG developed gastric cancer with upregulation of pro-invasive and angiogenic factors. Eugenol inhibited cell proliferation by suppression of NF-κB signaling. Apoptosis in these cells following eugenol treatment was mitochondrial pathway mediated that decreased the expression of Bcl-2, following release of cytochrome c and caspases activation. 
Anti-angiogenic and inhibition of invasion was evidenced by decreased expression of VEGF, its receptor VEGFR1 changes in the activities of MMPs and the expression levels of MMP-2 and MMP-9, VEGF, VEGFR1, tissue inhibitor of metalloproteinases (TIMP)-2 and reversion-inducing cysteine-rich protein with kazal motifs (RECK), a metastasis inhibitor [97]. Several studies aim toward proving the anticancer properties of flavonoids on an array of cancer cell types. Hirano and co-workers tested the anticancer activity of 28 flavonoids on human acute myeloid leukemia (AML) cell line HL-60. Eight of these flavonoids showed strong inhibition of cell proliferation with IC 50 values in a nanomolar range [98]. In contingent to this finding, Kuntz et al. showed strong inhibition of proliferation induced by flavonoids on two colon cancer cell models with Caco-2 displaying features of small intestinal epithelial cells and HT-29, resembling colonic cryptic cells [99]. Moreover, in vivo studies on mice strongly inhibited the growth and metastatic potential of melanoma cells B16-BL6 in response to flavonoid treatment [100]. Epigenetic modifications resulting in heritable changes into gene expression without changing the DNA sequence have been marked as key player in promoting cancer [101]. The most common types of epigenetic modifications that may contribute to tumor promotion are DNA methylation and histone acetylation or methylation. Antioxidant compounds mainly isoflavones, flavonols, and catechins have shown to modulate epigenetic features, thereby showing antitumor activity [102][103][104]. EGCG was shown to affect DNA methyltransferase by inhibiting DNMT and reactivating tumor suppressor genes RARα, p16, and O 6 -methylguanine methyltransferase in esophageal cancer KYSE 510 cells [105]. Treatment with caffeic acid (3,4dihydroxycinnamic acid) or chlorogenic acid [106] of hormone-dependent MCF-7 and hormone-independent MDA-MB-231 breast cancer cell lines partially inhibited the methylation of promoter region of the RARβ gene, thereby restoring its function [107]. Furthermore, studies also indicated that dietary antioxidants such as genistein, quercetin, parthenolide, and lycopene may affect DNA methylation status of different genes associated with cancer [108][109][110][111]. In addition to this, synergistic or additive effects of phytochemicals could be achieved when administered along with conventional chemotherapy or radiation therapy. This could be explained due to the fact that phytochemicals, which target different biochemical pathways, may enhance the efficacy of conventional therapies. Moreover, different studies have reported the synergistic cytotoxicity on different cancers when phytochemicals are administered together. Apple extracts and quercetin 3-β-D-glucoside combination showed synergistic antiproliferative effect on MCF-7 breast cancer cells [112]. Genistein a major phytoestrogen which has higher affinity for ERβ compared to ERα showed synergistic cytotoxicity in combination with indole-3-carbinol in HT-29 cells by simultaneously inhibiting Akt phosphorylation and progression of autophagic process [113]. Combination of δ-tocopherol and resveratrol showed strong inhibition of HMC-1 mastocytoma cell proliferation. The two compounds together strongly inhibited Ser473-phosphorylation of Akt, thereby reducing its activity compared to individual treatment [114]. Gagliano et al. 
suggested that the use of quercetin in combination with other antioxidants such as resveratrol or sulforaphane might be a novel approach for the treatment of human glioma, which has poor clinical prognosis in both adults and children [115]. Additionally, pharmacological implications of polyphenols have also been explored with respect to inhibition of cancer stem cells and self-renewal. It has been demonstrated that polyphenols can efficiently target pathways such as Wnt/β-catenin, Hedgehog, and Notch, which are critical for cancer stem, cells self-renewal [116]. Sulforaphane has been demonstrated to target cancer stem cells by modulating the pathways such as NF-κB, Hedgehog, and Wnt/ β-catenin in different cancers such as breast, pancreas, and prostrate and has been proposed as an adjuvant of chemotherapy in different pre-clinical studies [117,118]. As discussed earlier, cancer stem cells are characterized by a glycolytic metabolism with lower mitochondrial respiration compared to the tumor cells. Therefore, a proposed strategy to counteract CSCs population is to impair their metabolism by inhibiting glycolysis or by forcing CSCs into mitochondrial metabolism and oxidative phosphorylation. To this purpose, polyphenols have been implicated to regulate the cancer metabolism. For instance, EGCG in human breast cancer have been shown to target the 5' adenosine monophosphate-activated protein kinase (AMPK) pathway, which is involved in maintaining cellular energy status, cell cycle, and protein synthesis [119] (Figures 8 and 9). Natural compounds as pharmacological pro-oxidants As discussed earlier, cancer cells produce high levels of ROS that allow these cells to maintain a state of increased basal oxidative stress. The increased state of oxidative stress promotes survival but on the other hand makes the cancer cells vulnerable to further increase in ROS levels over a cancer-specific threshold. Accordingly, pro-oxidant agents and increased oxidative stress levels could then selectively target cancer cells. Different compounds of natural origins modulate the intracellular ROS levels and induce both chemopreventive and anticancer effect in different cancer types. Polyphenolic extracts from artichokes (Cynara cardunculus) at high doses induce apoptosis and decrease the invasive potential of human metastatic breast cancer. Apoptosis was regulated in a caspase-independent manner. Additionally, sublethal concentrations of artichoke increased ROS and induced significant increase in senescence-associated β-galactosidase along with upregulation of tumor suppressor genes p16 INK4 and p21 Cip1//Waf1 . Altogether, NAC attenuated the antiproliferative effect induced by artichoke extracts, which suggests that induction of premature senescence and apoptosis is regulated in a ROS-dependent manner [120]. 20(S)-ginsenoside Rg3 [20(S)-Rg3], a chemical compound extracted from Panax ginseng, induced senescence in glioma cells at sublethal concentrations, which was abrogated by NAC treatment suggesting involvement of ROS. Moreover, depletion of Akt and inactivation of the p53/p21 pathway attenuated the compound-induced senescence. These results suggest that ROS is playing a role in activation of Akt and p53/p21, which leads to growth arrest in human glioma cancer [121]. Bisdemethoxycurcumin, a curcuminoid from turmeric, demonstrated potential chemotherapeutic activities by inhibiting proliferation and decreasing the cell viability of hormonedependent breast cancer. 
Bisdemethoxycurcumin treatment leads to increased ROS production, which disrupted mitochondrial membrane potential assessed using mitochondrial potential sensor JC-1. Moreover, the compound induced increased expression of proapoptotic protein p53 and its downstream effector p21 along with cell cycle regulator p16 and its downstream regulator retinoblastoma protein (pRb). The results overall suggested bisdemethoxycurcumin-induced ROS accumulation, which leads to inhibition of hormonedependent breast cancer [122]. We have previously reported that garlic-derived organosulfur compounds including diallyltetrasulfide induce growth arrest and apoptosis in colon cancer cells by disrupting the redox status in the cells. Drug-induced cell cycle arrest in G2/M phase followed by apoptosis was further associated with decreased Cdc25c expression, one of the key enzymes responsible for G2/M transition [32]. Moreover, we have also shown that plumbagin, a plant naphtoquinone, reduces cell viability and induces apoptosis in a series of hematopoietic cancer cell lines including HL-60, Jurkat, K562, Raji, and U937 with a most pronounced effect on AML U937 cells by 10-fold increase in ROS production. This was followed by decreased expression of antiapoptotic proteins Mcl-1 and Bcl-2 along with activation of caspases-8, caspases-9, caspases-7, and caspases-3 [123]. Recently, we have also demonstrated ROS induction in neuroblastic and stromal neuroblastoma cells by hemisynthetic cardenolide UNBS1450. ROS induction was followed by autophagic response eventually leading to apoptosis or necroptosis. Timedependent increase in ROS affected lysosomal integrity of the cells inducing lysosomeassociated membrane protein (LAMP)-2 degradation leading to cathepsin B and L activation [124] (Figure 10). Conclusion Natural compounds or their derivatives comprise of more than 50% of cancer chemotherapeutic agents available in the clinics. Information encoded by the human genome project would definitely lead to identification of several gene products, which could potentially be targeted by novel anticancer drugs. Due to various advantages associated with the use of natural compounds such as high availability and reduced toxicity, it is likely that the natural products templates combined with chemistry will allow the generation of novel analogues with enhanced pharmacological benefits to enter clinics. Malignant cells, which often exhibit increased ROS generation that is associated with tumor proliferation and drug resistance, highlight the crucial role of ROS stress in cancer. Therefore, targeting the redox-modulated biochemical properties of cancer cell may allow to develop a feasible therapeutic approach to overcome challenges associated with cancer treatment. Furthermore, not critically explored unique redox biology of cancer stem cells suggests the use of redox modulating strategies to eradicate these cells.
2018-12-04T17:23:32.672Z
2016-10-26T00:00:00.000
{ "year": 2016, "sha1": "e06e3dd54e9712bf7184669ee0c4890d1ad6f172", "oa_license": "CCBY", "oa_url": "https://openresearchlibrary.org/ext/api/media/7419ed16-6c23-467f-ad7a-dfd68cff219b/assets/external_content.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "e91bc87548eaea9882a3e38943607f04a7cab6e0", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
227219370
pes2o/s2orc
v3-fos-license
Ultrasound methods of imaging atherosclerotic plaque in carotid arteries: examinations using contrast agents Abstract The primary technique for detecting the presence and monitoring the development of carotid atherosclerotic plaque is ultrasound. The development of ultrasound techniques has made it possible to precisely visualise not only blood flow, but also vessel walls, including atherosclerotic plaque. Contrast-enhanced ultrasound examination enables one to make an objective observation of atherosclerotic plaque neovascularisation, clearly indicating active inflammation, which is an inherent feature of vulnerable (unstable) plaque. Depending on the examination method used, it is possible to precisely visualise different components of the plaque and its behaviour during blood flow through the vessel lumen or through the neovessels of the plaque, and, consequently, determine the possible presence of inflammation, which is a defining feature of plaque stability. The full utilisation of physical phenomena that underlie contrast-enhanced ultrasound will bring further enormous progress of diagnostic and probably also therapeutic methods for carotid atherosclerosis. The selection of the right examination method significantly accelerates diagnosis and adequate classification of plaque, and makes it possible to monitor the progression of atherosclerosis. However, one needs to bear in mind that ultrasound remains a very subjective method. The success of contrast-enhanced ultrasound also depends on the skills and experience of the examiner. Current attempts at increasing the objectivity of contrast-enhanced ultrasound examination using artificial intelligence will make it possible in the future to make a definitive evaluation of atherosclerotic plaque stability. This will allow one to assess the risk of ischaemic stroke adequately. Introduction The sheer number of methods currently used to image the precranial segments of carotid arteries (vessel walls, blood flow and atherosclerotic plaque itself) proves that it has not been possible yet to establish definitively which of them should be the gold standard for both determining the presence and assessing the severity of atherosclerosis, a disease that is one of the main causes of death and disability in Europe (1) . Over the last 20 years B-mode ultrasound combined with Doppler imaging has become the primary method of e192 J Ultrason 2020; 20: e191-e200 Andrzej Fedak, Robert Chrzan, Ositadima Chukwu, Andrzej Urbanik Unstable plaque is described as the one which is a potential cause of brain ischaemia. It is characterised by a thin fibrous cap, increased proteoglycan content or increased number of calcifications and surface irregularities which lead to exposed endothelium. The image of atherosclerotic plaque and preliminary assessment of its stability are described in the Gray-Weale-Nicolaides (GWN) classification and its modified versions. GWN can serve as a potential assessment tool for further diagnostic investigation and treatment, and for therapy monitoring (7)(8)(9)(10)(11) . The course of atherosclerotic plaque development has not been fully discovered. However, it is believed that the "natural" development of atherosclerotic plaque is associated with repeated, alternate episodes of stabilisation and destabilisation (12)(13)(14) (Fig. 1). The majority of studies describing the mechanisms of atherosclerotic lesion progression focus on changes at the cellular level. 
These processes cause limited changes to the plaque's structure leading to its local damage; in consequence, a cascade of destabilisation is triggered (15,16) . According to studies by Constantinides (17,18) , the presence of neovascularisation in the lipid core of atherosclerotic plaque is a sign that the process leading to its impaired stability has been activated. It was also concluded (10,11,19,20) that changes in the structure of the lipid core of atherosclerotic plaque associated with impaired neovascularisation play a fundamental role in the transformation of the plaque's collagen matrix, leading to core necrosis, and, subsequently, to loss of stability (Fig. 2). Signs of active inflammation in the atherosclerotic plaque are assumed to be a definitive criterion of plaque instability; these signs include lipid core neovascularisation (21,22) or products of plaque inflammation (C-reactive protein, interleukin complexes, LDL oxidase, myeloperoxidase, glutathione peroxidase etc.) (23,24) . A technique which makes it possible to visualise signs of atherosclerotic plaque instability in a very reliable manner is contrast-enhanced ultrasound (CEUS) (25)(26)(27)(28) . It was discovered that bubbles filled with air or another gas (Fig. 3) are perfectly fit as a contrast agent for ultrasound examination, since they have a low volume, are stable, rheologically (haemodynamically) inactive and practically neutral for the body. This resulted in the development of contrast-enhanced blood flow examination techniques. Apart from investigating blood flow in large vessels (the aorta), medium (carotid arteries, aortic branches) and small ones (lobar and segmental arteries, arcuate arteries of the kidneys, arteries of the circle of Willis), these techniques are used to examine blood flow in the capillaries. Capillary blood flow imaging has become of particular importance in the diagnosis of Raynaud's disease (phenomenon), assessment of skin flap vitality for plastic surgery and even in microcirculation imaging; in addition, capillary flow imaging is useful for the assessment of tumour and atherosclerotic plaque neovascularisation (25,26,(29)(30)(31)(32)(33)(34)(35)(36) . Ultrasound contrast agents modify acoustic impedance (acoustic rigidity) of tissues (the impedance of contrast agent microspheres is approximately 300 higher than that of red blood cells) and increase blood echogenicity. This phenomenon occurs as a result of changes in ultrasound wave parameters (extinction and attenuation) due to the presence of fine gas structures: ultrasound enhancing agent (UEA) microspheres. Another characteristic of ultrasound enhancing agent microspheres, which is used to change blood echogenicity and increase or decrease the returning echoes, is the ability of microsphere walls to resonate in response to ultrasound waves with an appropriate frequency. As a result of interaction between the microspheres and the ultrasound wave, their walls reach their resonant frequency. As a result, apart from waves with fundamental frequencies emitted by the ultrasound probe crystals, waves with harmonic frequencies also reach the receiver (ultrasound probe) (Fig. 4). The probe also receives waves with subharmonic and ultraharmonic frequencies, which are used in differential tissue harmonic imaging (DTHI). Currently, the use of these kinds of waves is being investigated in clinical trials. 
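The harmonic response just described can be illustrated with a small numerical sketch. The Python fragment below is only a toy model written for this text (the quadratic nonlinearity, the 3 MHz drive frequency and the echo amplitudes are illustrative assumptions, not parameters from the cited literature): it passes a pulse through a crude "nonlinear bubble" response and shows that the returned spectrum contains a second-harmonic component that a linear tissue echo lacks, which is what harmonic and contrast-specific receive modes exploit.

```python
import numpy as np

# Toy illustration of harmonic generation by a nonlinear scatterer.
# All numerical values are illustrative assumptions, not data from the article.
fs = 100e6                       # sampling rate: 100 MHz
t = np.arange(0, 20e-6, 1 / fs)  # 20 microseconds of received signal
f0 = 3e6                         # transmit (fundamental) frequency: 3 MHz

transmit = np.sin(2 * np.pi * f0 * t)

# Tissue at low acoustic pressure behaves approximately linearly, so its echo
# is just a scaled copy of the pulse. A microbubble driven near resonance
# responds nonlinearly; a quadratic term is the crudest possible model of that.
tissue_echo = 0.5 * transmit
bubble_echo = 0.5 * transmit + 0.2 * transmit ** 2

def amplitude_at(signal, f):
    """Spectral amplitude of `signal` at frequency f (nearest FFT bin)."""
    spec = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

for name, echo in [("tissue", tissue_echo), ("bubble", bubble_echo)]:
    print(name,
          "fundamental (3 MHz):", round(amplitude_at(echo, f0), 3),
          "second harmonic (6 MHz):", round(amplitude_at(echo, 2 * f0), 3))
# The bubble echo carries energy at 6 MHz that the tissue echo does not,
# which is why receiving at harmonic frequencies separates microbubbles from tissue.
```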
[Fig. 1 labels: lipid-rich / fibrous plaque; SMC and matrix synthesis; lipids and inflammation (12)(13)(14)] Another phenomenon utilised in contrast-enhanced ultrasound imaging is pulse inversion, which consists in emitting two pulses one immediately after another with the same amplitude but an inverted phase. Since microspheres are "non-linear reflectors" of ultrasounds, the echoes generated by them do not cancel out on summation, as echoes generated by tissues do, but enhance one another. Ultrasound examination techniques utilise the phenomenon of overlapping wave harmonics (Fig. 4). UEA microspheres behave differently depending on the strength of the ultrasound beam expressed by the mechanical index (MI). This index reflects the probability of unfavourable mechanical (non-thermal) bioeffects of ultrasounds associated with streaming and cavitation effects. MI also shows the level of negative acoustic pressure in the ultrasound field, indicating the maximum amplitude of a pressure pulse in a tissue subjected to ultrasounds. At a low MI (<0.4), the response of the microspheres is linear: oscillation induced by ultrasounds, which is caused by the compression and decompression of microsphere walls, does not cause their damage or resonance (Fig. 6). At an intermediate MI (0.4-0.8), there is a non-linear response associated with microsphere wall resonance (Fig. 7). At a high MI (>0.8), microsphere walls are damaged, which emits a high-intensity signal. Appropriate acquisition requires intermittent imaging: time is needed for subsequent bubbles to flow into the vessel lumen and into the probe field, where they will be ruptured by the emitted ultrasounds (Fig. 8). These phenomena, together with the possibility of maintaining the integrity of the microbubble wall or inducing its controlled rupture, allow one to use ultrasound enhancing agents in multiple ways to examine atherosclerotic plaques located in carotid artery walls (Fig. 5). The continuous development of contrast-enhanced ultrasound techniques keeps generating new diagnostic and therapeutic possibilities. In 2014, a comprehensive paper was published summarising exclusively the intravascular phenomena in which UEA microspheres are used (36,37). Currently, in studies on nanodroplets, methods of extravascular (interstitial) use of UEA for the diagnosis and therapy of atherosclerotic plaque are also being developed (38,39). Decision to use ultrasound enhancing agents to investigate atherosclerotic plaque stability In a situation where ultrasound examination of carotid arteries reveals the presence of a potentially unstable plaque meeting instability criteria, and the patient has a history of cerebrovascular accident (CVA) with no clear aetiology, a CEUS examination is warranted to detect possible signs of plaque neovascularisation. A contrast-enhanced ultrasound examination is not a unidirectional procedure. The type of data acquired changes depending on the contrast agent used, the manner of its administration and the method of observation. Ultrasound examination using UEA has a limited spatial and temporal scope considering the manner of administration and the method of acquisition and observation.
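A minimal helper makes the MI thresholds quoted above concrete. The mechanical index is commonly defined as the peak rarefactional pressure in MPa divided by the square root of the centre frequency in MHz; the function names and the example values below are this editor's illustration, and the regime boundaries are simply the 0.4 and 0.8 cut-offs used in this article.

```python
def mechanical_index(peak_rarefactional_pressure_mpa: float,
                     centre_frequency_mhz: float) -> float:
    """MI = p- [MPa] / sqrt(f_c [MHz]) -- the index usually shown on screen."""
    return peak_rarefactional_pressure_mpa / centre_frequency_mhz ** 0.5

def microbubble_regime(mi: float) -> str:
    """Classify expected microsphere behaviour using the thresholds from the text."""
    if mi < 0.4:
        return "linear oscillation (microbubbles preserved)"
    if mi <= 0.8:
        return "non-linear oscillation / wall resonance"
    return "bubble destruction (high-intensity transient signal)"

# Example: a 5 MHz vascular probe transmitting at 0.6 MPa peak negative pressure.
mi = mechanical_index(0.6, 5.0)   # about 0.27
print(round(mi, 2), "->", microbubble_regime(mi))
```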
Therefore, after the patient's eligibility for the procedure has been determined, a precise algorithm of action should be established, which includes the selection of: • UEA, • UEA manner of administration, • phase of observation of UEA propagation in the vessel and tissues, • method of UEA interaction with the ultrasound system, • mode of procedure recording, • method of assessment of examination findings. Ultrasound system An ultrasound system equipped with contrast-enhanced examination option is necessary. This option usually has one or two timers displaying the length of the loop recorded during the procedure. For this option, the system itself should include the possibility to record a film as raw data with a length covering at least the following phases: wash-in, arterial, venous and wash-out. The systems available on the market enable one to acquire a few loops, including their combined record. The contrast option should also include flash mode, which makes it possible to use the pulse inversion method. This method involves the emission of two ultrasound pulses with the same amplitude, shifted by 180° between one another, at a very short interval. This method utilises a non-linear response of UEA microspheres: overlapping echoes of fundamental wave harmonics. It is also necessary to have an indication of the mechanical index displayed on the desktop. The standard options of systems supporting the use of ultrasound enhancing agents include the colour flow mode (CFM) and power Doppler (PD) or its modification: directional power Doppler (DPD). Power Doppler options (PD, DPD) are necessary for possible microsphere rupturing using the flash or replenish method. A PD pulse necessary for the flash method is released automatically by the ultrasound system at preset moments or delivered on demand by the examiner after they press an appropriate function key. An appropriate ultrasound probe is necessary to conduct contrast-enhanced examination of atherosclerotic plaque. Carotid atherosclerotic plaques are usually examined using vascular probes with a foot print of approximately 45 mm, frequency range of 3-11 MHz and nominal frequencies of 5-8 MHz. Studies are also conducted on the use of a linear high speed volumetric imaging probe with UEA for three-dimensional imaging of the surface of atherosclerotic plaque (37,40,41) . Which contrast agent to use? Based on pharmacokinetic properties, ultrasound enhancing agents are divided into those which: • do not pass through the pulmonary vascular bed (only the right ventricle of the heart is visualised; short-term action): Echovist; • pass through the pulmonary vascular bed, have a short half-life (less than 5 minutes from intravenous administration), produce a low signal on harmonic imaging when low acoustic power is used: Albunex, Levovist; • pass through the pulmonary vascular bed, have a long half-life (over 5 minutes from intravenous administration), produce a high signal on harmonic imaging when preset time, which is usually 100 msec in the case of flash mode, and remain visible for a few seconds as a result of physical properties of the display monitor (34) (Fig. 10). 2. Venous phase (up to 60-90 sec from the administration of contrast agent) makes it possible to visualise the increasing contrast enhancement of the image of plaque neovessels with the effect of excessive enhancement: UEA "flowing out" of the vessels, which is described as a "blooming effect" (Fig. 11). The arterial and venous phases are assessed as wash-in phases. 
Microcirculation phase (90-180 sec from UEA administration) is a phase of continued contrast enhancement with excessive UEA being washed out of the Region of Interest (ROI). In this phase, atherosclerotic plaque neovessels are filled with blood and their echo is enhanced directly by undamaged UEA microspheres or by an increased extinction caused by gas particles (following microbubble rupture). Depending on the method used, the microspheres and gas remaining after microbubble rupture are washed out of the lumen of large blood vessels, which enables one to perform thorough observation and possible measurement of plaque enhancement (Fig. 12). Late phase (over 180 sec after contrast agent administration): UEA wash-out from plaque vessels. In this phase, distinctly hypoechoic elements can be observed, which are impossible to distinguish from the vessel lumen under physiological conditions. These include juxtaluminal black areas (JBA): extremely hypoechoic areas, described as lesions with an echogenicity of <25 GSM (grey-scale median) units, with no fibrous cap, histologically defined as necrotic components or fragments of lipid core of a damaged plaque or interpreted as GWN type I plaque described as thin-cap atheromatic plaque (TCAP), impossible to visualise clearly on B-mode ultrasound, and mobile components (described as "jellyfish sign", with an image consistent with JBA, subject to displacement during heartbeat). The mobility of these areas can also be described low acoustic power is used: Echogen, Optison, SonoVue, Sonovist; • are captured in the liver and spleen, make imaging possible after the vascular phase: Levovist, Sonovist, Sonazoid. Currently, in Poland, two ultrasound enhancing agents are approved for use. SonoVue, a Bracco product, contains sulphur hexafluoride and phospholipid stabilisers. It is a contrast agent that can be used to examine peripheral vessels. Sulphur hexafluoride (also called elegas, SF6-enflurane) is an inorganic chemical compound with very good dielectric properties. It does not have any colour, taste or odour and is approximately 6 times heavier than air. Sulphur hexafluoride is a non-flammable gas with a low chemical activity, which is non-toxic under normal conditions (comparable to noble gases such as argon or helium). It is only at very high temperatures (>200°C, e.g. at electric arc temperature) and at the presence of humidity or oxygen that small amounts of toxic substances can occur, mainly sulphur tetrafluoride (SF 4 ) and thionyl fluoride (SOF 2 ). Enhancing agent microspheres are ruptured with ultrasounds; fine bubbles of gas can circulate freely in the bloodstream. Larger amounts are exhaled and the lipid "shells" are metabolised in the liver and subsequently excreted with bile. Another UEA is Optison (General Electric), which is currently approved for the Polish market, but only for cardiac ultrasound scans. Ultrasound enhancing agents are tolerated very well: the reported anaphylactic reactions following their administration are estimated to occur in 0.001% of cases. No kidney toxicity has been found. UEA half-life is approximately 12 minutes. The amount of UEA administered during the procedure depends on the method and the result analysis protocol used. Observation phase for ultrasound enhancing agent (UEA) propagation in the vessel lumen 1. Arterial phase (up to 30-45 sec from the administration of contrast agent) makes it possible to assess the vessel lumen/atherosclerotic plaque boundary due to precise contouring of the observed object. 
The assessment of the lumen/plaque boundary can be difficult on a scan using the classic algorithm with UEA bolus as a result of microspheres filling the vessel lumen in excess. This boundary is best visible when the temporal maximum intensity projection (TMIP) technique is used (transient and flash methods). In TMIP, every microsphere, after becoming resonant following a power Doppler pulse, leaves its own separate trace of motion, reproduced by subsequent microbubbles entering the field of a probe set to flash mode. Authors (38,39,42) explain this as an "open shutter" image, where subsequent traces of microsphere motion are tracked for a as "intraplaque contents" (IC) or "motion of intraplaque contents" (MIC) (43,44) , i.e. mobility of areas referred to as JBA that is independent from the heartbeat. Here, it should be noted that MIC interferes with the image of subjective contrast enhancement of an unstable plaque (44) (Fig. 13). The microcirculation and late phases are referred to as the wash-out phase. Which contrast agent administration technique to use? The selection of the contrast agent administration technique depends on the structure of the examined object and on the aspect for which observation has been planned. Therefore, it needs to be decided whether atherosclerotic plaque itself, the boundary between the vessel lumen and plaque or plaque vascularisation will be observed. Thus, the contrast agent administration technique should be selected with a view to its most effective use. The currently used methods allow one to make complex observations provided that the advantages and possibilities of the contrast agent are used properly. Combinations of the methods described below are used to examine carotid atherosclerotic plaques. 1. Classic method: the contrast agent is administered as a bolus, with saline washing. A low MI value (up to 0.4) should be set in the ultrasound system. During the procedure, the contrast agent is used as a modifier of tissue impedance: with a low mechanical index of the sound wave, the microspheres are not ruptured (the microbubbles pass through the vessels that are large enough). During the procedure, it is possible to observe echoes appearing in the atherosclerotic plaque's topography, consistent with wash-in and wash-out phases. This method allows one to determine the presence of vessels in the observed atherosclerotic plaque. Due to a low image resolution, relatively wide vessels and a small amount of the contrast agent entering a neovascularised plaque are visualised. In this method, it is only possible to make a subjective evaluation and observe potentially contrast-enhanced atherosclerotic plaque components (Fig. 14). The amount of UEA administered is from a minimum of 4 ml (Clevert) up to 8 ml (Feinstein). Modified classic method: the contrast agent is administered in fractions. Half of a single dose should be administered in a bolus, then saline washing is performed, and subsequently the remaining portion of the contrast agent is administered. A low MI value (up to 0.5) should be set in the system. This method makes it possible for contrast agent microspheres to stay longer in the neovascularised plaque. However, the assessment of the wash-out phase is not possible (Fig. 15). The total amount of UEA is 8 ml administered in fractions (as described above). 3. Transient method: the contrast agent is administered as a bolus, with saline washing. 
It is necessary to set the mechanical index in the ultrasound system to high values: MI >1.2 (31) (0.8 according to other authors) (45). This method makes it possible to rupture the microspheres immediately after they reach the region of interest. During the procedure, due to the distinctly increased extinction of the medium, the plaque/vessel lumen boundary can be assessed more precisely. This method partly utilises the TMIP phenomenon (Fig. 16). The amount of UEA administered is a minimum of 4 ml. 4. Replenish mode, flash mode: the contrast agent is administered in a slow infusion, with subsequent saline washing. It is necessary to set the mechanical index to a low value (MI <0.4). Once the vessel is filled with the contrast agent, the replenish mode, a colour Doppler function, is used (Fig. 17). This method makes it possible to rupture the microspheres in a controlled manner to release gas in order to thoroughly fill the lumen of not only the carotid arteries, but also that of atherosclerotic plaque neovessels. This method utilises the TMIP phenomenon. The examination findings can be evaluated using GSM analysis (46). GSM presents the median of the pixel tonal distribution frequency in the range of 0 (black tones) to 256 (white tones). On ultrasound, fluid corresponds to the lowest values (blood is GSM 0-5). Solid tissues, on the other hand, correspond to the highest GSM values on ultrasound (the adventitia is GSM 180-200) (47). The assessment of enhancement in the arterial phase and the wash-out phase is made using the so-called long loop: observation lasting at least 240 sec. A difference of at least 20 GSM units between plaque enhancement before and after contrast agent administration is considered significant. The amount of UEA administered is a minimum of 8 ml (Burns). Evaluation of atherosclerotic plaque neovascularisation In the studies published to date, CEUS examination of atherosclerotic plaques was analysed based on various protocols. The simplest method of atherosclerotic plaque neovessel filling assessment is the observation of loops recorded during the procedure, which was proposed by Feinstein (26). The most common protocol, which is recommended by Iezzi et al. (26,27), is that involving a dynamic examination. It consists in observing ROI enhancement during the procedure, in the wash-in and wash-out phases. This protocol includes the late phase, recorded at 6 minutes from contrast agent administration after applying flash mode and rupturing contrast agent microbubbles using a power Doppler pulse. This protocol requires continuous recording of the procedure based on the CEUS programme timer assigned to a setting in the ultrasound system. A protocol with a late wash-out phase, with the UEA wash-out curve being recorded, makes it possible to observe neovessels while avoiding blooming artefacts from microspheres washed out of the arteries. A similar protocol was used in studies by Clevert et al. and Coli et al. (25,30), in which, apart from subjective evaluation, a full examination record consisting of a 280-360-second loop was analysed.
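The GSM-based evaluation described above reduces to very simple arithmetic, sketched below. This is only an illustration written for this text, not the software used by the cited authors: the only inputs taken from the article are the definition of GSM as the median grey level of the ROI on the 0 (black) to 256 (white) scale and the criterion that an increase of at least 20 GSM units after contrast administration counts as significant enhancement; the array names and synthetic ROIs are assumptions.

```python
import numpy as np

def gsm(roi_pixels: np.ndarray) -> float:
    """Grey-scale median of an ROI on the 0 (black) .. 256 (white) scale."""
    return float(np.median(roi_pixels))

def significant_enhancement(roi_before: np.ndarray, roi_after: np.ndarray,
                            threshold: float = 20.0) -> bool:
    """Apply the >= 20 GSM-unit criterion quoted in the article."""
    return gsm(roi_after) - gsm(roi_before) >= threshold

# Illustrative (synthetic) plaque ROIs: a hypoechoic plaque before contrast,
# and the same region after microbubbles have filled its neovessels.
rng = np.random.default_rng(0)
before = rng.integers(10, 40, size=(60, 60))            # dark, lipid-rich plaque
after = before + rng.integers(15, 35, size=(60, 60))    # enhanced by UEA

print("GSM before:", gsm(before), "GSM after:", gsm(after))
print("significant enhancement:", significant_enhancement(before, after))
```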
According to the present authors, the examination protocol proposed by Hoogi (48), among others, which includes a repeated assessment of plaque enhancement in the ROI, seems to provide a precise account of the level of atherosclerotic plaque vascularisation during the procedure, slightly reducing the role of subjective evaluation for the benefit of objective assessment, with off-line enhancement analysis using GSM (Fig. 18). Another method of increasing the assessment objectivity is the scoring system proposed by Akkus (49,50) to evaluate the enhancement of the atherosclerotic plaque in the predicted arterial, venous and interstitial phases using GSM analysis. Conclusion The ultrasound image of atherosclerotic plaque allows one to draw conclusions regarding its future fate, and, most importantly, make decisions on further patient management. However, this is possible on condition that the findings are definitive, i.e. a comprehensive evaluation of the patient's clinical situation has been made and the patient has been adequately assigned to a prognostic group. Considering the fact that atherosclerosis is a changeable, progressive process with atherosclerotic plaque transforming dynamically from a stable to an unstable state, one should make a very cautious evaluation of the current disease process and the potential or actual complications. This is because ultrasound examination is a highly subjective procedure whose result depends to a large extent on the experience of the examiner and the quality of the equipment used. Contrast-enhanced ultrasound allows one to determine the presence of neovascularisation in an objective manner; thus, the presence of active inflammation, which is an inherent feature of unstable (vulnerable) plaque, can be demonstrated beyond doubt. Current attempts at making CEUS more objective with the help of artificial intelligence (AI) will make it possible in the future to make a definitive assessment of plaque stability, and, consequently, evaluate the risk of CVA adequately. Conflict of interest The authors do not report any financial or personal affiliations to persons or organisations that could adversely affect the content of or claim to have rights to this publication.
2020-11-30T13:42:06.339Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "966fe6259c19ecb127668f7b759784707d8a0923", "oa_license": "CCBYNCND", "oa_url": "http://www.jultrason.pl/artykul.php?a=837&key=4539b39b5c74b14d4cbafdbe32897e57", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "966fe6259c19ecb127668f7b759784707d8a0923", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
191555356
pes2o/s2orc
v3-fos-license
Pantun in the text of Nyanyian Lagu Melayu Asli ( NLMA ) The purpose of this study is to understand the role of pantun in the text of Nyanyian Lagu Melayu Asli (NLMA). By using critical descriptive method accompanied by implementation of content analysis theory, the author conducted literature studies (literature studies), namely activities relating to compilation and critical analysis of literature data, such as books, magazines, documents, historical stories and etc. The results of the study found that Pantun is an old Malay poetry work that is not only full of meaning but also solid with its beauty value. Values of beauty can perceived if we are sensitive and susceptible with structure and language style a pack of Pantun. The other result of this study found the functionality of the origin creation of Pantun associated with; (1) commoners who created pantun through their own living experiences, (2) wise people who issued wise words from their contemplation and (3) wise verses from the holy book, namely the Qur‟an. The most important research results above all of them are: 1). Pantun as a literary art, which has fulfilled the provisions as one of the highest art works of the Malay heritage. 2). Pantun as a culture of the Malay community confirms that the culture of reciprocating pantun is the culture of the Malay people. 3). Therefore, as the „soul‟ of NLMA, meaning and aesthetic of a Pantun should be understood and expressed by all NLMA singers. INTRODUCTION In realizing a form of personal and cultural expression that is fundamental to the field of music, we know one of them in the form of singing. As text and history, songs and musical works use traditional signs and symbols as an alternative both in the offering and in the playing. Through singing, we can appreciate and understand cultural diversity and richness and indeed, indirectly through singing, we can also understand our identity as a cultured nation. It can be understood in a society, in-97 cluding allied Malay people, singing is also a special activity in cultural communication. This is certainly in line with the opinion of Minette Mans (2009) who says: "A song, as well as other human expressions, is a habitual pattern that is learned, as a common thing for a cultured human being. Singing is a special activity of communication. " Besides that, as one of the various cultures that have been learned from generation to generation for centuries, NLMA is a symbolic nature of a musical communication for the Malay people. This research, based on the desire to understand the musical culture and singing of the archipelago, which in particular is Nyanyian Lagu Melayu Asli (NLMA) as the ultimate wealth of allied Malay culture. NLMA was chosen as the main aspect in this research because this genre is a legacy for allied Malay people around the world with its own history. In addition, the researcher has also conducted research on NLMA specifically related to the transmission (inheritance) of NLMA through formal education. This research is for a doctoral program dissertation entitled "Development of Curriculum for Teaching and Learning of Nyanyian Lagu Melayu Asli (NLMA) for Formal Education Institutions" at University Pendidikan Sultan Idris (UPSI), Malaysia. Even though it is also related to pantun, but the discussion is not very specific, so that the writer intends to study pantun in the context of NLMA specifically in this research. NLMA is a tradition of singing sounds with a certain rhythm. 
The variety of sounds is adjusted to the meaning that will be conveyed by the text of the song. For example in the song Tudung Periuk. The text or song lyrics conveyed illustrate the attitude of humility of the Malayans in the perspective of life and relationships among human beings. Other things also show about the manners and elegance of Malayan language in daily life. There are several types of rhythms in NLMA that are commonly sung by the Malay people, including: langgam, inang, zapin and joget rhythm. Every rhythm certainly brings its own atmosphere. For example, the type of langgam rhythm is always identical with a sad atmosphere, so the pantun used as lyrics is in sad tones too. NLMA text or lyrics are very open. There are many NLMA whose pantun"s text is developed and modified according to the location and condition of the country where the song was popularized. Hasan (2002) notes that Malay music is compatible with the Malay community itself, and most have become the universal property of Malayans in the world. Malay music which home is in Riau, or anywhere else has almost the same character. The content contained in music and text certainly gives a certain meaning in people"s lives. This classical Malay literature called pantun has played a very special role in the life of Malay society. Various soul experiences were manifested by Malay people into the pantun. Of course this is related to the way the Malay thinks which tends to express it through the metapho-ric way. According to Akmal (2015) it was alleged that there were pantuns that were very classic and popular in the Ma-lay community, namely pantun which was often used in the wedding culture of Ma-lay people, for example in reciprocating"the Door Opener". The reciprocating pan-tun "Pembuka Pintu" (the door opener) is a repying pantun one another in the door of the bride"s house that is carried out by the male guard. Reciprocating The pantun "Door Opener" is a very exciting activity, spontaneous and contains a lot of metapho-ric values, which shows the culture of the Malay community that views an object or problem through inner depth, then reveals the results of the mind, appreciation and desire of the heart by using metaphor, na-mely symbol and figuration. This conditi-on makes the dimension of language and literature very thick in the life of the Malay world. The language containing symbols and metaphors is like a mirror of the life of the Malay people (Hamidy, 2011). Because of the expansion of Sri Wijaya"s kingdom trade activities, in its development of Ma-lay language became a kind of lingua fran-ca throughout the archipelago. Pantun in the NLMA text can also be said to be the "Soul" of NLMA. The pantun itself for the Malay community have been used from all sides of life, for example, folk songs based on pantun. Even classical Malay songs that were popular in the past also spread to the archipelago such as: Dear Musalmah, Zapin Kasih and Budi, jalak Lenteng, Mak Inang Pulau Kampai and so on. These classical Malay terms refer to the notion of music genres which are based on classical literature which are used as other sciences such as customs, religion, beliefs, literature, social, politics and general culture (Nicolas, 1994). In the National Seminar proceedings organized by PS PBSI FKIP Jember University, Murti (2017) noted that "Pantun is a representation of rhetorical intelligence in the language and literature of Malay society that is beautiful with thought, beautiful in rhyme, and graceful in harmony. 
Pantun is born from the game of sound and strict rhythmic consideration with instruments that are "in" and "enlightening". Pantun includes the values of the wisdom of the local wisdom in reflecting his noble life. This wisdom has attached and symbolizes the identity of the Malay nation. " As a song text, pantun plays a big role in people"s lives. The songs that are played will certainly indirectly permeate the hearth and have the impression of the audience. The text or lyrics of this song not only can contribute towards the formation of the mind of a society, but also be a documentation of the socio-cultural history of a nation. Choo Ming"s research (2010) shows that pantun contains a lot of content including advice, ideals, philosophy, tasks, loyalty, ethics, family, respect, filial piety, obedience, memory, and many others for the supporting community such as Malay society for centuries. Therefore, the song text must also be considered for its creation so as not to bring negative impressions and as a reflection of identity for the supporting community. The beauty of the song is determined by the elements that are interrelated between the melody of the song and the lyrics. Song lyrics are part of a song that gives a new dimension to music composition. Thenew dimension is a language that makes a language like poetry feel beautiful to hear. Poems such as rhymes used for song lyrics will give the beauty of the song in two aspects, the beauty of the musical and the beauty of the literature. The beauty of sound equation in the rhymes is at the end of the line, according to the poem that is a-b-a-b. At the end of the rhyme, row one is the same as the end of the third row, and the end of the second row is the same as the end of the fourth row (Suharto & Subroto, 2014). The form of pantun as text or lyrics in NLMA is a part that is inseparable from the noble archipelago"s cultural heritage. As the formulation of the discussion above, in the end the aim of this study was to understand the role of pantun in the original text of Nyanyian Lagu Melayu Asli (NLMA) text. This study also wants to know how the role of pantun for Malay society is seen from the perspective of literature and Malay culture. METHOD This research is a qualitative research using critical descriptive method through library research (library research), which is a series of activities related to library data collection methods, such as books, magazines, documents, historical stories (Mahmud, 2011). In this study the author uses a content analysis approach model which is defined as a theory that is used to analyze all forms of communication both in the text of books, newspapers and other documenta-tion materials. As the implementation of this approach, the research has begun by reading and studying writing material, which is quite significant as a reference especially for the discussion of the study of pantun analysis in the text of Nyanyian Lagu Melayu Asli (NLMA), then the data is sorted to be easily analyzed to answer the study problems that is exist. The final study provides certainty whether the conclusions are in accordance with the formulated hypothesis if it exists. In summary, research is a genuine and true scientificcontribution to the development of knowledge (Takavoli, 2012). RESULTS AND DISCUSSION The researcher wants to note about the rhymes in the NLMA text specifically the results of the study from the literature review that has been carried out. 
For this reason, it will also be explained about the background of Malay history so that its relationship with Malay art in general and specifically to the realization and aesthetics of NLMA. It should be stated that the pantun is the "soul" of NLMA because of the importance of its role as described below. Pantun as a literary art As a literary work, pantun must have met the criteria as one of the culmination of Malay heritage art. Pantun is the work of old Malay poetry which is not only loaded with meaning but also dense with its beauty value. The value of its beauty can be seen if we are sensitive and susceptive to the structure and language of the double style of a pantun. The structure of language is not created arbitrarily but requires attention and sensitivity from all aspects including lexical selection, syllable syllables, and a limited number of lines. " The expression above gives an idea of the sensitivity of feelings that arise during the creation and selection of the right words to give meaning to the pantun. Not only that, pantun contributes to diversity in Malay art treasures. One of them is of course NLMA. To understand the NLMA text, there are no shortcuts but must understand the deep meanings of the rhymes sung. This is not an easy thing, because according to Idris (2011) "Pantun not only has a profound meaning but its offerings are also interesting with limited syllables and lines, the chosen dictation, and the structured receptions, including the composition of words and phrases." Furthermore, Ahmad (as stated by Idris, 2011) states that: "The recording of the beauty of feeling is indeed already in every double pantun. In terms of the creation of the pantun, aspects of physical beauty and the beauty of language as illustrated in the first part (intentional imagery) can be perceived as the pantun skin. The goal seems to provide an opportunity for listeners to understand the content delivered. The second part contains the real intentions about the contents of the rhymes that will be conveyed. Shafii & Introduction Research (2010) even mentions the two parts between shadow (sampiran) and intent (content) something that is inseparable, such as body and soul, birth and mind, outside and inside, even bodies and spirits for Malay people. The linking is something nice to sing and hear including from the sound at the end of the line (Braginsky, 1998;Daillie, 1990). In the rhythm of the final sound in each part of the shadow should be different while the end sound in each part of the contents should be the same as the end sound on the shadow. NLMA singers must understand the description above, because almost all NLMA song texts are in the form of rhymes. Further knowledge must be elaborated with a further understanding of the concept of the structure of a pantun. According to Alisjahbana, pantun is formed of four rows of rhyming and alternating two-two (rima a-b-a-b) and sometimes there is a pantun bond which also consists of six or eight lines (Idris, 2011). Pantun has multiple copies and each double consists of lines or slices. Examples of the Malay pantun type are pantun two slices, pantun four slices, pantun six slices, pantun eight slices, pantun ten slices, pantun twelve slices, pantun fourteen slices, and pantun sixteen slices. The number of syllables in each row consists of eight to twelve. Each double pantun has a shadow section and a purpose section: the shadow is called also as sampiran or imagined intention and the rhyme scheme is a-b-a-b. 
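The formal constraints just described (a four-line stanza, roughly eight to twelve syllables per line, and an a-b-a-b end rhyme) can be stated as a tiny checking routine. The sketch below is the present writer's illustration only; it uses a deliberately naive notion of rhyme (matching the final two letters of each romanised line) and a crude vowel-group count, both far simpler than the sound patterning a pantun actually relies on.

```python
import re

def check_pantun_form(quatrain):
    """Very rough structural check of a pantun quatrain: four lines,
    roughly 8-12 syllables per line, and an a-b-a-b end rhyme."""
    if len(quatrain) != 4:
        return False

    def syllables(line):
        # Crude vowel-group count as a stand-in for Malay syllabification.
        return len(re.findall(r"[aeiou]+", line.lower()))

    def ending(line):
        # Naive "rhyme": the last two letters of the romanised line.
        return line.lower().rstrip(".,;!? ")[-2:]

    lengths_ok = all(8 <= syllables(line) <= 12 for line in quatrain)
    rhyme_ok = (ending(quatrain[0]) == ending(quatrain[2])
                and ending(quatrain[1]) == ending(quatrain[3]))
    return lengths_ok and rhyme_ok

# A well-known traditional Malay quatrain (not quoted from this article):
example = [
    "Pulau Pandan jauh ke tengah",
    "Gunung Daik bercabang tiga",
    "Hancur badan dikandung tanah",
    "Budi yang baik dikenang juga",
]
print(check_pantun_form(example))  # True
```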
Each double or pantun stanza has (1) complete and perfect unity of mind, (2) has symbols that are in accordance with the norms and values of the local community: (3) and there is a meaning relationship between the shadow and the intent. The essence of the beauty of this rhymes is the rhythm in the lines, the sound of words that form the heart, and the contents of the next two lines (Idris, 2011;Alisjahbana, 2009) Pantun as a Malay culture Apart from being one of the tops of Malay literary works, pantun is also part of the culture of Malay society. Appropriately describe this by saying that old poems such as pantun are part of the culture emitted by the Malay community (Alisjahbana, 2009). Idris (2011) revealed that Pantun is a container used by the Malay community to express their thoughts and feelings about the meaning of life, about human behavior and its relationship with the surrounding environment. Pantun is also one of the main aspects in understanding Malay civilization because pantun usually describes the unique character of nature, environment, thought and subtlety of the Malays. Shafii"s research (2010) even found that there was a love of nature in Malay culture through poetry in his contemporary life. The following is the poetry example. Tonight the maize for roasting"s set, Tomorrow it is a lemon grass, Tonight we are together met, Tomorrow on the ways we pass. Maulina (2015) states that «... pantun seems to originate from the Malay tradition which has been so firmly rooted and becomes an inseparable part of the daily lives of its people. Pantun may spread along with the development of Malay language which became the lingua franca in the archipelago. It might be because of that, compared to people in other regions, pantun for the Malay community has been so firmly integrated and as an important medium in delivering advice regarding social relations in social life. " Piah (1989), asserts that rhyme is not only a communication tool to express emotions of love and affection. Pantun also has educational elements, which contain teachings about social life, advice, religious studies and affirming the oneness of God. Andriani (2012) also emphasized: "Pantun plays a very important role in the life of the Malay community because in the pantun many values of life are in accor-dance with Islam based on the Qur"an and Sunnah. Pantun plays a very vital role in the life of the Malay people. Through pantun, teaching points were disseminated, inherited and developed. Through rhymes also noble values are perpetuated and conveyed to members of the community There is a problem about how to in-terpret pantun values in the context of Malay language and culture. First, pantun interpretation should be based on words, phrases, lines or couplet. There is also a pantun in the form of a story called pantun cohesion (for example there is in 'dadendate', namely the art of traditional music of the people of Palu, South Sulawesi, Indonesia). This pantun genre interpretation needs to be done in the whole couplet to work on themes, problems, and teaching that might be revealed through dual continuity (Idris, 2011). Then Idris further explained how to assess the order of a pantun. According to him, the order of a pantun does not lie in one part but is related to the whole. There are rhymes that are rich with sounds and rhymes that are melodious (poetic) and others are thick with the comparison. 
Even so, the whole becomes strong and steady because of its cohesion and is not due to sound aspects or comparative aspects separately. The melodious rhymes, but the raw contents will not be considered quality. The rhymes are rich in comparison but the meaning of comparison is cheap, the nature of it will also be pinned down. The source related to the origin of the creation of rhymes in the Malay community is divided into three sources, namely; 1) commoners who create rhymes through their life experiences, 2) wise people who express phrases of results rather than their reflections and 3) holy book, namely Al-Qu"ran. Then the question also arises about the authenticity of pantun art, whether it is genuine or has other cultural influences. To answer that problem, Idris (2011) asserts that the culture of acting is Malay culture. Pantun is favored by the Malay community because pantun is a poem that does not have elements of foreign influen-ce. This is due to the fact that pantun is a product of true Malays who can describe the thinking of the people. With regard to Malay culture, NLMA is one form of Malay identity. Pantun which is the most important idiom in the NLMA text has great values as the identity of the Malay community. The pantun text in NLMA is formed by the laws contained in the repertoire including the musical theories that frame it. The aesthetic has contextual values related to the musical behavior of the community formed by the community who want to express cultural characteristics (cultural identity), and make it a social identity (Hanks, 1989). In the Malay republic, various types of literary works can be found, but why is the pantun used as an identity of the Malay identity. In addition, pantun can become a means to convey noble values including the Islamic values of the Malay community. Pantun as 'Soul' NLMA As noted before this pantun is a "soul" which shows that NLMA art depends entirely on the beauty of pantun art. Likewise, because of that, as a first step, an NLMA singer needs to understand with certainty the meaning of pantun and second, it must be clever to make rhymes when desired at the time of offering. Andriani (2012) asserted: "Pantun is very close to Malay life. Pantun is considered as a form of art that was born from the Malay cultural instinct itself. Even pantun survives its use until now in Malay life. The rhymes are often made by song lyrics or even used as new expressions." Andriani (2012) also stated: "For most Malays, especially Riau Malays, they already know the terms of the rhymes, so that only one name is mentioned, they can understand the meaning. Because of its variety, the term for the rhymes that contain the teaching and religious teachings, the Malay elders enter the rhymes into various forms of presentation so that the designation follows the intended form. For example, pantun which is used as a song or rhyming song, is no longer called pantun teaching or tunjuk ajar, but is called 'pantun nyanyian' or 'pantun lagu." If the song is a song to put the child to sleep, it is called "pantun lulling a child" or pantun "singing to lulling a child" or chid hum" Besides that, in different NLMA pantun offerings are always sung with the same melody repeated, depending on the atmosphere experienced. Below are some examples of rhymes that are always used in the NLMA and the meaning of the text within. This rhyme on Joget "Hitam Manis" is very popular, especially among teenagers and young people, even more so for those who are loving. 
This song is a song that is well known to this day and listeners really enjoy the song, rhythm and lyrics. Revealing the meaning contained in the lyrics of this song, it is a mercy verse that contains expressions addressed to loved ones. Jalak Lenteng Hit the monitor lizard skin drum, A little bit no more Where to go I want to bring, A little unlucky again Jalak lenteng chicken brook My heart to remember your master Pain really hit nettles, I can"t take a bath It hurts to live a ride, Pain should not be hearted Malay Jalak lenteng Malay song My heart saddened The rhymes in the song "Jalak Lenteng" are very popular and are often sung in the NLMA genre in areas that still practice Malay culture, especially in Indonesia and Malaysia. The first pantun is a symbol of a woman"s overflowing feeling towards someone who no longer gives attention to her. The second pantun, illustrates the distress of the heart and the feeling of sadness when remembering the fate that befell. A heart that wants to live a ride becomes increasingly painful when disappointed. That is the feeling of a Malay woman who is hurt in love matters. The village girl is good at carving, She is good at woven cloth too, Poor flower facing the water, Dew drops elsewhere Daughter"s child is sitting pensive, While arranging the lofty place I mean my heart is hugging a mountain, What"s the power of the hand does not arrive The song (Mak Inang) 'Kampai Island' is also one of the songs that are also very popular in the Malay Archipelago. The song text (lyric) is the rhymes that are simple and easy to understand. This song is also often used to accompany Traditional Malay dance, namely "Tari Mak Inang Pulau Kampai". If analyzed, the whole meaning of pantun above illustrates a person"s disappointment because his sincere love does not get a reply. The lyrics or text of this song can sometimes be composed in such a way as to maintain its rhythm, because what is prioritized is not only the text but the rhythm that can satisfy the listeners. 'Sayang Musalmah' is also one of the most popular songs in Riau, North Sumatra and Peninsular Malaysia. Pantun from this song reflects the day-to-day life of a Malay woman named "Musalmah" who always wears a bun going down to grow rice. Here there is advice about the importance for someone so that they are not always consumed with anyone because it will lead to disaster. This shows how everything that has been done can be used as an initiative in the future. This provides an illustration of how a Malay woman who is in accordance with her traditional customs must always think before accepting a person"s favor so that she will not be remo-rse in the future. The song (Zapin) 'Love and Budi' is a song that still gets a place in the heart of NLMA activists and listeners up to now especially in the Malay community. Overall the meaning contained in the above text contains the instructions, invites, prohibitions and examples of good and truebehaviors. This pantun also gives a lot of teaching about good manners and does not deviate from Malay customs. Messages delivered are very useful for our lives to become a better human being. An in-depth analysis of some of the pantun examples in the NLMA above explains that the aspects and values of the Malay community can be described. Examples of some pantun above although short of the bait but contain a solid and brilliant meaning. The Malay community in the past but today still makes pantun as one of the entertainment media. 
NLMA-shaped songs, jogets, zapins and hostages for example still use pantun as the main text to the present, despite the ability to express new pantun as a process of imagination open to a NLMA singer. What cannot be ignored is its beauty of the literary. Literary elements add to the beauty of song lyrics if sung in a song (Suharto & Subroto (2014). Repetition of the word (repetition) and the equation of sound in the repetition and at the end of the line (rhyme) add to the beauty of the song. The distinctive beauty of this song is because of the shape of the pantun which also has elements of beauty such as musical, literary and cultural inherent in the pantun itself. Pantun consists of two elements: the first two lines as sampiran and the last two lines as contents. These two parts, according to Shafii (2010) and Daillie (1990) are inseparable souls and bodies of the Malay community. The sampiran in pantun generally reflects the character and culture of Malay people. CONCLUSION The results of the analysis of content from various data as described above show that Nyanyian Lagu Melayu Asli (NLMA) is a tradition of singing sounds with a certain rhythm. The variety of sounds is adjusted to the meaning that will be conveyed by the text of the song, for example in the song Tudung Periuk delivered to describe the attitude of humility of the Malays in the perspective of human life and relationships. Pantun literary art as the text or lyrics of NLMA is an inseparable part of the realization of NLMA. As the art of Malay literature, pantun also has cultural values and symbols and communication media of the Malay community. This study has found results from the literature study of pantun in the NLMA text on the realization and aesthetics of NLMA. Pantun is said to be the "soul" of NLMA and has an important role such as: 1) Pantun as a literary art, which has fulfilled the criteria as one of the culmination of Malay heritage. 2) Pantun as a Malay culture, which confirms that the culture of acting is the culture of the Malay community. Popular rhymes are caused by the creation of true Malays who can describe the thoughts of the people. 3) Pantun as the "soul" of NLMA, which shows that NLMA art depends entirely on the beauty of pantun art. Therefore, it is important for an NLMA singer to understand the meaning of pantun and express new rhymes as a process of imagination.
2019-06-19T13:24:23.275Z
2018-06-30T00:00:00.000
{ "year": 2018, "sha1": "314eca1ccc933f5d7e0f96540c4a7cb0a9a8cfb0", "oa_license": "CCBY", "oa_url": "https://journal.unnes.ac.id/nju/index.php/harmonia/article/download/15524/8402", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "314eca1ccc933f5d7e0f96540c4a7cb0a9a8cfb0", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
119335942
pes2o/s2orc
v3-fos-license
Kaluza-Klein black hole with negatively curved extra dimensions in string generated gravity models We obtain a new exact black-hole solution in Einstein-Gauss-Bonnet gravity with a cosmological constant which bears a specific relation to the Gauss-Bonnet coupling constant. The spacetime is a product of the usual 4-dimensional manifold with a $(n-4)$-dimensional space of constant negative curvature, i.e., its topology is locally ${\ma M}^n \approx {\ma M}^4 \times {\ma H}^{n-4}$. The solution has two parameters and asymptotically approximates to the field of a charged black hole in anti-de Sitter spacetime. The most interesting and remarkable feature is that the Gauss-Bonnet term acts like a Maxwell source for large $r$ while at the other end it regularizes the metric and weakens the central singularity. Superstring/M-theory is an attempt to unify all the forces in nature and for that it requires dimensions higher than usual four [1,2]. In this theory, extra dimensions in our universe are considered to be compactified and hence are not accessible to present state of observations. There has emerged an attractive string-inspired braneworld model in which the universe we live in is a four-dimensional timelike hypersurface of a higherdimensional bulk spacetime [3,4,5]. In the latter, the fundamental scale could be of order of TeV, and one of its consequences would be creation of tiny black holes, whose detection in the upcoming high energy collider becomes a distinct possibility [6]. Studies of higher dimensional spacetime and in particular black hole or black object have therefore been pursued vigorously and extensively. Recently, the black object called the black p-brane has been investigated for its stability which should be closely related to stability of the fundamental string theory object D-brane. The black p-brane is the (n ≥ 5)-dimensional black object locally homeomorphic to M n−p × R p , where R p is the p-dimensional flat space. The special case with p = 1 is called the black string. The instability of a black p-brane originally found by Gregory and Laflamme has occupied centrestage of investigations in this field [7]. (See [8] for a review.) On the other hand, in the low-energy limit of heterotic * Electronic address:hideki@gravity.phys.waseda.ac.jp † Electronic address:nkd@iucaa.ernet.in superstring theory, the Gauss-Bonnet (GB) term naturally arises in the Lagrangian as the higher curvature correction to general relativity [9]. From a general standpoint it should also be included in the most general action for n ≥ 5 which yields quasi-linear second order differential equation. There is also a purely classical motivation for higher dimensions based on the physical realization of dynamics of self interaction of gravity [10]. For n ≥ 5, the GB term must naturally be included along with Einstein-Hilbert action in the Lagrangian giving rise to Einstein-Gauss-Bonnet (E-GB) gravity. The black p-brane solutions in E-GB gravity have recently been studied by several authors [11]. However, the generalization to the case with curved extra dimensions has not been done both in general relativity and in E-GB gravity. The purpose of this Letter is to report a new exact vacuum solution of E-GB gravity in a spacetime locally homeomorphic to M 4 × H n−4 for n ≥ 6, where H n−4 is the (n − 4)-dimensional space of constant negative curvature. We write action for n ≥ 5, where α is the GB coupling constant and all other symbols having their usual meaning. 
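The display equations labelled (1) and (2) around this point did not survive extraction. For readability, the standard forms they almost certainly contained are reproduced below in LaTeX; the Gauss-Bonnet combination is the textbook one, while the overall normalisation of the action (the prefactor of the integral) follows the usual convention and may differ from the authors' exact choice.

```latex
% Einstein-Gauss-Bonnet action with a cosmological constant (standard form):
S = \frac{1}{2\kappa_n^{2}} \int d^{n}x \, \sqrt{-g}\,
    \left( R - 2\Lambda + \alpha\, L_{GB} \right) + S_{\text{matter}} ,
\qquad (1)

% Gauss-Bonnet combination of curvature invariants:
L_{GB} = R^{2} - 4 R_{\mu\nu} R^{\mu\nu}
       + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} .
\qquad (2)
```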
The GB Lagrangian is the specific combination of Ricci scalar, Ricci and Riemann curvatures and it is given by This form of action follows from low-energy limit of heterotic superstring theory [9]. In that case, α is identified with the inverse string tension and is positive definite which is also required for stability of Minkowski spacetime. It should however be noted that it makes no contribution in the field equations for n ≤ 4. The gravitational equation following from the action (1) is given by where We consider the n-dimensional spacetime locally homeomorphic to M 4 × K n−4 with the metric, g µν = diag(g AB , r 2 0 γ ab ), A, B = 0, · · · , 3; a, b = 4, · · · , n − 1. Here g AB is an arbitrary Lorentz metric on M 4 , r 0 is a constant and γ ab is the unit metric on the (n − 4)dimensional space of constant curvature K n−4 with its curvaturek = ±1, 0. Then G µ ν gets decomposed as follows: where the superscript (4) means the geometrical quantity on M 4 . The decomposition immediately leads to a general result in terms of the following no-go theorem on M 4 : , then G A B = 0 for n ≥ 6 andk and Λ being non-zero. The proof simply follows from substitution of the conditions (i) and (ii) in Eq. (6). As a corollary, it states that M 4 cannot harbour any matter/energy distribution unless at least one of the conditions (i) and (ii) is violated. These conditions also imply for α > 0,k = −1 and Λ < 0. Hereafter we setk = −1 and obtain the vacuum solution with T µν = 0 satisfying the conditions (i) and (ii). The governing equation is then a single scalar equation on M 4 , G a b = 0, which is given by We seek a static solution for a point mass with the metric on M 4 reading as: where dΣ 2 2(k) is the unit metric on K 2 and k = ±1, 0. Then, Eq. (8) yields the general solution for the function f (r): where µ and q are arbitrary dimensionless constants. The solution does not have the general relativistic limit α → 0. There are two branches of the solution indicated by sign in front of the square root in Eq. (10), which we call the minus-and plus-branches. The n-dimensional black hole with (n − 4)-dimensional compact extra-dimensions is called the Kaluza-Klein black hole. The warp-factor of the submanifold r 2 0 is proportional to GB parameter α which is supposed to be very small. Thus, compactifying H n−4 by appropriate identifications, we obtain the Kaluza-Klein blackhole spacetime with small and compact extra dimensions. Here we shall mainly focus on the physical properties of the solution while a detailed study of its geometric structure and thermodynamical properties will be given in a forthcoming paper [12]. The function f (r) is expanded for r → ∞ as This is the same as the Reissner-Nordström-anti-de Sitter (AdS) spacetime for k = 1 in spite of the absence of the Maxwell field. This suggests that µ is the mass of the central object and q is the charge-like new parameter. Further, the solution (10) agrees with the solution in the Einstein-GB-Maxwell-Λ system having the topology of M n ≈ M 2 × K n−2 although it does not admit n = 4. The solution is given for n ≥ 5 by ds 2 = −g(r)dt 2 + 1 g(r) dr 2 + r 2 dΣ 2 n−2(k) with g(r) = k + r 2 2(n − 3)(n − 4)α where g c is the coupling constant of the Maxwell field, and M and Q are mass and charge respectively [13,14]. k is the curvature of K n−2 and a constant V k n−2 is its surface area on compactifications. The non-zero component of the Maxwell field reads as representing the coulomb force of a central charge in ndimensional spacetime. 
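Two further equations in the passage above, the large-r expansion of f(r) and the Maxwell field of the comparison solution, are likewise missing from the extracted text. For comparison only, the familiar Reissner-Nordström-anti-de Sitter form that the expansion is said to reproduce, and the Coulomb field of a point charge in an n-dimensional spacetime, read schematically (generic coefficients, not the authors' exact expressions):

f_{\rm RN\text{-}AdS}(r) \simeq k - \frac{2\tilde{\mu}}{r} + \frac{\tilde{q}^{\,2}}{r^{2}} + \frac{r^{2}}{\ell^{2}},
\qquad
F_{tr} \propto \frac{Q}{r^{\,n-2}} .

The paper's parameters \mu and q play the roles of the mass and "charge" terms, the AdS radius \ell is set by the effective cosmological constant, and the 1/r^{n-2} fall-off is the usual Coulomb law for a point charge in n spacetime dimensions.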
Thus the parameters µ and q act as mass and "charge" respectively in spite of the absence of the Maxwell field. The new "gravitational charge" q is generated by our choice of the topology of spacetime, splitting it into a product of the usual 4-spacetime and a space of constant curvature. This splitting gives rise to the Kaluza-Klein modes which are known to generate such a gravitational charge known as the "Weyl charge" in the Randall-Sundrum braneworld model [15]. There it is caused by the projection of the bulk Weyl curvature onto the brane, that is how it derives its name. One of the first and the simplest black hole solutions on the brane obtained by Dadhich et al. by solving the gravitational equation on the brane is indeed given by the Reissner-Nordström metric [16]. (See [17] for the rotating case.) The Weyl charge was taken to be negative so as to work in unison with the mass. In our solution as well, q must be negative so as to ward off any branch singularity indicated by vanishing of the expression under the square root in (10). So we have a new Kaluza-Klein black hole with mass µ and Weyl charge q < 0 sitting in an AdS spacetime. It is really remarkable that our new solution asymptotically approximates (except for AdS background) to the brane black hole which was obtained in quite a different setting [16]. The common point between the two is splitting of spacetime into a bulk-brane system or a product. The Weyl charge seems to be caused by the Kaluza-Klein modes which require splitting of spacetime in some or the other way. Thus the Reissner-Nordström metric seems to be an asymptotically true description of black hole with GB adding AdS to it. Clearly the global structure of our solution (10) will be similar to that of the solution (13) and it has been comprehensively studied in [14]. Note that f (0) = k ∓ √ −q, which produces a solid angle deficit and it represents a spacetime of global monopole [18]. This means that at r = 0 curvatures will diverge only as 1/r 2 and so would be density which on integration over volume will go as r and would therefore vanish. This indicates that singularity is weak as curvatures do not diverge strongly enough. We plot f (r) and df /dr to get a good feel of the metric and gravitational field. Figs. 1 and 2 respectively refer to black hole (in the minus-branch) and naked singularity (in the plus-branch) and they show that both metric and gravitational field always remain finite for finite r. That is why singularity is weak [14]. The solution (10) is the general solution of G a b = 0 with the metric assumption (9) in addition to the conditions (i) and (ii). Without the condition (i), it is obvious from Eq. (6) that the vacuum equations G A B = 0 are identical to those in general relativity with a cosmological constant. Then, by the generalized Birkhoff's theorem, the general solution of G A B = 0 with the topology M 4 ≈ M 2 × K 2 is Eq. (9) with f = k − µ/r − λr 2 or the (anti-)Nariai solution. Being confronted with G a b = 0, the former will lead to µ = 0 [12]. Now arises the question of the interior solution. Our ansatz for local topology places a stringent constraint for interior of black hole. It is rather natural to consider the situation where α and Λ in the interior are identical to those in the exterior. Consequently, the condition (ii) holds in the interior, too. Then, by the contraposition of the no-go theorem proven above, the matter interior represented by the metric g µν = diag(g AB , r 2 0 γ ab ) cannot satisfy the condition (i). 
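The statement above that the central singularity is weak can be made explicit with a one-line estimate. If the curvature, and hence the effective density, diverges only as \rho \propto 1/r^2 near the centre, the mass contained within a small radius behaves as

m(r) \;\propto\; \int_0^{r} \rho(r')\, r'^{2}\, dr' \;\propto\; \int_0^{r} \frac{r'^{2}}{r'^{2}}\, dr' \;=\; r \;\longrightarrow\; 0 \quad (r \to 0),

so the integrated mass vanishes at the centre even though the curvature itself diverges; this is the sense in which the singularity is called weak.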
Therefore, such an interior solution cannot be attached to our vacuum solution. However, it could be attached to the interior with the metric g_{µν} = diag(g_{AB}, S(x^D)^2 γ_{ab}), where S is a scalar on M^4, at S^2 = r_0^2 = 2α(n − 4)(n − 5). The matching problem to the interior is a very involved and difficult problem which we shall address in our future studies. We have thus found a new Kaluza-Klein black hole solution of Einstein-Gauss-Bonnet gravity with the topology of a product of the usual 4-spacetime with a space of constant negative curvature. In this solution we have brought the GB effects down to the four-dimensional black hole as envisaged in [10]. Asymptotically it resembles a charged black hole in an AdS background, while at the other end it approximates to a global monopole. What really happens is that the GB term regularizes the metric and weakens the singularity, while the presence of the extra-dimensional hyperbolic space generates the Kaluza-Klein modes giving rise to the Weyl charge. This is indeed the most interesting and remarkable feature of the new solution, which needs to be probed further for greater insight and application [12]. The authors would like to thank Reza Tavakol and Umpei Miyamoto for discussions. HM would like to thank Hideki Ishihara and Takashi Torii for useful comments. HM would also like to thank IUCAA for warm hospitality where the work was conceived and formulated.
2019-04-14T02:54:54.122Z
2006-05-03T00:00:00.000
{ "year": 2006, "sha1": "3492641518a603786976d23ecd5833be269443d3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0605031", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c2036162fd649f644445d1cef2fcc3746378dffc", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
98185655
pes2o/s2orc
v3-fos-license
A microcosm study of permeable reactive barriers filled with granite powder and compost for the treatment of water contaminated with Cr (VI) The permeable reactive barrier (PRB) is a technology developed for the removal of contaminants in groundwater. It consists of a screen perpendicular to the flow of contaminated groundwater filled with a material capable of adsorbing, precipitating or degrading pollutants. Several materials have been tested for their use as reactive substrates for the construction of PRBs. Waste materials are of particular interest for this purpose due to the possibility of their reuse and their generally lower cost. With this aim, the Cr (VI) retention capacity of a filler material consisting either of pine bark compost (PB) or a 50% mixture of compost and granite powder (PB50) was evaluated using an experimental device specifically designed for this study, which reproduces a permeable reactive barrier at the laboratory scale. Percolation experiments were carried out with a solution of 100 mg L-1 Cr (VI) in 0.01M KNO3, followed by a leaching step with the saline background. The results show that compost is a highly efficient filler for permeable reactive barriers with almost 100% retention of Cr, whereas the retention efficiency of the mixture of PB50 oscillated between 18 and 46% during the experiment. The Cr retained by the filling material is strongly fixed, since no desorption was detected by leaching with the saline background, and concentrations in the standard Toxic Characteristic Leaching Procedure (TCLP) extracts were lower than 1 mg L-1. This behaviour minimizes the risk of release of the Cr retained by the material of the barrier in the event of it being traversed by water not contaminated with Cr. Modelling with Visual Minteq indicates that in the experiments with PB, the reduction of Cr (VI) to Cr (III) occurs and that Cr (III) is associated with dissolved organic matter, which is a form of lower toxicity than the initial Cr (VI) species. In turn, in the experiments with PB50, Cr (III) and Cr (VI) coexist and the oxidised form is not associated with dissolved organic matter, which suggests greater toxicity. The results indicate that pine bark compost is a potential candidate for use as filler material permeable reactive barriers. 
A microcosm study of permeable reactive barriers filled with granite powder and compost for the treatment of water contaminated with Cr (VI)
Estudio a escala de microcosmos de barreras permeables reactivas con serrines graníticos y compost para el tratamiento de aguas contaminadas con Cr (VI)
Estudo a escala de microcosmos de barreiras reativas permeáveis com serragem de granito e composto para o tratamento de água contaminadas com Cr (VI)

...PRBs, achieving a Cr retention close to 100%. The mixture of granite powder and compost showed a retention capacity that ranged between 18 and 46% over the course of the experiment. The Cr retained by the filler material is strongly fixed, since it is not desorbed by leaching with the saline solution, and the concentrations in the extracts obtained by applying the standard Toxic Characteristic Leaching Procedure (TCLP) were below 1 mg L-1. This behaviour minimises the risk of release of the Cr retained by the barrier material in the event that it were traversed by water not contaminated with Cr. Modelling with Visual Minteq indicates that, in the eluates of the experiments with PB, reduction of Cr (VI) to Cr (III) has occurred and that the Cr (III) is associated with the dissolved organic matter, which suggests a reduction in toxicity compared with that of the Cr (VI) introduced in the percolation solution. In the eluates of the experiment with PB50, both Cr (III) and Cr (VI) are found, and the oxidised form is not associated with the dissolved organic matter. The results of this study indicate that pine bark compost has great potential for use as a filler material for permeable reactive barriers.

Introduction
Water pollution by chromium is an environmental hazard that poses serious risks to human health. Cr exists in nature in two stable oxidation states: Cr (III) and Cr (VI), which differ in terms of mobility, bioavailability and toxicity. Cr (VI) is toxic, mutagenic and potentially carcinogenic, and has great mobility. In contrast, Cr (III) is considered an essential nutrient, less toxic and less mobile than Cr (VI). The most common Cr form in nature is Cr (III); however, under strongly oxidizing conditions, Cr is found as Cr (VI), mostly as the chromate anion (CrO4^2-). In turn, Cr (VI) can be reduced to Cr (III) in environments where a ready source of electrons is available: dissolved Fe (II), reduced Mn oxides or reduced S compounds. Organic matter can also act as an electron donor, the reduction being more favourable in acid than in alkaline environments (Bartlett and Kimble 1976; Cary et al. 1977). It has been found that the addition of a C source and protons can stimulate microbial activity, thus favouring the reduction of Cr (VI) to Cr (III) (Losi et al. 1994; Bolan et al. 2003). Chromium pollution is mainly derived from mining and industrial activities. Cr is used in many products and industrial processes such as leather tanning, wood treatments and chrome plating (USEPA 1997). Many of these industrial applications use Cr in the VI oxidation state. Occasionally, Cr reaches the soil through spillages and disposal. As rainwater infiltrates into the soil, Cr may dissolve and Cr-rich lixiviates can reach the water table, causing contamination of groundwater and leading to potential problems for drinking water quality.
Among the technologies for groundwater remediation, permeable reactive barriers (PRBs) are considered suitable systems for the treatment of contamination plumes.This technique is based on the in situ installation of a trench perpendicular to the direction of the flow of the polluted plume.The walls of the barrier are permeable and allow the passage of water, contacting the filler material (reactive material) which can adsorb, precipitate or degrade the contaminants.As the contaminated water passes through the reactive zone of the barrier, the harmful chemicals are retained or transformed into harmless substances (USEPA 1997).Selection of the reactive material used to construct PRBs will depend on the substances that have to be removed and on the mechanism used for this purpose (adsorption, precipitation or degradation).One of the most commonly used materials is granulated metallic Fe 0 , which has been used to degrade organic compounds and to precipitate organic and inorganic substances (USEPA 1997), but other materials such as compost have also been tested (Boni and Sbaffoni 2009). Traditionally, Cr treatment technologies include ion exchange and chemical reduction followed by precipitation (Benefield et al. 1982).PRBs filled with Fe 0 have been used to treat groundwater contaminated with Cr (VI), reducing Cr (VI) to Cr (III) and giving rise to the coprecipitate Cr x Fe 1-x (OH) 3 (James and Barlett 1983;Palmer and Wittbrodt 1991).Other Fe compounds, such as Fe sulphides and Fe oxyhydroxides, also promote the reduction and precipitation of the chromate anion (Blowes et al. 2000).The use of biological materials has been proved as an alternative to the mentioned methods due to their removal efficiency and low cost (Battacharaya et al. 2008;Jain et al. 2009;Miretzky and Cirell 2010).Particularly, diverse biosorbents have been used to remove Cr (VI) from contaminated waters (Boddu et al. 2003;Koby 2009). The use of readily available, safe and inexpensive waste materials as filler materials for PRBs represents an interesting opportunity from an economic and environmental perspective.In a previous study, Barral et al. (2014) conducted batch type experiments addressed to evaluate the Cr (VI) adsorption capacity of granite powder (GP), pine bark compost (PB), composted municipal solid waste (M) and mixtures containing different proportions of GP and compost.Individually, GP was not suitable for use as a PRB filler because of its moderate permeability and Cr (VI) adsorption capacity.The addition of compost M decreased the hydraulic conductivity of the mixtures and only slightly improved the adsorption capacity.In turn, the addition of compost PB increased the hydraulic conductivity and improved the Cr (VI) adsorption capacity of the material, while decreasing Cr desorption.Adsorption data for compost PB were well fitted by the Langmuir model and the maximum adsorption capacity (X m ) determined was 21 mg g -1 .This value was in accordance with that found by Wei et al. (2005) for the adsorption of Cr (VI) on compost (36 mg g -1 ) and higher than the value reported by Jain et al. (2009) for sunflower waste biomass (8 mg m -1 ).Barral et al. (2014) recommended mixtures containing 50 or 25% granite powder and 50 or 75% pine bark compost (v/v), respectively, as the best materials for use as PRBs in relation to cost/effectiveness. 
In this work the use of pine bark compost and its mixture with granite powder as PRB filler is tested at a microcosm scale.To this end, an experimental device that was called "reactive box" was specifically designed to evaluate Cr (VI) retention capacity in conditions that resemble those of the barrier.The reactive box simulates the arrangement of the filler materials in PRBs at a laboratory scale and allows the permeation of Cr solutions and the collection of eluates.Adsorption capacity was evaluated by percolating Cr solutions and determining Cr concentrations in the eluates.Because leachability of the previously retained Cr is a critical aspect of the performance of a PRB, desorption of previously adsorbed Cr was subsequently evaluated by means of percolation experiments with unpolluted solutions and by chemical extraction of the filling materials.Finally, modelling with Visual Minteq was applied to estimate the Cr forms in the eluates, since this is a critical aspect in terms of mobility and potential toxicity. Reactive materials The reactive materials tested were: 100% pine bark compost (PB) and a 50% (v/v) mixture of PB and granite powder (PB50).PB was obtained through an aerobic transformation in windrows and was supplied by Costiña Orgánica (A Coruña, Spain).Granite powder (GP) is a waste product generated during the cutting, polishing and finishing of the blocks extracted from quarries, and was supplied by granite transformation plants located in Porriño (Pontevedra).The industries in this area mainly use local adamellitic granites, with quartz, abundant biotite and equivalent proportions of potassium feldspar and plagioclase, as well as granodiorites and biotite-amphibole granites, with less potassium feldspar than plagioclase and biotite as the principal mica (IGME 1981).GP composition is coincident with that typical of the rocks from which it is originated, except for the concentrations of Ca, Fe and some trace elements which are higher in the granite powder, due to the use of metal filings as abrasive products during the cutting process and the addition of calcium hydroxide to avoid the appearance of iron oxide stains on the stone.More details on GP composition can be found in Barral et al. (2005) and Silva et al. (2013).GP samples were air-dried and gently crushed to < 2 mm and then combined into a single representative sample which was employed for PB50 preparation. The tested materials were characterized in a previous study (Barral et al. 
2014) and their main properties are shown in Table 1.GP shows alkaline pH, and low EC and water content; it is practically devoid of organic matter and has a moderately low permeability.In turn, PB has an acidic pH, slightly higher EC and permeability, and is mostly constituted by organic matter.For this study an experimental device called "reactive box" was specifically designed to simulate, at a microcosm level, a reactive barrier disposed vertically through which the contaminant plume passes.The "reactive box" consists of a prismatic container (25 x 15 x 12 cm), which can be divided into two compartments by a sheet of multiperforated methacrylate (Figure 1).The first compartment, which occupies a third of the volume of the device, is filled by washed quartz sand, aimed at achieving a homogeneous distribution of the percolating liquid, and the rest is filled by the reactive material.A 0.5-cm diameter hole at the bottom end of the first compartment allows the entry of the solutions.The pollutant output is produced by another hole located at the top of the opposite face of the box, to ensure that the fluid path includes all the material under test.The device includes a reservoir for the solutions to be percolated, which move through a flexible tube, driven by a Gilson peristaltic pump operating at 2 rpm.It was previously demonstrated that the material is homogeneously wetted, and no apparent preferential flow areas were observed. Experimental procedure The device was initially saturated with a 0.01M KNO 3 solution.Subsequently, a solution of 100 mg L -1 of Cr (VI) (as K 2 Cr 2 O 7 ) in a 0.01M KNO 3 saline background was percolated, in a volume approximately equivalent to four pore volumes (8 L).To evaluate the desorption of the Cr retained by the reactive material, a leaching experiment was subsequently performed with 0.01M KNO 3 solution, in a volume equivalent to four pore volumes (8 L).The total duration of the experiment was about 30 h.Consecutive aliquots of 0.5 L of effluent were collected during the adsorption and desorption steps (approximately every hour) and submitted to analysis of Eh, pH, total Cr, inorganic carbon (IC) and total organic carbon (TOC) as described below. Chemical analysis of leachates The pH and Eh of the eluates were measured using a portable electrode (HANNA HI 9025C). Then the eluates were filtered by 0.45 µm and total Cr concentration was determined by flame atomic absorption spectroscopy (SPECTRAA220 FS from VARIAN) (detection limit 1 mg L -1 ).Total carbon (TC) was determined by catalytic oxidation at 680 °C and determination of the CO 2 evolved by IR detection (TOC-5000 from SHIMADZU).Inorganic carbon (IC) is determined by measuring the CO 2 released following sample acidification in the same apparatus.Total organic carbon (TOC) was obtained from the difference between TC and IC.All measurements were performed in duplicate. 
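As a quick consistency check of the percolation figures quoted in the experimental procedure above, the pore volume, mean flow rate and aliquot cadence can be recovered from the stated numbers. The sketch below simply restates those values; nothing in it is new data.

# Consistency check of the percolation figures given in the text.
PORE_VOLUMES_PER_STEP = 4        # each step (sorption, desorption) = four pore volumes
VOLUME_PER_STEP_L = 8.0          # litres percolated per step
TOTAL_DURATION_H = 30.0          # approximate duration of the whole experiment
ALIQUOT_L = 0.5                  # volume of each collected aliquot

pore_volume_L = VOLUME_PER_STEP_L / PORE_VOLUMES_PER_STEP     # 2 L
total_volume_L = 2 * VOLUME_PER_STEP_L                        # 16 L over both steps
mean_flow_L_per_h = total_volume_L / TOTAL_DURATION_H         # ~0.53 L/h
aliquot_interval_h = ALIQUOT_L / mean_flow_L_per_h            # ~0.9 h

print(f"pore volume      ~ {pore_volume_L:.1f} L")
print(f"mean flow rate   ~ {mean_flow_L_per_h:.2f} L/h")
print(f"aliquot interval ~ {aliquot_interval_h:.1f} h (consistent with 'approximately every hour')")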
Chemical speciation of the leachates To estimate the composition of the leachates, the chemical equilibrium model for the calculation of metal speciation Visual MINTEQ version 3.0 (Gustafsson 2010) was applied to eluates from the retention step.For the modelling of the interactions between metallic ions and humic substances, the NICA-Donnan model was used, a combination of the non-ideal competitive adsorption (NICA) isotherm description of binding to a heterogeneous material, coupled with a Donnan electrostatic sub-model describing the electrostatic interactions between ions and the humic material.In the NICA-Donnan model, the specific bond between the cations and the negatively-charged functional groups is described using the NICA isotherm, whereas the non-specific electrostatic bond with any negative charge is described using the Donnan model (Kinninburgh et al. 1996).The model assumes that the humic substances present two binding sites, mainly attributed to the carboxylic and phenolic functional groups (Milne et al. 2001).Additionally, the model considers that the humic substances are formed by a mixture of 90% fulvic acid (FA) and 10% humic acid (HA), which are representative values of humic substances in natural water (Tipping 2002). TCLP extractions To evaluate the potential desorption of the retained Cr, the standard Toxicity Characteristic Leaching Procedure (TCLP), according to EPA Method 1311 (USEPA 1992), was applied to the filler materials removed from the reactive box after the desorption step.An extraction with aqueous acetic acid (a solution made with 5.7 mL glacial acetic acid in 1000 mL distilled water buffered to pH 4.93 with 0.1N NaOH) was performed using a 1:20 solid: solution ratio.The suspensions were shaken on an end-over-end shaker at 30 rpm during 18 h at 23 ºC.After the extraction step, samples were centrifuged at 2000 rpm during 15 min and filtered by 0.45 µm. Total Cr was determined in the extracts as explained above. Results and Discussion The PB compost showed high efficiency as a potential PRB filler, as almost 100% of Cr in the percolating solution was retained by the material throughout the experiment, whereas the mixture of PB and GP only retained between 18 and 46%.Taking into account the percolated volume, the initial Cr concentration and the mass of reactive material, 0.11 mg of Cr per gram of reactive material was retained by PB50 at the end of the sorption experiment, whereas PB retained 1.05 mg g -1 .The latter value was notably lower than the maximum adsorption capacity (36 mg g -1 ) for PB determined by Langmuir model in Barral et al. (2014).Although the compost is more effective for Cr retention, its mixture with GP would improve the constructive properties of the PRB and allow adjusting the hydraulic properties of the mixture to achieve the retention times that would allow the attenuation of the contaminant (Barral et al. 2014). 
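The retained-Cr figures quoted above (about 1.05 mg g-1 for PB and 0.11 mg g-1 for PB50) follow from a simple mass balance over the percolated volume: inlet concentration times percolated volume times the mean retained fraction, divided by the mass of filler. A minimal sketch of that balance is below; the mass of reactive material in the box is not stated in this excerpt, so the value used is purely illustrative.

def retained_cr_mg_per_g(c0_mg_per_L, volume_L, retention_fraction, filler_mass_g):
    """Mass balance: Cr retained per gram of filler."""
    return c0_mg_per_L * volume_L * retention_fraction / filler_mass_g

# 100 mg/L Cr(VI) and 8 L percolated are from the text; retention ~100% for PB.
# The 750 g filler mass is a placeholder, NOT a value from the paper.
example = retained_cr_mg_per_g(c0_mg_per_L=100.0, volume_L=8.0,
                               retention_fraction=1.0, filler_mass_g=750.0)
print(f"example retained Cr ~ {example:.2f} mg per g of filler")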
No desorption was observed in the subsequent percolation with KNO 3 0.01M, and no significant Cr was extracted by the TCLP procedure (concentrations were under the detection limit 1 mg L -1 ).This fact confirmed the strong retention of Cr (VI) by the tested material, which is a relevant feature for potential PRB fillers, as Cr would not be remobilised when unpolluted water passes through the barrier.The eluates of PB presented higher Eh and were more acidic than the eluates of PB50, and showed constant values for these parameters throughout the sorption experiment (Figure 3a).IC was scarce in the eluates, mostly for the more acidic PB.TOC decreased for both filling materials along the experiments and was almost exhausted after the passage of 4.5 L of percolating solution for PB50 (Figure 3b). Modelling with Visual Minteq was applied to the eluates obtained in the sorption step.pH and Eh, and concentrations of dissolved Cr and TOC were introduced as inputs in the model. To determine the proportion between TOC and total dissolved organic matter (DOM), a default factor of 1.65 was used in the model, which is an average of the results obtained for lakes and streams in Sweden (Sjöstedt et al. 2010).The model indicates that Cr (VI) was completely reduced to Cr (III) in the eluates of PB and that Cr (III) is completely associated with DOM (Figure 4).On the contrary, in the experiments with PB50, Cr (VI) was the predominant form in the sorption step.Moreover, Cr (VI) is not associated with DOM in the eluates, whereas this fraction represents between 8 and 55% (mean 33%) of Cr (III) in the sorption step.Adsorption is considered an efficient method for decontamination of polluted waters.The adsorbent properties of organic materials and particularly of compost have been frequently applied to soil and water decontamination (Blowes et al. 2000;Tsui et al. 2003;Farrell and Jones 2009;Pereira et al. 2009;Smith 2009;Park et al. 2011;Paradelo and Barral 2012), and specifically to remove dissolved Cr (VI).The Cr retention capacity of compost is attributed to the cation and anion adsorption capacities of the organic matter-rich materials and their potential reducing effect.Thus, in a soil incubation experiment, Bolan et al. (2003) Although the compost PB is the most reactive component of the tested materials, it has several limitations to be used alone, as it can be easily displaced in the barrier and can experiment volume changes.It also shows an excessive permeability, making it difficult to achieve a sufficient residence time for the attenuation of contaminants (Barral et al. 2014).Therefore, mixing with GP is recommended as it provides physical support, reduces volume changes and avoids the movement of the compost inside the barrier (AFCEE 2008).Moreover, mixing compost and GP allows reaching suitable hydraulic conductivities (Barral et al. 2014).In this way, the mixtures of 50% GP and 50% PB compost could also be considered suitable as PRBs fillers, combining moderate adsorption and low Cr desorption with an acceptable permeability. PRBs filled with PB should be also effective in retaining other metals with affinity for organic matter such as Cu.Other uses of PB for metal decontamination such as soil bioremediation or retention of metal spillages to water can be envisaged with promising perspectives. 
Conclusions
The "reactive box" device, designed and used in this study to reproduce the operation of a permeable reactive barrier at a microcosm scale, proved to be suitable for this purpose, allowing for the evaluation of the retention capacity and release of pollutants. Pine bark compost was the most reactive filler material for the decontamination of Cr (VI) polluted waters, as it showed a high sorption capacity and low desorption both in saline 0.01M KNO3 solution and in TCLP extracts. Moreover, Cr (VI) was reduced to the less toxic Cr (III) associated with organic matter in PB eluates. Nevertheless, the incorporation of the granite powder is useful from the viewpoint of construction and physical stability of the barrier. Its proportion in the filler mixture should be based on the criterion of achieving the hydraulic conductivity necessary to optimize retention and facilitate the construction of PRBs.

Figure 1. Reactive box for percolation experiments: a) Experimental setup, and b) Detail of the Reactive Box.

The percentages of Cr retention and release are presented in Figure 2. The first 2 L of the eluate in the sorption step (roughly corresponding to one pore volume) are not represented because in this volume substitution of the saturating saline solution by the Cr solution occurs and Cr concentrations in the eluates are affected by dilution, thus overestimating retention. Similarly, Cr data corresponding to the first 2 L of the eluates in the desorption step are not represented because in this volume substitution of the Cr solution by the saline leaching solution occurs and Cr concentrations in the eluates mostly represent Cr remaining in the percolating solution, thus overestimating desorption.

Figure 2. Retention and desorption of Cr by the tested materials.
Figure 3. (a) pH and Eh conditions in the eluates of the retention step, (b) Total organic carbon (TOC) and inorganic carbon (IC) in the eluates.
Figure 4. Cr speciation in the eluates from the retention step, as predicted by Visual Minteq applied to PB and PB50.
Table 1. General properties of the tested materials. EC: electrical conductivity; CBD: compacted bulk density;
2018-12-30T00:01:58.322Z
2015-07-14T00:00:00.000
{ "year": 2015, "sha1": "5d83c5a612b2a88044932a6eaa3bd35da736870f", "oa_license": "CCBYNC", "oa_url": "https://minerva.usc.es/xmlui/bitstream/10347/21648/1/2015_sjss_cancelo_microcosm.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5d83c5a612b2a88044932a6eaa3bd35da736870f", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
119206869
pes2o/s2orc
v3-fos-license
Developing an integrated concept for the E-ELT Multi-Object Spectrograph (MOSAIC): design issues and trade-offs
We present a discussion of the design issues and trade-offs that have been considered in putting together a new concept for MOSAIC, the multi-object spectrograph for the E-ELT. MOSAIC aims to address the combined science cases for E-ELT MOS that arose from the earlier studies of the multi-object and multi-adaptive optics instruments. MOSAIC combines the advantages of a highly-multiplexed instrument targeting single-point objects with one which has a more modest multiplex but can spatially resolve a source with high resolution (IFU). These will span across two wavebands: visible and near-infrared.

Figure 1. The upper panels illustrate the ray path in a telecentric telescope (left) and in a non-telecentric telescope like the E-ELT (right). In a non-telecentric field the light rays do not reach the focal plane perpendicularly but with an incident angle. The bottom panel gives a conceptual view of the tile concept. The focal plane of MOSAIC is covered with hexagonal tiles. Each tile is set to the mean focus and ray tilt over its patrol field.

drawbacks on non-telecentricity correction and nodding capabilities (see section x). The 4 observational modes aimed for MOSAIC will share this tiled focal plane. Figure 2 shows a conceptual design for the MOSAIC focal plane and the implementation of the 4 observational modes. The two highly multiplexed modes (HMM) will operate in seeing-limited or ground layer adaptive optics (GLAO) conditions with the following specifications:
• HMM-VIS: 200 sub-fields within a 3.75 arcmin radius field. Each sub-field consists of bundles of several microlenses + fibres.
• HMM-NIR: 100 sub-fields consisting of dual apertures for optimal sky subtraction.
The two integral field spectroscopy modes, HDM and IGM, will operate with the following specifications:
• HDM: High definition mode, operating with multi-object adaptive optics (MOAO) in the near-IR. A pick-off mirror in the focal plane directs light via an MOAO adaptive system (receiver) and fibre bundle to the spectrograph.
• IGM: Light-bucket IFS operating in seeing-limited conditions. A pick-off mirror redirects the light via a path compensator and fibre bundle to the spectrograph.

High multiplex mode trade-off: allocation efficiency, multiplex and scientific field
From the science cases, the optimal multiplex in HMM is 200 sub-fields within a field of 40 arcmin² [3]. From a technical view, the tiled focal plane ties the HMM multiplex to the instrument field of view and tile size. Therefore a trade-off has been performed between the instrument field, the size of the tiles, which sets the density of pick-offs, and the positioner patrol area. The aim is to maximise the instrument field and maintain the multiplex at 200 sub-fields in the visible, while keeping the allocation of science targets efficient and residuals from non-telecentricity correction low at the edges of the patrol field. In a first approach, we have assumed that each tile will host a single patrol-arm positioner. This simplifies the positioner/tile manufacturing and minimises the cost. In this case, the maximum multiplex in HMM modes is given by the number of tiles. The right panel of Figure 3 gives the number of tiles (or maximum multiplex) as a function of the field radius for three tile radii (10", 12.5", 15") in a 6 laser guide star configuration (left panel). A rough area-based estimate of this tile count is sketched below.
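As a rough illustration of how the tile size drives the maximum multiplex, the sketch below estimates the number of hexagonal tiles that fit in a circular field purely from the ratio of areas. It ignores edge effects, the central laser-guide-star geometry and any unusable area, so it only shows the scaling with tile radius; the 3.75 arcmin field radius and the three tile radii are taken from the text, everything else is an assumption.

import math

def hex_area(circumradius_arcsec):
    # Area of a regular hexagon with the given centre-to-corner radius, in arcsec^2.
    return 1.5 * math.sqrt(3.0) * circumradius_arcsec ** 2

def rough_tile_count(field_radius_arcmin, tile_radius_arcsec):
    # Crude estimate: field area divided by tile area, ignoring packing losses.
    field_area = math.pi * (field_radius_arcmin * 60.0) ** 2
    return field_area / hex_area(tile_radius_arcsec)

for r_tile in (10.0, 12.5, 15.0):   # tile radii quoted in the text, in arcsec
    n = rough_tile_count(3.75, r_tile)
    print(f"tile radius {r_tile:4.1f} arcsec  ->  ~{n:4.0f} tiles (upper bound on HMM multiplex)")

For 15 arcsec tiles (30 arcsec diameter) this crude estimate gives of order 270 tiles before packing and edge losses, which is compatible with the conclusion drawn in the next passage that 30 arcsec diameter tiles allow the 200 sub-field requirement to be met.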
To reach the top level requirement of 40 arcmin 2 instrument field and 200 multiplex in HMM-VIS, the option with 30" diameter tiles should be favour. The efficiency of target allocation has been tested for 3 HMM science cases and several combination of positioner density ( tile diameter) and patrol area. The HMM apertures were automatically allocated to the targets using a stable marriage algorithm. The success of the fiber allocation for each SC has been simulated in real fields, see the description of the input catalogues in table 2.2. Each combination of (tile diameter, patrol area) has been simulated 100 times. At each iteration, the center of the MOSAIC field has been randomly move inside the input catalogue field. The success rate is defined as the number of allocated fibers over the number of available targets in the field-of-view. SC1 and SC4/a are observational cases where the density of targets in the field of view is smaller than the density of HMM subfields. In these cases the configuration with a patrol area ranging the center of the adjacent tiles are optimal, with success rate close to 100% independently of the diameter of the tile (15" to 30"). The gain on target allocation to increase the patrol beyond the center of the next tiles is negligible. SC4/b is an observational case where the number of targets in the fov is larger than the multiplex. The success of the target allocation depends on the density of positioners. The SC3 corresponds to a case where the density of potential targets is close to the density of HMM subfields. In this case, the optimal configuration would be tiles of 15" diameter and with a patrol area ranging the center of the adjacent tiles. This configuration gives a success rate of 90% but need a large number of tiles/fiber, nearly 360. A configuration 30" diameter tiles and a patrol radius ranging the center of adjacent tiles still gives acceptable success rate about 70%. Finally, we have investigated the amount of defocus from non-telecentricity residuals as a function of the tile size. Each tile is oriented towards the centre of the pupil, and offset in order to compensate for the defocus. However, a residual defocus remains, which increases with the size of the tile. A study has been performed in order to evaluate this residual defocus in the case of 30" tiles with 60" patrol area. The calculation consists in considering a focal plate which radius of curvature is 37.2 m in order to be compliant with the E-ELT exit pupil position, ensuring self-aligned targets, fractioning this focal plate into tiles, and comparing the focus position of each point of the patrol area to the perfect focus, located on a 9.9 m sphere. The results are summarise in Figure 4. The defocus has no impact on the pupil conjugation (location and focus) onto the fibrer core, since the exit pupil of the E-ELT can be considered at infinity with respect to the microlens focal length. No resolution is required inside the HMM sub-field, thus having the field microlens out of focus is not an issue either. The only effect of defocus could be a loss of flux at the edge of the sub-field due to the enlargement of the seeing or GLAO corrected PSF. The residual focus curve (magenta) shows a maximum defocus for the most off-centre tiles of 5 mm that converts at F/17.48 into less than 0.3 mm PSF enlargement. Considering the E-ELT plate scale, this is less than 0.1" that could very likely be accommodated passively, removing the need for extra focus compensation on each tile. 
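The defocus-to-blur conversion at the end of the passage above can be checked with two lines of geometry: a longitudinal defocus dz in an F/N beam produces a blur of diameter roughly dz/N, and the plate scale follows from the telescope focal length. The sketch below redoes that arithmetic; the 39 m aperture is an assumed value for the E-ELT and is not quoted in this text.

# Check of the quoted defocus numbers: 5 mm residual defocus in an F/17.48 beam.
DEFOCUS_MM = 5.0        # worst-case residual defocus on the most off-centre tiles (from the text)
F_RATIO = 17.48         # focal ratio of the tiled focal plane (from the text)
D_TEL_M = 39.0          # E-ELT aperture, assumed (not stated in this text)
RAD_PER_ARCSEC = 1.0 / 206265.0

blur_mm = DEFOCUS_MM / F_RATIO                               # geometric blur diameter
focal_length_m = F_RATIO * D_TEL_M
plate_scale_mm_per_arcsec = RAD_PER_ARCSEC * focal_length_m * 1000.0
blur_arcsec = blur_mm / plate_scale_mm_per_arcsec

print(f"blur diameter : {blur_mm:.2f} mm      (text: less than 0.3 mm)")
print(f"plate scale   : {plate_scale_mm_per_arcsec:.2f} mm per arcsec")
print(f"blur on sky   : {blur_arcsec:.3f} arcsec (text: less than 0.1 arcsec)")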
Confirmation of this, however, is pending the AO analysis on seeing-limited and GLAO-based operation.

Technical field and AO performance
As part of the adaptive optics work package, a trade-off analysis has been started that includes: instrument field, required ensquared energy, implemented AO modes, number and magnitude of Natural Guide Stars, and number of LGS. A complete description of this trade-off analysis can be found in [5]. As part of this, a comprehensive set of baseline parameters for AO simulations has been formulated to ensure consistency and interoperability between the various simulator environments being used (at Durham, LESIA, ONERA and LAM). In addition, four main architecture trades were identified which will be prioritised due to their impact on the rest of the system, namely: the number of LGS to be used (0, 4 and 6 will be investigated), whether or not to use MEMS deformable mirrors, whether the laser guide stars should track the pupil or the sky, and an assessment of the adaptive optics performance with ground layer adaptive optics only.

SENSITIVITY REQUIREMENTS AND SKY SUBTRACTION
MOSAIC will observe extremely faint sources, up to J/HAB ∼ 30 mag in emission, and up to J/HAB ∼ 27 mag for continuum and absorption line features in the near-infrared window (e.g. the SC1 and SC3 science cases). The detection and spectroscopic follow-up of these faint sources will require an accurate and precise sky subtraction process. Accurate sky subtraction is particularly challenging in the near-infrared, where the sky signal is dominated by fluctuating bright sky lines from the airglow. The spectral features from faint sources will typically be observed between these bright OH sky lines. However, the near-infrared (NIR) sky continuum background is still hundreds to a thousand times brighter than the sources to be detected, about J/HAB ∼ 19 − 19.5 mag in dark sky conditions (Sullivan & Simcoe, 2012). For the future detection of such faint sources, the sky continuum in the NIR will need to be subtracted with accuracies at a level of a few tenths of a percent at least. This first analysis mainly focuses on the sky subtraction in the HMM mode. Contrary to HDM, the sky cannot be sampled in the immediate vicinity of the target in HMM. To achieve high accuracy sky subtraction in the HMM, the sky needs to be sampled at a distance from the science target smaller than the typical spatial scale of variation of the sky continuum. Yang et al. 2012 and Puech et al. 2012 have shown that the sky continuum background exhibits spatial variations over scales from ∼10 to ∼150", with total amplitudes below 0.5% of the mean sky background. At scales of ∼10", the amplitude of the variations is found to be ∼0.3-0.7%. Observationally, this small-scale fluctuation of the sky background implies that the sky should be sampled less than 5" from the object, and translates into the requirement of a strict upper limit on the minimal distance between fibres. Two observational strategies have been defined to achieve high precision sky subtraction with MOSAIC in HMM:
• Nodding. This sky subtraction strategy will be used in the visible for the science cases requiring accurate sky subtraction. The object and the sky are alternately observed by a sub-field following a sequence ABBA or ABAB, obtained by nodding either the telescope or the sub-field.
• Cross Beam Switching. This sky subtraction strategy will be used in the near-IR. The sky is sampled simultaneously at < 5" from each object by a sky sub-field.
Each science target has two sub-fields separated by less than 5", forming a dual aperture; see the NIR sub-field in Figure 2. The object is observed in both fibres following a sequence ABBA or ABAB, obtained by nodding either the telescope or the sub-field. During the consecutive A-B sequences, a given object is always observed by one of the fibre bundles of the pair alternately. This method has the advantage of being similar to nodding along a slit: it spends 100% of the time on the scientific targets and allows a very accurate instrumental response subtraction. This configuration implies dedicating half of the sub-fields to sampling the sky. There is no show-stopper to implementing these two strategies in the tile design. Compared to the previous design (pick-off positioner), the tile design increases the complexity of the target allocation software (preparation software) and of the positioning algorithm (configuration sequence of the local positioners). However, MOONS/VLT (phase B), which has a similar local positioner design and will use the same sky subtraction strategies, has successfully completed the first stage of development of such software.

SPECTROGRAPH PRELIMINARY DESIGN CONSTRAINTS
The plate scale of the E-ELT implies a new constraint for instruments: considering that working at a fast F ratio for camera optics is risky, this necessarily implies either more slicing at the field entrance level, or oversampling at the detector level, affecting the multiplex and the spectral coverage. Considering the 0.3" sampling of the visible HMM, an optimal sampling of 2 pixels could only be obtained with an extremely fast camera at around F/0.5, which is not feasible (a worked check of this number is sketched below). It is complicated further due to the baseline concept of sharing spectrographs - that is, utilising a single collimator, grating, camera and detector to serve more than one mode of operation. This concept is considered to be crucial in order to achieve a reasonable multiplex in all modes for a reasonable cost. For MOSAIC, the intention is to have a visible spectrograph serving the HMM-VIS and IGM modes and a near infra-red spectrograph serving the HMM-NIR and HDM modes. The NIR channel is a particular challenge due to the significant difference in sizes between the individual spaxels of the two modes (around 0.075" and 0.2"). Two possible approaches are currently being investigated, both of which aim to keep reasonable parameters for the spectrograph camera: 1. Keep a reasonable spectrograph camera F ratio between F/1.5 and F/2, and work with an oversampled PSF on the detector. The number of pixels per element could reach 6 to 7 pixels for HMM. This approach is detector-consuming but relaxes the constraints on the optics of the spectrograph. 2. Slice the fibre output into three with an image slicer. This has a negative impact on the multiplex per detector (decreased by 40%) but it allows proper sampling to be achieved with a reasonable camera. Further, it presents a great advantage especially for the near infra-red spectrograph: the slit widths of HDM and HMM would be roughly equal, greatly simplifying the overhead required in sharing the spectrographs between the two modes. The effective slit is three times smaller than in approach (1), making a classical optical design for the camera with a reasonable F ratio possible. The slicing feasibility is under investigation.

Pupil Shear
The allowable offset of the pupil on a fibre end, the pupil shear, is a critical parameter.
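As a cross-check of the "around F/0.5" camera focal ratio quoted in the spectrograph design constraints above, the sketch below works through the plate-scale arithmetic for Nyquist-sampling a 0.3 arcsec spaxel with 2 pixels. The F/17.48 input focal ratio is taken from the tiled-focal-plane discussion earlier and the 15 micron pixel from the detector specifications quoted later; the 39 m aperture is an assumption.

# Implied camera focal ratio for 2-pixel sampling of a 0.3" spaxel.
D_TEL_M = 39.0            # telescope diameter [m] (assumed)
F_IN = 17.48              # focal ratio delivered to the fibres (from the text)
PIXEL_UM = 15.0           # detector pixel size [micron] (from the detector specification)
SPAXEL_ARCSEC = 0.3       # HMM-VIS spaxel size (from the text)
SAMPLING_PIX = 2.0        # pixels per spaxel for Nyquist sampling
RAD_PER_ARCSEC = 1.0 / 206265.0

focal_length_m = F_IN * D_TEL_M                              # ~682 m
spaxel_at_focus_um = SPAXEL_ARCSEC * RAD_PER_ARCSEC * focal_length_m * 1e6
spaxel_on_detector_um = SAMPLING_PIX * PIXEL_UM
demagnification = spaxel_at_focus_um / spaxel_on_detector_um
f_camera = F_IN / demagnification                            # etendue conservation, ignoring FRD

print(f"spaxel size at telescope focus : {spaxel_at_focus_um:7.0f} micron")
print(f"required demagnification       : {demagnification:7.1f}")
print(f"implied camera focal ratio     : F/{f_camera:.2f}")
# -> roughly F/0.5, consistent with the statement in the text.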
It is not feasible to inject a pupil that is exactly the same size as the fibre core, as various manufacturing and alignment tolerances, plus telescope stability, will lead to offsets of the pupil from the centre. This can cause variable clipping that will render any flat-field calibration useless. Instead, the pupil must be over or under sized with respect to the fibre core such that the transmission is constant irrespective of the pupil motion, and thus the calibration is stable. A budget for the allowable offset of the pupil, taking into account all known sources of pupil offset at present, has been developed. With a substantial technical immaturity margin applied, an over (or under) sizing of the pupil by 20% will be targeted. Whether an over-or under-size will be used is still to be determined, and will depend on the required throughput (which is less for an oversize pupil), the size of the spectrograph optics (which increase for an undersized pupil), and the manufacturability of the fibre (which decreases for an oversized pupil). DETECTORS: THE CHALLENGE OF OH-LINE SATURATION The saturation of strong sky lines are a well known limitation in near infrared observations. This is especially critical in bands above 1.4µm, where sky lines are almost 10 times more intense than in Y and J-band. The presence of these strong sky lines have limited typical DIT in the near-IR to 900s on 10m class telescopes. Because of the large on-sky pixel size, MOSAIC observations will be particularly affected by the quick saturation of sky lines. We have investigated the impact of strong sky lines on maximum exposure times possible in MOSAIC and on science operations. Sky line saturation and the RoN limited regime We have calculated boundaries for optimal exposure times in the HMM in several bandwidth: a minimal DIT, under which observations are RoN dominated; a maximal DIT, over which the strongest sky lines in the bandwidth start to saturate. We have also estimated an optimal DIT: (1) In the visible, the optimal DIT was set as the exposure time for which the RoN noise account for 25% of the variance of the sky background; (2) In the near IR, the optimal DIT was set to 900s, which is the time scale of variation of the sky. This analysis is based on sky spectra simulations using current MOSAIC baseline. In this analysis, we have supposed that the MOSAIC design is optimised for the low spectral resolution mode. Table 1 gives the current specification of MOSAIC spectrographs and detectors for the HMM mode. We have assumed that the two spectrographs use Volume Phase Holographic gratings. A E2V CCD231-84 15µm CCD was assumed for the visible spectrograph and a Teledyn H4RG 15µm for the near-IR spectrograph. The E2V CCD231-84 15µm specification were gathered from the specification of the MUSE/VLT detector (ref). The Teledyn H4RG 15µm specification were assumed to be similar to those of KMOS detector H2RG 18µm [ref] Minimal DIT Observations should preferentially be carried out in background-limited regime, in which the noise budget is dominated by the Poisson noise of the sky continuum (interlines). In the case of the HMM mode, the noise budget has been calculated within a spaxel (fiber). Figure 6 gives for a spaxel (fiber) the total RMS noise (N spaxel ) as a function of the sky continuum variance (N spaxel bg ), for the HMM mode R=5000 in r-band (left panel) and HMM mode R=5000 in H-band (right panel). 
where I^pixel_bg is the mean photon count on the sky continuum over the observed bandwidth, RoN is the read-out noise, dark is the dark current, t_exp is the exposure time, and n²_pixel is the number of pixels over which a spaxel is imaged on the detector. Observations are RoN-dominated when n²_pixel RoN² > N^spaxel_bg. The minimum DIT for the background-limited regime is thus given by

DIT_min = n²_pixel RoN² / C^spaxel_bg,

where C^pixel_bg is the count rate (ph/s) in the sky continuum per pixel and C^spaxel_bg = n²_pixel C^pixel_bg is the corresponding rate per spaxel.

Maximal DIT
The maximal DIT was calculated from the counts/s of the strongest sky line (C^pixel_skyline) in a single pixel and the saturation limit of the detector. Because the intensity of OH sky lines fluctuates by more than 20%, the maximum DIT has been calculated assuming a margin of 2/3 of the detector saturation threshold. The maximal DIT is given by:

DIT_max = (2/3) Saturation / C^pixel_skyline.      (4)

Table 7 gives the maximum and minimum DITs for the two resolution settings of the HMM mode, in r-, J- and H-bands. The saturation of strong emission lines is particularly problematic for J- and H-band observations, for which sky lines start to saturate before background-limited observations can be reached. The high spectral resolution mode is more affected because of its lower spectral and spatial sampling. The saturation of bright sky lines also affects the HDM mode in a similar way. The quick saturation of sky lines leads to either read-noise-limited performance, or a very short integration time, and thus to poor observational efficiency. In the RoN-limited regime, the penalty in terms of signal-to-noise ratio scales with the square root of the number of exposures.

Figure 6. Signal-to-Noise regimes in two modes: HMM mode R=5000 in r-band (left) and HMM mode R=15000 in J-band (right). The violet area indicates the regions where the noise is RoN-limited. The red area indicates the region where sky lines are saturated. In the HMM R=5000 in r-band, a background-limited regime can be reached with exposure times above DITmin = 144 s. The saturation of sky lines starts to be problematic only at very high DITs (DITmax > 12500 s).

Results
In the HMM R=5000 in H-band, the background-limited regime cannot be reached with the actual detector and spectrograph configuration. Sky lines are saturated below the minimum DIT required to work in a background-limited regime. The cells in red correspond to modes that are RoN-limited. The last line gives the fraction of pixels saturated assuming optimal-DIT observations for the other modes and photometric bands. The fraction of saturated pixels was computed for a full photometric band, and does not take into account the real bandwidth of each mode.

Impact on spectrograph design and detector
Different combinations of subfield and spectrograph properties (e.g. spaxel diameter, spectral sampling) have been investigated. It results from this analysis that the issue of sky line saturation vs RoN-limited observation cannot be resolved by changes to the spectrograph design. During phase A, solutions will be investigated such as OH-suppressor systems [6] and skyline masks. The most promising solution is a read mode of the CMOS detector which resets the pixels in specific windows while integrating [7]. This read mode would permit integration during long exposure times (to reach background-limited observations), while resetting the saturated pixels several times during the exposure.
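To make the DIT bookkeeping above concrete, the sketch below evaluates the two limits just defined: the background-limited threshold (sky-continuum Poisson variance over a spaxel exceeding the summed read-noise variance) and the saturation limit with the 2/3 margin on the brightest OH line. All numerical inputs are placeholder values, not the figures from the paper's tables.

def dit_limits(ron_e=3.0, n_pixel=3, c_bg_pixel=0.05,
               c_skyline_pixel=40.0, saturation_e=80000.0):
    """Return (DIT_min, DIT_max) in seconds.

    ron_e            read-out noise per pixel [e-]
    n_pixel          a spaxel is imaged onto n_pixel x n_pixel detector pixels
    c_bg_pixel       sky-continuum count rate per pixel [e-/s]
    c_skyline_pixel  count rate of the brightest OH line in one pixel [e-/s]
    saturation_e     detector saturation level [e-]
    All default values are illustrative placeholders only.
    """
    n2 = n_pixel ** 2
    c_bg_spaxel = n2 * c_bg_pixel
    dit_min = n2 * ron_e ** 2 / c_bg_spaxel                   # background-limited threshold
    dit_max = (2.0 / 3.0) * saturation_e / c_skyline_pixel    # 2/3 margin on saturation
    return dit_min, dit_max

dit_min, dit_max = dit_limits()
print(f"DIT_min ~ {dit_min:6.0f} s (below this, read-noise dominated)")
print(f"DIT_max ~ {dit_max:6.0f} s (above this, the brightest OH line saturates)")
if dit_max < dit_min:
    print("No usable DIT window: sky lines saturate before the background-limited regime.")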
The fraction of saturated pixels that would need to be removed in order to reach the optimal DIT in each band (t_exp = DIT_opt = 900 s) is up to 7% along the spectral direction (per line in the detector). To minimise the number of rectangular windows, the spectrograph concept should minimise the distortion of the spectral traces.
2016-09-21T15:58:36.000Z
2016-08-09T00:00:00.000
{ "year": 2016, "sha1": "b891afc0ccd108df14d4e2a13fd0e8811412f54c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1609.06610", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cf269f2db26abcfa6d4fc24bcef6e9f074d8f28c", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Engineering", "Computer Science" ] }
263160976
pes2o/s2orc
v3-fos-license
The value of Middle cerebral and umbilical arteries Doppler indices in pregestational diabetic versus normal pregnancies in prediction of adverse neonatal outcome
Objectives: To study the impact of pregestational DM on fetal middle cerebral (MCA) and umbilical arteries (UA) Doppler indices and hence to evaluate their diagnostic performance as predictors of adverse neonatal outcome. Methods: The study included 2 equal groups of 60 patients each, thus making up a total of 120 patients; the control group consisted of healthy pregnant women and the study group included pregnant patients known to have pregestational diabetes. The study group was further subdivided into two equal subgroups of 30 patients each. This subdivision was based on HbA1C levels, namely controlled and uncontrolled diabetics. UA and MCA Doppler indices (resistance index and pulsatility index) and the cerebroplacental Doppler ratio were measured for each patient. Neonatal outcome was assessed and recorded following delivery. The following parameters were assessed: neonatal blood sugar, 1 min and 5 min Apgar score and admission to the neonatal intensive care unit. Results: The sensitivity and specificity of umbilical artery Doppler in the prediction of adverse neonatal outcomes among diabetic patients were 25% and 88.89%, respectively; those of middle cerebral artery Doppler were 20.83% and 91.67%, respectively. The resistance index and pulsatility index of the MCA and UA showed no significant correlation with the neonatal outcomes (Pearson's r ranged from -0.07 to 0.13, p > 0.05). Conclusion: maternal DM is not associated with altered Doppler indices of the placental or fetal circulation. In addition, both UA and MCA Doppler had low sensitivity in the prediction of adverse neonatal outcome.

Introduction
The prevalence of Diabetes mellitus (DM) in the pregnant population in the United States encompasses both pregestational DM (PGDM) and gestational DM (GDM) (1). Pregestational DM poses an increased risk for the mother, the fetus and the neonate (2). Congenital malformations (e.g., cardiac or musculo-skeletal) occur more frequently in pregestational diabetic women, and approximately 50% of such women deliver macrosomic babies with a consequent risk of birth-related trauma, and of the development of type 2 diabetes mellitus, metabolic syndrome, and vascular and cardiac diseases later in life (3 & 4). On the other hand, long-standing preexisting DM before the current pregnancy poses a higher risk of vasculopathy involving the uterine arteries, thus resulting in abnormal development of the uteroplacental circulation and restricted fetal growth (5). Those with suboptimal glycemic control have a higher risk of developing such complications (6). The pregnancy outcome in this population could be improved by approaching different targets. Health education should be delivered to all patients, ensuring that an adequate balanced diet is followed, that drug treatment is adhered to, and that the role of optimum glycemic control on pregnancy outcome is understood (7). Fetal surveillance in high-risk pregnancies has been a matter of concern, with the aim of achieving optimum pregnancy outcomes in such a group of patients. However, till the present date, no method of fetal surveillance was proved to be superior (8). Doppler velocimetry was introduced as an important fetal well-being test, assessing blood flow in fetal arteries and veins, such as the umbilical and middle cerebral arteries. However, the results obtained regarding the use of Doppler studies in pregnancies complicated by diabetes were conflicting and, regarding their clinical application, few studies have investigated their role as effective tools in ameliorating perinatal outcomes among such a population of pregnant mothers (1).
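The sensitivity and specificity figures quoted in the abstract are the standard 2x2 contingency-table quantities. The short sketch below shows how such values are computed; the true/false positive and negative counts of the study are not reported in this excerpt, so the numbers used here are generic placeholders, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Placeholder counts only; the paper's contingency tables are not reproduced here.
sens, spec = sensitivity_specificity(tp=30, fn=10, tn=45, fp=15)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")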
Shabani Zanjani and his colleagues (9) studied the Doppler indices of fetal brain hemodynamics among pregnant women with GDM compared to healthy ones and reported that the pulsatility index (PI) of the middle cerebral artery (MCA) was increased in the diabetic group compared to the healthy group. On the other hand, Niromanesh and his colleagues (10) compared the umbilical artery (UA) and MCA Doppler indices as fetal well-being tests in diabetic pregnant women (whether gestational or pregestational) and claimed that UA Doppler indices were better than MCA indices in the prediction of adverse neonatal outcomes, but both had low sensitivity. Hence, we performed this study to evaluate the effect of pregestational diabetes on fetal middle cerebral and umbilical artery Doppler indices and to evaluate their validity as predictors of poor neonatal outcome in pregnancies complicated by diabetes.

Methodology
The current study is a cross-sectional one conducted in the Obstetrics & Gynecology department (Kasr El-Aini Hospital - Faculty of Medicine - Cairo University) in the period from July 2019 to October 2020. One hundred and twenty pregnant women (aged from eighteen to forty years old), each with a living healthy singleton fetus between 34 and 37 weeks of gestation (dated by the LMP or a 1st trimester ultrasound scan), were recruited and divided into two equal groups of 60 each: a control group that included healthy pregnant women and a study group that included pregestational diabetic pregnant patients. The latter group was further subdivided into two subgroups according to HbA1C levels, namely controlled diabetics (30 diabetic pregnant women with HbA1C less than 6.5%) and uncontrolled diabetics (30 diabetic pregnant women with HbA1C equal to, or more than, 6.5%). The study was approved by the Hospital Ethical Committee and was registered at the ClinicalTrial.gov registry (registry number: NCT03915990). Diabetic women with either complicated diabetes or any other concomitant chronic disorder (e.g., hypertension or renal disease) were excluded. Patients with growth-restricted (EFW less than the 10th percentile for the corresponding gestational age) or malformed fetuses were excluded. Patients with any other superimposed medical disorders or rupture of membranes in the current pregnancy were also omitted.
Informed consent was obtained from all participants (after explaining the aim of the study and discussing the potential hazards), then all candidates who met the eligibility criteria were subjected to the following: full history taking, thorough physical examination (including maternal body weight and the 1st day of the LMP), followed by an obstetric ultrasound scan to confirm the eligibility of the current pregnancy to participate in the study (excluding fetal anomalies or oligohydramnios) and to assess the estimated fetal weight and amniotic fluid index (to detect the presence of macrosomia or polyhydramnios, defined as EFW above the 90th percentile for gestational age and AFI more than the 95th percentile, respectively). Laboratory investigations (complete blood picture, fasting and post-prandial blood sugar, liver & kidney functions and HbA1C estimation) were also done. Doppler ultrasonography assessment was done using a Samsung SonoAce R3 machine with an abdominal convex transducer (3.5 MHz) equipped with color and pulsed Doppler capabilities (SonoAce R3, SAMSUNG MEDISON CO., Gangnam-gu, Seoul, Korea). As regards umbilical artery (UA) Doppler, participants were examined in a semi-recumbent position with a left lateral tilt. The uterine content was scanned and an area of the amniotic cavity with many free loops of cord was selected. The characteristic arterial waveform was identified by its sound and appearance using pulsed wave Doppler applied on a free loop of cord. The image was frozen when at least 3 consecutive waves of similar height appeared on the screen, and the umbilical artery resistance index (RI) and pulsatility index (PI) were obtained after a minimum of 3 separate readings were averaged. Umbilical artery Doppler evaluations were performed during fetal apnea (to nullify the effect of fetal breathing movements on waveform variability) and avoided during fetal activity (11). Abnormal UA Doppler velocimetry was considered when the UA indices exceeded the 95th centile for the corresponding gestational age. As for the evaluation of the middle cerebral artery (MCA) Doppler indices, the fetal brain was scanned at the level of the biparietal diameter and a transverse view was obtained, then the probe was advanced towards the base of the skull till the level of the lesser wing of the sphenoid bone. The middle cerebral artery was identified as the branch of the circle of Willis that runs anterolaterally at the margin between the anterior and the middle cerebral fossae, and the waveforms were obtained by placing the pulsed Doppler sample gate on the middle portion of the artery. The image was frozen when at least 3 consecutive waves of similar height appeared on the screen, and the MCA RI and PI were obtained after a minimum of 3 separate readings were averaged. As fetal head compression may alter intracranial arterial waveforms, no or minimal pressure should be applied to the maternal abdomen during the scan (12). Abnormal MCA Doppler velocimetry was considered when the MCA indices were below the 5th centile for the corresponding gestational age. All Doppler evaluations were done by the same sonographer (Rasha El-komy).

The following data were recorded: gestational age at Doppler study and at termination, presence of macrosomia or polyhydramnios, Doppler indices for the UA & MCA, mode of delivery and neonatal outcomes (i.e., birth weight, 1- & 5-minute Apgar scores, blood sugar at birth, admission to the neonatal intensive care unit). Abnormal perinatal outcomes were considered in the presence of any of the following four events: 1-minute Apgar score below 7, 5-minute Apgar score below 7, neonatal blood sugar less than 50 mg/dl (neonatal hypoglycemia) and neonatal intensive care unit (NICU) admission. Patients with at least one adverse neonatal event were categorized in the abnormal neonatal outcome group.
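For readers less familiar with the Doppler quantities used above, the sketch below shows how RI and PI are conventionally computed from the waveform velocities and how an index can be averaged over at least three readings, as in the protocol described. The formulas are the standard textbook definitions rather than expressions stated in this paper, and all function names and numbers are illustrative.

```python
# Illustrative only: standard RI/PI definitions (not spelled out in the paper);
# function names and example values are hypothetical.

def resistance_index(psv: float, edv: float) -> float:
    """RI = (peak systolic velocity - end diastolic velocity) / peak systolic velocity."""
    return (psv - edv) / psv

def pulsatility_index(psv: float, edv: float, tamx: float) -> float:
    """PI = (peak systolic velocity - end diastolic velocity) / time-averaged maximum velocity."""
    return (psv - edv) / tamx

def averaged_index(readings) -> float:
    """Average an index over a minimum of 3 separate readings, as in the protocol above."""
    assert len(readings) >= 3
    return sum(readings) / len(readings)

# Example: three hypothetical umbilical artery readings (velocities in cm/s)
ua_ri = averaged_index([resistance_index(45.0, 18.0),
                        resistance_index(47.0, 19.0),
                        resistance_index(44.0, 17.5)])
print(round(ua_ri, 2))
```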
The primary outcome measured was the difference in the Doppler index values (RI & PI) for the umbilical and middle cerebral arteries between the control (non-diabetic) group and the study group (diabetics, whether controlled or uncontrolled). The diagnostic accuracies of the umbilical artery and middle cerebral artery Doppler indices (RI & PI) as predictors of adverse neonatal outcomes among diabetic women were assessed as secondary outcomes.

The sample size was calculated according to Shabani Zanjani and his colleagues (9) using the pulsatility index (PI) of the left MCA Doppler. The mean PI of the first group (gestational diabetic cases) was 2.07 and that of the second group (non-diabetic cases) was 1.85. The standard deviation (SD) used for calculation was 0.40. The ratio of enrollment for the study to control was 1:1. The power was set at 0.8 and the alpha error at 0.05. This gave us a sample of 52 patients in each group; we raised the sample size by 15% to avoid dropouts, thus giving us 60 cases in each study arm. The sample size was calculated using the Sample Size Calculator at ClinCalc.com, last accessed on 2/4/2017. Data were coded and entered using the statistical package SPSS version 25. Data were summarized using mean, standard deviation, median, minimum and maximum for quantitative variables, and frequencies (number of cases) and relative frequencies (percentages) for categorical variables. Comparisons between groups were done using the unpaired t test for normally distributed quantitative variables, while the non-parametric Mann-Whitney test was used for non-normally distributed quantitative variables (13). For comparing categorical data, the Chi-square (χ2) test was performed; the exact test was used instead when the expected frequency was less than 5 (14). Correlations between quantitative variables were assessed (15). Logistic regression was done to detect independent predictors of cases (16). P-values less than 0.05 were considered statistically significant.

Results
This prospective study included one hundred and twenty patients who met the inclusion criteria. The flow of patients is demonstrated in Figure (5), and the detailed results are presented in the accompanying tables.

Logistic regression to detect independent predictors of cases
We performed a multivariate logistic regression to identify factors associated with cases compared to the control group. We found that only PPBS was more likely to be associated with the study group (odds ratio 1.66, p = 0.018) (table 8).

Discussion
Diabetes is a multisystem chronic disease that requires vigilant medical care and implementation of different strategies for risk reduction beside optimum glycemic control. Patient support and health education form the cornerstone of management of those cases in order to prevent acute complications and reduce the risk of long-term complications. Several strategies have been developed in order to improve the outcome of diabetes (17). The care of women with pregestational diabetes should ideally be delivered by a multidisciplinary team in a multidisciplinary setting that consists of an endocrinologist, a maternal-fetal medicine specialist, a dietitian and a diabetes educator, when available (17). Fetal surveillance is an important tool in the care of such pregnancies complicated with PGDM or GDM. Doppler velocimetry is one of the most important methods of antenatal surveillance. In the present study, we examined whether UA and MCA Doppler measurements could help to sort out fetuses at risk of jeopardized outcomes in case of maternal DM.
Our results showed that fetal and neonatal risks were higher with pregnancies complicated by pregestational diabetes in comparison to their healthy counterparts. In another study, Niromanesh and his colleagues (20) compared the performance of the non-stress test (NST) with that of umbilical artery (UA) Doppler assessments in the prediction of adverse perinatal outcomes in 50 pregnant women with GDM. In total, 22% and 12% of women had an abnormal UA Doppler and a non-reactive NST, respectively; 13 women had poor outcomes. Women with a non-reactive NST (p = 0.033) had a higher prevalence of poor neonatal outcome. The sensitivity and specificity of the NST in predicting poor outcomes were 76.9% and 97.3%, respectively, whereas those of UA Doppler in predicting the different poor outcomes were 30.8% and 94.6%, respectively. Accordingly, they concluded that the NST was far superior to UA Doppler in the prediction of adverse perinatal outcomes in patients with GDM.

Moreover, Yalti and his colleagues (21) stated that umbilical artery velocimetry is an assessment tool for placental function only and is not a direct test of fetal well-being; in their study, the sensitivity and positive predictive values of umbilical artery Doppler indices alone were 30 and 50 per cent, respectively.

To the contrary, Shabani Zanjani and his colleagues (9) studied the effects of GDM on Doppler parameters (fetal MCA and UA) in comparison to normal pregnancies. The study was performed on 66 pregnant women, including 33 women with GDM and 33 healthy pregnant patients. Peak systolic and diastolic velocities, PI, RI and the systolic/diastolic ratio (S/D) were recorded in the UA as well as in both the right and left fetal MCAs for every recruited pregnant woman by means of Doppler ultrasonography. The mean gestational age at the time of examination was 34.45 weeks in the GDM group. Although the study group had higher Doppler index values compared to their healthy counterparts, the difference reached statistical significance only for the PI of the fetal MCA in the GDM group, for which they concluded that gestational diabetes may contribute to an elevated PI in the fetal MCA. However, the small sample size, with the consequent low statistical power, and the lack of access to follow-up data were two major limiting factors of this study.

To the best of our knowledge, the current study is among the first to evaluate middle cerebral and umbilical artery Doppler indices in pregestational diabetic versus normal pregnancies. Most of the former studies focused more on the blood flow indices in patients with gestational DM (GDM). Furthermore, we evaluated whether glycemic control had any impact on the Doppler indices. We also excluded patients with other concomitant medical disorders to avoid the effect of other confounding variables on the Doppler indices. The main limitation of our study was the relatively small sample size, which led to low statistical power in the comparisons between the groups.

Although the rate of poor neonatal outcomes in our sample was about 40%, the outcomes were not associated with abnormal Doppler test results of the fetal circulation. This may be because the adverse outcomes were not severe enough to affect the fetal circulation. Further studies and systematic reviews are warranted to reach a precise answer about the best surveillance test for fetal evaluation among diabetic mothers.

In conclusion, maternal DM is not associated with abnormal changes in the Doppler indices of placental or fetal circulation (irrespective of glycemic control). In addition, both UA and MCA assessments had low sensitivity in the prediction of adverse neonatal outcome.

Declarations
Ethics approval and consent to participate: Kasr Alainy ethical committee approval.
Consent for publication: All participants gave their consent for publication.
Informed consent: Informed consent was obtained from all individual participants included in the study.
Figure (5): Flow of patients through the study.
Table (4): Patients' characteristics and laboratory parameters, pregnancy characteristics and neonatal outcome in the controlled and uncontrolled diabetics groups.
Table: Accuracy of umbilical artery & middle cerebral artery Doppler in the prediction of adverse neonatal outcomes among diabetic patients.
2023-09-28T15:18:14.799Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "3d1219f1655c86e4ed619ec3dc9a77ff7916e9c0", "oa_license": null, "oa_url": "https://egyfs.journals.ekb.eg/article_317681_d1bf0f14425ca52f097df86ec842e7ca.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d72d4d9574d4b0dc227a3dee37c1084c8b6a1342", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
195291107
pes2o/s2orc
v3-fos-license
Demonstrating developments in high-fidelity analytical radiation force modelling methods for spacecraft with a new model for GPS IIR/IIR-M This paper presents recently developed strategies for high-fidelity, analytical radiation force modelling for spacecraft. The performance of these modelling strategies is assessed using a new model for the Global Positioning System Block IIR and IIR-M spacecraft. The statistics of various orbit model parameters in a full orbit estimation process that uses tracking data from 100 stations are examined. Over the full year of 2016, considering all Block IIR and IIR-M satellites on orbit, introducing University College London’s grid-based model into the orbit determination process reduces mean 3-d orbit overlap values by 9% and the noise about the mean orbit overlap value by 4%, when comparing against orbits estimated using a simpler box-wing model of the spacecraft. Comparing with orbits produced using the extended Empirical CODE Orbit Model, we see decreases of 4% and 3% in the mean and the noise about the mean of the 3-d orbit overlap statistics, respectively. In orbit predictions over 14-day intervals, over the first day, we see smaller root-mean-square errors in the along-track and cross-track directions, but slightly larger errors in the radial direction. Over the 14th day, we see smaller errors in the radial and cross-track directions, but slightly larger errors in the along-track direction. Since the 1980s, various methods for dealing with the problem have been presented in the literature (Colombo 1986;Beutler et al. 1994;Fliegel and Gallini 1996;Springer et al. 1999;Bar-Sever and Kuang 2003;Arnold et al. 2015). Many of these are empirical methods, requiring no a priori knowledge of the spacecraft properties or its operating environment. In global network analyses that incorporate tracking measurements from a large network of one hundred stations or more, such methods can produce spacecraft orbits with cm-level accuracy (Sośnica et al. 2015). However, in a purely empirical approach, the orbit model parameters can absorb the effects of other un-modelled or mis-modelled processes (e.g. Earth rotation, geocentre variation (Meindl et al. 2013), etc.). This can result in orbit model parameter estimates that are non-physical, which means they cannot improve our understanding of the physical processes that determine the trajectory of the satellites and are therefore limited in their ability to help improve the modelling of those processes. As a result, a number of groups introduced analytical, or physics-based, radiation force modelling into their orbit estimation processes. In this area, the box-wing (BW) approach, first introduced by Marshall and Luthcke (1994) for application to POD of the TOPEX/Poseidon mission, has been particularly influential. The general concept is to model the spacecraft structure using eight flat plates (six for a cuboid representing the spacecraft bus and two for solar panels), with assumed values for the optical and thermal properties of the surfaces, which are then combined with a priori modelling of the spacecraft attitude and the incident radiation fluxes. This approach was applied to the Block II/IIA and Block IIR satellites of the Global Positioning System (GPS) by Rodriguez-Solano et al. (2012). In comparing the performance of their semi-analytical adjustable box-wing model with the Centre for Orbit Determination (CODE) Empirical Orbit Model (ECOM; Beutler et al. 
1994), the authors determined that the orbit solutions produced by the two methods were comparable, but that the accelerations produced by the ECOM model were less physically meaningful. More recent efforts that adopt a broadly similar modelling approach include Montenbruck et al. (2015) for Galileo satellites and Montenbruck et al. (2017a) for the QZS-1 satellite of Quasi-Zenith Satellite System (QZSS). The box-wing models are relatively easy to implement. However, they are not able to fully capture the radiation flux-spacecraft surface interaction in satellites with complex surface geometry where the effects of mutual self-shadowing and reflected radiation can be significant. An alternative class of analytical radiation force modelling methods, in which ray-tracing techniques are combined with detailed spacecraft surface models to account for SRP, Earth radiation pressure (ERP) and thermal re-radiation (TRR), was developed in the early 1990s (Klinkrad et al. 1991), and these methods are able to capture these detailed effects. The models were tested in POD of the European remote sensing satellites ERS-2 and ENVISAT (Doornbos et al. 2002). To distinguish them from the box-wing methods, we refer to these as high-fidelity analytical radiation force modelling methods. In GNSS, high-fidelity SRP modelling for GLONASS satellites was first explored by Ziebart and Dare (2001). This work was motivated by broader efforts to improve GLONASS orbit quality as part of the IGEX-98 campaign (Willis et al. 1999). Work in this area continued over the years at University College London (UCL), where the approach was enhanced with methods to account for TRR (Adhya 2005), ERP (Sibthorpe 2006;Ziebart et al. 2007;Li et al. 2017) and antenna thrust (AT; Ziebart et al. 2007), and validated on a number of additional cases including the GPS Block IIA and Block IIR satellites, the Jason-1 spacecraft of the Ocean Surface Topography Mission and ENVISAT (Ziebart et al. 2005;Sibthorpe 2006). Recent work in this area demonstrated improved accuracy in shadow modelling when using geometric primitives, as opposed to triangular tessellations, to represent curved surfaces when constructing the spacecraft model (Grey and Ziebart 2014). The modelling approach, as presented in Ziebart et al. (2005), was adopted into the operational standards for precise orbit determination of the Jason-1 altimetry satellite (Cerri et al. 2010;Zelensky et al. 2010). Recently, other research groups have explored a broadly similar approach for modelling SRP on Beidou satellites (Tan et al. 2016;Wang et al. 2018), on the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite (Gini 2014) and on QZS-1 (Darugna et al. 2018). In this paper, using a new model for the GPS Block IIR and IIR-M satellites, we present developments to the UCL model computation strategies that are designed to extend the validity of the models to all possible orientations of the radiation source(s) with respect to the spacecraft. As such, these nextgeneration general-purpose models account for the effect of radiation forcing from any number of radiation sources, from any direction, providing a high-fidelity radiation flux-spacecraft interaction model that can be used to deal with both SRP and ERP. A key advantage of this approach is that the final model makes no prior assumptions about the attitude characteristics of the spacecraft and can therefore deal with any deviations from nominal attitude. 
UCL modelling strategy Our radiation force modelling strategy comprises three processes: (i) Computation of the bus model, where the space vehicle bus contribution to the accelerations due to SRP and ERP is dealt with. In this process, the accelerations due to thermal emissions from the multi-layered insulation (MLI) covering the bus are also computed, according to Sect. 4.3.3 of Adhya (2005). The core technique uses a ray-tracing algorithm, where the rays simulate the incident radiation flux for a given geometry of the spacecraft with respect to the radiation source. The output of this process is a set of three grids representing the accelerations in the X, Y and Z-axes of the spacecraft body-fixed system (BFS), where the grid nodes are spaced at 1°intervals in latitude and longitude in the BFS. (ii) Separate computation of the solar panel model, where the solar panel contributions to the accelerations due to SRP and ERP are dealt with. (iii) AT modelling, which accounts for the recoil force on the spacecraft due to emission of photons from signal transmitters. As input, the approach requires a computer model of the spacecraft that holds information about the external geometry and various surface material properties including reflectivity, specularity, absorptivity and emissivity. The models are built from a combination of geometric primitives (polygons, circles, cylinders, spheres, cones and truncated cones), avoiding any need for tessellation, especially on curved surfaces (Ziebart et al. 2003). This produces models with good geometric fidelity without requiring an excessively large number of components, e.g. the UCL model for the GPS IIR/IIR-M bus, as shown in Fig. 6, is made up of 182 components. The solar panels (not shown in Fig. 6) are modelled as two rectangular plates. For solar flux, the models are computed using a nominal value for the mean solar irradiance at one astronomical unit (AU) of 1368 Wm −2 (Hastings and Garrett 1996). The solar irradiance is known to vary over the solar cycle (with a period of between 9 and 14 years) by 1.4 Wm −2 . This represents circa 0.1% variation in the parameter. Little is gained by correcting the nominal value. It is more important to scale the model depending on the probe-Sun distance at the calculation epoch. Taking 1368 Wm −2 as a reference value, the eccentricity of the Earth's orbit about the Sun modulates the solar irradiance near the Earth to 1415.7 Wm −2 at perihelion (+3.4%) and 1322.6 Wm −2 at aphelion (−3.3%). This gives a variation (between perihelion and aphelion) of circa 100 Wm −2 (the precise value being 91.3 Wm −2 ), approximately 6.7% of the mean value. For the Earth radiation flux model, we use data from the Clouds and Earth Radiant Energy System (CERES) (Wielicki et al. 1996) project, which provides the irradiance at the top-of-atmosphere (TOA), an altitude of~30 km above the Earth's surface, in a grid format spaced at 1-degree intervals in latitude and longitude in an Earth-centred Earth-fixed (ECEF) system. Computationally, it can be expensive and slow to determine the total Earth radiation flux incident on a spacecraft, from the part of the Earth's surface that is visible to that spacecraft, based on a search of the full CERES grid. To overcome this, we have developed a configurable Earth radiation model that re-organises the CERES data into a grid of triangles wrapped around the TOA surface. 
The number of triangles used to represent the TOA surface is configured during run-time, based on the number of triangles required to achieve a specified precision level. This approach is outlined in Li et al. (2017). At GNSS altitudes, radiation flux from the Earth is about 15 Wm−2. In the ray-tracing algorithm, a pixel array (simulating the radiation source) is projected onto the computer simulation of the spacecraft, with the force at each ray-surface intersection computed according to Eqs. 1-4, which give the normal force, the shear force and the MLI thermal re-radiation force in terms of the following quantities:
• F_n is the normal force acting in the direction of the surface normal, n̂,
• F_s is the shear force acting in the ŝ direction, which is along the projection of the total force onto the surface plane,
• F_mli is the force due to the thermal re-radiation from the MLI on the bus surface, which also acts along the normal direction,
• E is the mean irradiance of the radiation source at one astronomical unit,
• A is the area of the surface (determined in this case by the pixel array spacing),
• c is the speed of light in vacuum,
• ν is the reflectivity of the material,
• μ is the specularity of the material,
• θ is the angle of incidence of the radiation with respect to the surface,
• T_mli is the temperature of the MLI,
• σ is the Stefan-Boltzmann constant,
• α is the absorptivity of the MLI material,
• ε_mli is the emissivity of the MLI material,
• ε_eff is the effective emissivity between the MLI and the spacecraft and
• T_sc is the internal temperature of the spacecraft bus.
Note, Eqs. 1 and 2 are derived in Ziebart (2001, 2004) and Eqs. 3 and 4 are developed in detail in Adhya (2005). The acceleration due to AT, ẍ_at(r, t), is calculated according to ẍ_at(r, t) = (W / (m c)) r̂, where W is the signal power in Watts, m is the spacecraft mass in kg and r̂ is the unit vector from the geocentre towards the satellite centre of mass (Ziebart et al. 2007). Equations 1 and 2 are also used for computing the SRP and ERP forces acting on the solar panels, but the spacecraft bus and the solar array are treated separately during force model computation, with the results combined during model implementation, as explained in Sect. 4. A similar approach is used by Darugna et al. (2018); this is done because it simplifies the model computation process, as it is not always practical to incorporate the correct solar panel behaviour into the ray-tracing computations.

Bus model computation scheme
To produce the complete bus model, the pixel array is rotated around the spacecraft in a systematic way, through a discrete set of points, and the ray-tracing computations are performed from each point. Each computation takes the form (a_x, a_y, a_z) = f(ϕ, λ), where the inputs ϕ = atan2(z, √(x² + y²)) and λ = atan2(y, x) represent the latitude and longitude, respectively, of the radiation source in the spacecraft BFS (as defined in Fig. 1); the outputs a_x, a_y and a_z are the accelerations along the X, Y and Z-axes, respectively, in the spacecraft BFS. In the computation scheme used in early analyses (Ziebart and Dare 2001), the pixel array was rotated around the spacecraft Y-axis at 12° intervals in the Earth-probe-Sun (EPS) angle, as shown in Fig. 2. The ray-tracing algorithm was executed at 30 points only, and a Fourier series was fitted to the results to represent the final continuous model. The underlying assumption was of spacecraft attitude behaviour fully consistent with the nominal attitude model, in which the Sun is confined to the XZ plane, and the Z-axis points to the geocentre (see Fig. 2). In subsequent studies, e.g. Ziebart et al. (2005), this modelling assumption was maintained but the number of data points computed was increased to 360 by reducing the increment in the EPS angle to 1°.
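To give a feel for how the per-ray force evaluation described above can be structured, the sketch below implements a flat-plate normal/shear split of the kind referred to by Eqs. 1 and 2. The specific coefficient form used here is a commonly used flat-plate SRP formulation and is an assumption, not a transcription of the paper's equations, which are given in the cited references; the MLI thermal re-radiation terms (Eqs. 3 and 4) are omitted, and all names and sign conventions are stated in the comments.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def flat_plate_force(E, A, nu, mu, n_hat, s_hat, cos_theta):
    """Force on one illuminated pixel of area A [m^2] on a flat surface.

    Assumed conventions (illustrative, not the paper's Eqs. 1-2):
      E         : irradiance of the source at the spacecraft [W/m^2]
      nu, mu    : surface reflectivity and specularity
      n_hat     : outward unit surface normal
      s_hat     : unit in-plane direction of travel of the incoming radiation
      cos_theta : cosine of the incidence angle (w.r.t. the normal)
    """
    if cos_theta <= 0.0:
        return np.zeros(3)            # back-lit or grazing pixel contributes nothing
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    p = E * A / C                     # radiation pressure times pixel area
    # absorbed + specular + diffuse components along the normal (pushes into the surface)
    f_n = -p * cos_theta * ((1.0 + nu * mu) * cos_theta + (2.0 / 3.0) * nu * (1.0 - mu))
    # shear component along the in-plane direction of the incoming flux
    f_s = p * cos_theta * sin_theta * (1.0 - nu * mu)
    return f_n * n_hat + f_s * s_hat
```

Summing such contributions over all illuminated pixels, and dividing by the spacecraft mass, yields the body-fixed acceleration stored at one (ϕ, λ) node of the grids described above.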
GNSS satellites on orbit may depart from a nominal attitude state due to limitations of their attitude control systems. In nominal attitude mode, in the spacecraft BFS, the Sun is confined to the BFS X-Z plane and the spacecraft Z-axis points to the geocentre. Non-GNSS satellites can have attitude laws that are far less constrained than the typical GNSS attitude laws. Thus, as the application of the core technique was considered for non-GNSS missions, the computation scheme was modified. First, the EPS-sweep pixel array orientation scheme was proposed. In this scheme, the pixel array centre points are uniformly distributed around the spacecraft in the central EPS plane (i.e. the spacecraft X-Z plane in this case). Then, each point is rotated by ±1° about the spacecraft Y-axis, resulting in a new set of points, all inclined at ±1° with respect to the spacecraft X-Z plane. This process is repeated at 1° steps to populate a full set of pixel-array centre points in 10°-arcs around the spacecraft, as shown in Fig. 3. For GNSS satellites, the EPS-sweep method produces pixel arrays that are distributed around the primary parts of the spacecraft BFS within which the Sun moves, but coverage remains incomplete in other directions. Therefore, an additional computation scheme based on the spiral points algorithm (Saff and Kuijlaars 1997) was introduced, see Fig. 4. With this method, it is possible to efficiently position the radiation source uniformly on a sphere that encloses the spacecraft. As it provides complete coverage in all directions, it can be used to produce a general-purpose model that makes no prior assumptions about the orientation of the spacecraft with respect to the radiation source(s). Currently, the spiral points computation scheme is our preferred model computation method, but there are additional data processing steps required before the outputs of a computation scheme based on this method can be used in a POD process.

Producing the grid files
The spiral points are not regularly spaced in latitude and longitude. Instead, the points are sorted according to distance along the spiral path, starting at the north pole (ϕ = 90°, λ = 0°) and ending at the south pole (ϕ = −90°, λ = 0°). Each point represents a pixel array centre point for a single instance of a radiation pressure acceleration computation, for the specific geometry of the radiation source with respect to the spacecraft. This is not a standard method for organising the data. Thus, to provide a final model that is easily integrated into the POD processes of model users, we produce a set of acceleration grids, with grid nodes uniformly spaced at 1° intervals in latitude and longitude in the satellite frame. To compute the grid values, we use a modified version of Shepard's method (Shepard 1968), specifically an implementation of the modified quadratic Shepard's method (Franke and Nielson 1980) with a type of full sector search as described in Renka (1988), to determine the optimal set of gridding parameters. The interpolated values are computed in a two-step process. First, a quadratic surface is fitted around each data point. The quadratic (Q) neighbours parameter determines the radius of a circle large enough to include the nearest Q neighbours.
Then, the interpolant at a chosen location is computed using an inverse distance weighted average of the computed quadratic surface fits around each data point. The weighting (W ) neighbour's parameter specifies the number of nearest data points to include for this. There are no clear rules for choosing either the Q or the W parameter for the modified Shepard's method, in that the optimal choice is data set specific. In this work, for the GPS IIR bus model, we developed a quality assurance process for determining this parameter pair using a two-dimensional search through Q-W space. This is how the process works: (i) The radiation pressure model is computed using the spiral points scheme and the EPS-sweep scheme. (ii) The 10,000 spiral points data set is expanded using a padding process, see below. (iii) Using modified Shepard's method, 1600 grids are produced from the output of the spiral points computation, for each acceleration component, with all combinations of Q, W pairs considered, where both Q and W range from 11 to 50. (iv) For each grid file, the interpolated values at each of the 3960 EPS-sweep points are calculated, and the interpolated value is compared with the results from the EPS-sweep computation. This is used to compute the RMS error value, E rms , for that grid file according to: where a EPS,i are accelerations at point i according to the EPS-sweep computation and a grid,i is an interpolated acceleration at point i derived from the grid file. (v) Finally, the grid files that minimise the E rms quantity for each component (X, Y and Z) are chosen as the optimal grid files for that spacecraft. Padding the spiral points data set The modified Shepard's method is a general-purpose interpolation algorithm that works with two-dimensional data that are irregularly scattered by using information from a spec-ified set of the nearest neighbour points. As such, in using this method to create the radiation force model grids, we encounter a problem. The algorithm is not able to identify the correct nearest neighbour points in the regions close to the data set boundaries (i.e. ϕ 90 • or − 90 • , λ 180 • or − 180 • ) when the output from a spiral points computation, labelled using latitude and longitude pairs in a 2-d Cartesian system, is provided as the input. To overcome this, we developed a method that creates an artificially extended spiral points data set that is bounded in the region ϕ ∈ (−270 • , +270 • ) and λ ∈ (−540 • , +540 • ). The transformation rules that map the raw spiral points data, bounded in the region ϕ ∈ (−90 • , +90 • ) and λ ∈ (−180 • , +180 • ), to data points in the extended regions are given in Eqs. 8 to 15, where f (ϕ, λ) is used to populate the extended region using the raw data. A portion of this extended data set around the north pole (ϕ 90 • ) is shown in Fig. 5, where the red data points are the spiral points. The yellow points, the top-padding above the north pole, are reflections of the raw data points about the ϕ 90 • line, which are then shifted by ±180 • in longitude beyond the north pole. Using this, expanded data set gives us a solution to the nearest neighbour problem. Another largely unavoidable issue is caused by the requirement to project the spiral points onto a 2-d Cartesian space, which distorts the apparent distance between points. With the Mercator projection, this effect increases with distance from ϕ 0 • . The impact of this is clearly seen in Fig. 5 where the density of data points becomes sparser approaching ϕ 90 • . 
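For concreteness, the sketch below shows a common way to generate the spiral point directions referred to above (Saff and Kuijlaars 1997) and to convert them to the latitude/longitude labels used when building the grids. It is a generic implementation of the published generalised-spiral algorithm under the stated assumptions, not UCL's production code.

```python
import numpy as np

def spiral_points(n: int) -> np.ndarray:
    """Generalised spiral points of Saff & Kuijlaars (1997): n near-uniform
    directions on the unit sphere, returned as (latitude, longitude) in degrees,
    ordered along the spiral from the north pole to the south pole."""
    pts = []
    phi_prev = 0.0
    for k in range(1, n + 1):
        h = 1.0 - 2.0 * (k - 1) / (n - 1)            # z-coordinate, from +1 down to -1
        theta = np.arccos(h)                          # polar angle from the +Z axis
        if k == 1 or k == n:
            phi = 0.0                                 # poles: longitude fixed at 0
        else:
            phi = (phi_prev + 3.6 / np.sqrt(n * (1.0 - h * h))) % (2.0 * np.pi)
        phi_prev = phi
        lat = 90.0 - np.degrees(theta)
        lon = np.degrees(phi)
        lon = lon - 360.0 if lon > 180.0 else lon     # map to (-180, 180]
        pts.append((lat, lon))
    return np.asarray(pts)

# e.g. the 10,000 directions used for the bus model computations
directions = spiral_points(10_000)
```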
Modelling limitations There are a number of factors limiting the accuracy of the current approach, which include: (a) Mis-modelling of reflected radiation coming off the bus onto the panels and shadowing from the bus onto the panels (and vice versa). Both of these are due to the separate treatment of the solar panels and the spacecraft bus during model computation. Here, there is a trade-off in modelling accuracy between being able to deal with non-standard solar panel orientations and being able to capture the effects of reflections and self-shadowing of the bus onto the panels. An analysis of this trade-off is not presented here but will be considered carefully in future development work. (b) No modelling of the time-evolution of the surface material properties. (c) Incomplete modelling of TRR effects. In the ray-tracing algorithm, we only consider spacecraft bus surfaces that are covered in multi-layer insulation (MLI). This strategy can perform reasonably well on those satellites where the surfaces are mostly covered in MLI, as is the case with the GPS Block IIR and IIR-M bus surfaces. However, it is limited in cases where a significant (15), where the region labels are also defined proportion of the spacecraft surface is not covered in MLI (e.g. the SAR antenna on Sentinel-1; radiators on Galileo spacecraft, etc.). Also, we are not considering the force due to the temperature gradient across the Sunfacing and anti-Sun-facing sides of the solar panels in this study, but this effect has been considered in previous studies (Adhya 2005) and we are working towards developing a simplified approach to account for this. (d) No modelling of thermal recoil forces due to emissions from radiators and other thermal control system components that actively emit heat. The impact of this will be different between the Block IIR and the Block IIR-M satellites. The thermal control system of the Block IIR-M satellites was updated with additional integral heat pipes due to high heat concentrations in the honeycomb structure of the L-band panel due, in part, to increased signal power needs (Hartman et al. 2000). Model implementation The UCL radiation force model implementation requires several inputs. Most of these are spacecraft-specific information that includes position, nominal mass, actual mass if available, the grid files for the bus model, solar panel properties (area, surface material properties), attitude information (in the form of attitude control laws or on-board attitude measurements) to enable accurate determination of the spacecraft BFS and solar panel orientation in the BFS. As explained in Sect. 2, the model for the spacecraft bus is pre-computed, with the results of the computation stored in grids that are uniformly spaced at 1°intervals in latitude and longitude of the Sun position in the spacecraft BFS. To call these models in an orbit determination algorithm, these grids must be read in and stored in a suitable data structure. As a part of this process, it is a good idea to denormalise the grid values according to: where • m n is the nominal mass of the spacecraft, i.e. the value used to compute the grid in kg, • m a is the actual mass of the spacecraft in kg, •ẍ grid are the grid file accelerations in the spacecraft BFS x, y and z-axes in ms −2 , • ẍ grid are denormalised grid file accelerations in ms −2 , • E is the mean solar irradiance at 1 AU. This is because our radiation force modelling software was originally developed for solar radiation pressure modelling only. 
As such, the accelerations given in the grid files are produced using a solar radiation flux model that assumes a constant solar irradiance of 1368 Wm−2 at 1 AU. However, by applying this denormalisation step, it becomes relatively straightforward to use the UCL grids as a general-purpose radiation flux-spacecraft interaction model. With all required inputs provided, and made accessible, it is possible to compute the accelerations due to the separate model components. The bus model contribution is computed according to ẍ_bus(r, t) = κ E_s(r, t) ẍ'_grid(ϕ_s, λ_s) + E_e(r, t) ẍ'_grid(ϕ_e, λ_e), where κ is the shadow crossing function (equals 1 in full phase of the Sun and 0 in umbra); E_s(r, t) and E_e(r, t) are the solar radiation flux and Earth radiation flux, respectively, at the spacecraft's location r at time t; ϕ_s and λ_s are the latitude and longitude, respectively, of the Sun's position in the spacecraft BFS; ϕ_e and λ_e are the latitude and longitude, respectively, of the Earth's position in the spacecraft BFS. For latitude and longitude values between grid nodes, the accelerations should be calculated using bilinear interpolation. The solar panels' contribution to the accelerations due to radiation forcing, ẍ_panel(r, t), is computed by applying Eqs. 1 and 2 directly to the two panel plates using their orientation in the BFS. Finally, the combined acceleration due to radiation forces, ẍ_rad, is calculated according to ẍ_rad(r, t) = ẍ_bus(r, t) + ẍ_panel(r, t) + ẍ_at(r, t), where ẍ_at(r, t) is the acceleration due to antenna thrust.

The GPS IIR/IIR-M model description and data sources
The detailed UCL GPS IIR/IIR-M geometric model is generated from a set of technical drawings that are published in Chapter 5 of Adhya (2005). The primary source for the surface material properties is Fliegel and Gallini (1996). Additional details about how the model was put together are given in Ziebart et al. (2003). According to an unpublished report produced by UCL in collaboration with the Aerospace Corporation, and delivered to the United States Air Force in October 2005, the Block IIR/IIR-M satellites beyond GPS satellite vehicle number (SVN) 51 are equipped with a NAP ultra-high frequency (UHF) antenna (see Fig. 6), which is installed on the same side of the bus as the W-sensor high band antenna used for military applications found on the −X-face. Like the W-sensor high and low band antennae, the NAP UHF antenna is also composed of thin cylindrical components made of aluminium that are covered in black tape (Adhya 2005). Thus, in our model, the same material properties are used, i.e. ν = 0.06 and μ = 0. The authors were unable to determine the full form of the NAP acronym or the purpose of this antenna. The computation of the force models for the bus is performed using Version 5.05 of UCL's Analytical SRP and TRR Modelling Software at a nominal spacecraft mass of 1100 kg and a pixel-array resolution of 1 mm². The bus model grid files for the IIR/IIR-M spacecraft, with and without the NAP antenna, are provided alongside this article as an electronic supplement. Most of the values used for our solar panel model, given in Table 1, are taken from Adhya (2005). The combined surface area of the solar panel yoke arms is taken from Fliegel and Gallini (1996) because the drawings in Adhya (2005) provide only their length. For the rear side of the panels, we use surface properties given in Rodríguez-Solano (2009). In Table 2, we present the statistics for the selected UCL grids for the GPS IIR/IIR-M satellites, both with and without the NAP antenna (values given to 2 decimal places).
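Returning to the implementation steps described earlier in this section, the sketch below shows one way a model user might denormalise a grid, look it up with bilinear interpolation and combine the bus, solar panel and antenna-thrust terms. The grid container, function names and the exact denormalisation convention are schematic assumptions consistent with the description above, not UCL's actual interface.

```python
import numpy as np

E_REF = 1368.0        # reference solar irradiance at 1 AU used to build the grids, W/m^2
C = 299792458.0       # speed of light, m/s

def denormalise(grid: np.ndarray, m_nominal: float, m_actual: float) -> np.ndarray:
    """Rescale grid accelerations to per-unit-irradiance values for the actual spacecraft
    mass (assumed convention: grids were computed at E_REF and the nominal mass)."""
    return grid * (m_nominal / m_actual) / E_REF

def bilinear(grid: np.ndarray, lat: float, lon: float) -> np.ndarray:
    """Bilinear interpolation in a (181, 360, 3) grid with 1-degree node spacing;
    lat in [-90, 90] deg, lon in [-180, 180) deg."""
    i, j = lat + 90.0, lon + 180.0
    i0, j0 = int(np.floor(i)), int(np.floor(j))
    di, dj = i - i0, j - j0
    i1 = min(i0 + 1, 180)
    j1 = (j0 + 1) % 360                     # wrap in longitude
    return ((1 - di) * (1 - dj) * grid[i0, j0] + (1 - di) * dj * grid[i0, j1]
            + di * (1 - dj) * grid[i1, j0] + di * dj * grid[i1, j1])

def radiation_acceleration(grid, kappa, e_sun, sun_latlon, e_earth, earth_latlon,
                           a_panel, signal_power, mass, r_hat):
    """Combine bus (solar + Earth flux), solar panel and antenna thrust accelerations.

    grid   : denormalised grid; e_sun/e_earth : fluxes at the spacecraft [W/m^2];
    a_panel: panel acceleration computed separately; r_hat : geocentre-to-satellite unit vector.
    """
    a_bus = kappa * e_sun * bilinear(grid, *sun_latlon) + e_earth * bilinear(grid, *earth_latlon)
    a_at = (signal_power / (mass * C)) * r_hat          # antenna thrust recoil
    return a_bus + a_panel + a_at
```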
The grids chosen are the ones corresponding to the Q, W parameter pairs that minimise the RMS error when the interpolated grid file values are compared against the results of an EPS-sweep computation. The RMS errors of the Z grids are approximately five times higher than the X grids. This is due to the W-band antennae and for those satellites that have them, the NAP antenna. In the EPS-sweep computation, these protruding elements result in significantly larger cross-section boundaries as the pixel array pans across the Z surfaces. By contrast, these elements have almost no effect on cross-section boundaries as the pixel array pans across the X surfaces. Thus, there are larger errors in the ray-tracing algorithm when computing Z accelerations. This is due to the edge-matching effect, which depends upon the cross-section perimeter and is explained in Chapter 10 of Ziebart (2001). This does not affect the Y grids as the pixel arrays do not pan across the Y surfaces in the same way. For the antenna thrust model, we use the IGS model values for signal power (http://acc.igs.org/orbits/thrust-power.txt), which are 85 W for GPS Block IIR and 108 W and 198 W for GPS Block IIR-M satellites. Model validation We investigate the performance of the new modelling strategy using two software systems: the UCL Orbit Dynamics Library (UCL-ODL) and ESOC's Navigation Package for Earth Observation Satellites (NAPEOS) software (Springer 2009). The UCL-ODL comprises a set of programs developed by researchers at UCL over the years, for the explicit purpose of studying the impact of force modelling strategies that are developed by the UCL Space Geodesy and Navigation Laboratory. NAPEOS is a GNSS data processing package developed by ESOC and used in its contributions to IGS activities to produce satellite orbits, precise clocks, station coordinates, Earth rotation parameters and so on. Analysis of the impact of separate model components using the UCL-ODL Using the UCL-ODL, we performed a series of sensitivity analyses to investigate the impact of the individual model components and verify the implementation method. In these tests, as the reference trajectory, we used precise IGS final orbits, considering all available IIR and IIR-M satellites over the full month of March 2016. For each satellite, we perform multiple orbit predictions, with separate prediction runs corresponding to separate IGS final orbit files. As such, in this part of the analysis, we consider 13 GPS IIR satellites and 7 GPS IIR-M satellites. For those satellites with a complete set of IGS final orbits during the analysis period, we perform 31 prediction runs from 1 to 31 March 2016. In the orbit propagator, the general force modelling strategy uses Earth Gravity Model 2008 up to degree and order 20 (Pavlis et al. 2012(Pavlis et al. , 2013 and the JPL Development Ephemerides 405 (DE405) for third-body gravitational forcing due to the Sun, Moon, Jupiter and Venus (Standish 1998). The solid Earth tide effect due to the Sun and the Moon is accounted for according to Marsh et al. (1987). General relativistic effects are modelled according to Sect. 3.7.3 of Montenbruck and Gill (2000). The numerical integration is based on an 8th order Runge-Kutta integrator. In terms of radiation force modelling strategy, the following scenarios were systematically assessed: • Base model: SRP-only model using the ESOC BW model (Garcia-Serrano et al. 2016). • Test 1: SRP-only, where the bus model comprises grids produced by the UCL ray-tracing software, but only Eqs. 
1 and 2 are used. • Test 2: Same as Test 1, but here the bus model comprises grids that account for both SRP and the effects of MLI TRR (Eqs. 3 and 4). • Test 3: Same as Test 2, but with ERP turned on. • Test 4: Same as Test 3, but with AT turned on. In Fig. 7, we show the impact of different modelling strategies on orbit prediction error over a single 12-h arc for the GPS satellite SVN 46. As smaller and smaller effects are considered in the modelling strategy, the orbit prediction results improve, giving a general indication that the models are performing as we expect. In Table 3, we provide orbit prediction errors statistics for all Block IIR/IIR-M satellites on orbit during the analysis period. The best results, in the sense that the RMS and the maximum 3-d orbit prediction error over a 12-h arc are minimised, at 0.648 m and 1.440 m, respectively, are produced by the method that considers the combined effects of SRP, bus MLI TRR, ERP and AT. For that modelling approach, the full set of statistics for all satellites that were considered in the analysis, are given in Table 4. An interesting observation from these results is that the modelling of the bus MLI TRR, an effect that is not considered by most IGS analysis centres, has a significant impact on reducing orbit prediction error over the arc. Analysis of the impact of the new bus model on POD using NAPEOS To assess the impact of introducing our grid-based model of the spacecraft bus on the quality of orbit estimates, we ran a number of POD analyses using NAPEOS. The analysis uses 100 tracking stations of the IGS Multi-GNSS Experiment (MGEX) (Montenbruck et al. 2017b) and all observed GPS satellites, but the results presented here focus on the 13 GPS Block IIR and 7 Block IIR-M satellites that were on orbit during the analysis period. The data processing method broadly follows ESOC's IGS analysis strategy (ftp:// igs.org/pub/center/analysis/esa.acn) where the basic observables are undifferenced carrier phases and pseudoranges and the integer carrier phase ambiguities are resolved (Ge et al. 2005). The Earth gravity model used is EIGEN-GL05C up to degree and order 12 (Foerste et al. 2008), and the JPL Development Ephemerides 405 (DE405) is used for third-body gravitational forcing due to the Sun, Moon and all solar system planets including Pluto (Standish 1998). The effects of solar Earth tides, ocean tides, solid Earth pole tide, oceanic pole tide and general relativistic corrections are accounted for according to the IERS conventions Fig. 7 Comparison of orbit prediction error over a single instance of a 12-h arc for the GPS IIR satellite SVN 46, using different modelling strategies (Petit and Luzum 2010). The numerical integration uses the Adams-Bashforth/Adams-Moulton 8th order predictioncorrection multistep method, as described in Springer (2009). With the core data processing strategy fixed, we run the POD process using four different orbit modelling strategies, batch processed at 24-h intervals, from 00:00:00 to 23:59:30 in GPS time, thus completely independent from day to day. For the orbits, we generate estimates for the midnight epoch, such that there is an overlap between consecutive solutions at a single point. A full year (2016) is considered, so there are 366 independent solutions and 365 overlap points. In addition to the orbit model parameters, station coordinates and Earth rotation parameters are also estimated. The orbit models considered include: 1. 
ECOM: No a priori radiation force model, only the reduced ECOM (Springer et al. 1999) and three constrained along-track parameters (constant, cosine and sine with argument of latitude as argument). Here, the along-track parameters are included as soak-up parameters to absorb the effects of orbit mis-modelling, which tends to manifest strongly in the along-track direction, as the results of Sect. 5.1 demonstrate. 2. ECOM + BW: Same estimation strategy as ECOM-only, but here we also include an a priori radiation force model using the ESOC BW model of the GPS IIR and IIR-M spacecraft (Garcia-Serrano et al. 2016). 3. ECOM + UCL: Here, the only difference with the ECOM + BW strategy is that the box is replaced by the grid-based model. 4. ECOM-2: No a priori radiation force model, only the D4B1 extended ECOM (Arnold et al. 2015) along with the three constrained along-track parameters for consistency. Here, our analysis with the ECOM-2 model is not as comprehensive as it might be (as it was not in the scope of our original study plan). This will be addressed in our future work. The pseudorandom noise (PRN) code assigned to those satellites during the analysis period is indicated. Here, the radiation force modelling strategy that accounts for the effects of SRP, bus MLI TRR, ERP and AT is applied. Satellites in eclipse season during the analysis period are indicated with an asterisk (*) in the PRN column. Units: m In Table 5, we present statistics of the estimated ECOM parameters from methods 1-3. We do not present the statistics for ECOM-2 because comparison between models with different parameterizations cannot be made directly. In general, as the daily solutions are fully independent of each other, smaller absolute values for both the mean and the RMS indicate an improvement in the force modelling. Using the ECOM + UCL model, we see a reduction in the absolute value of both the mean and the RMS values of the D 0 , B 0 and all along-track parameters (except the mean of the A 0 parameter that is the same for both), when comparing with results using the ECOM + BW model. We see a reduction in the RMS of the Y 0 parameter, but the mean increases. The mean and RMS of the Bsin and Bcos parameters increase, which indicate the presence of systematic effects that the ECOM + UCL combination is not effectively dealing with. In Table 6, we show the statistics of the orbit overlap differences. A smaller value for both the mean and the RMS indicates an improvement in the force modelling. The RMS values in all components are smallest with ECOM + UCL approach. However, the mean values for both the radial and along-track components are smallest with the ECOM-2 approach and the mean value for the cross-track component is smallest with the ECOM approach. In the 3-d orbit overlap values, we see a drop of 9% and 4% in the mean and RMS values, respectively, when we compare the ECOM + UCL results against ECOM + BW. The performance of the ECOM, ECOM + BW and ECOM + UCL orbit modelling strategies was also assessed in a series of orbit prediction tests. In these tests, 3 days of independently estimated orbits were used to determine a best fitting orbit represented by position and velocity coordinates and the eight parameters of the ECOM method described above. This best fitting orbit was then propagated into the future for 14 days, after the end of the 3-day fit interval. These predicted orbits were compared against the estimated orbits, on the first day and the last day of the prediction interval. 
In these tests, ECOM + UCL is the reference model and the orbits estimated using ECOM + UCL are used as the basis of the 3-day orbit fit and as the ground truth. These tests are done over 2016. Thus, the first 1-day prediction interval considered is day 4 of 2016 and the first 14th-day prediction interval is day 17. The results from these tests are presented in Table 7. Comparing RMS orbit prediction errors using ECOM + UCL against ECOM + BW, after 1-day, we see the errors increase by 0.21 cm in the radial direction but fall by 2.20 cm and 1.81 cm in the along-track and cross-track directions, respectively. For the 14th day predictions, we see a reduction in the RMS orbit prediction errors of 20.35 cm and 13.79 cm in the radial and cross-track directions, but an increase of 15.52 cm in the along-track direction. Overall, these results suggest ECOM + UCL is outperforming ECOM + BW, in the day 1 and day 14 orbit prediction tests, but there are limitations to this analysis that should be addressed in future work for improved confidence in our findings. For example, because we use it as our reference model, it is possible that ECOM + UCL is favoured in these tests. Also, systematic errors, such as those that depend on the elevation of the Sun above the orbital plane, do not show up in the yearly statistics. A more complete picture of the comparative performance of the models should be investigated through time series analysis. Conclusions and discussion Recent developments to our radiation force modelling strategy were analysed using a new model for the GPS Block IIR and Block IIR-M satellites. Advances to our approach include: an enhanced bus model computation scheme (based on the spiral points algorithm) that uses ray-tracing to determine the radiation flux-spacecraft interaction from 10,000 points distributed uniformly on a sphere surrounding the spacecraft; an improved method (from a numerical stability perspective) for producing grids spaced at 1 • ×1 • intervals in latitude and longitude in the spacecraft frame using a padding process to extend the spiral points data in all directions to reduce the impact of edge effects; a quality assurance process that uses results from an EPS-sweep computation with 3960 points for selecting an optimal set of grids. The models produced, and the proposed implementation method, were refined using a series of verification tests within the UCL-ODL. The impact of introducing UCL's grid-based model into a full POD process was investigated by analysing the statistics of estimated orbit model parameters, orbit overlaps and orbit prediction errors. Combined, the results provide a good indication that introducing high-fidelity analytical force modelling into the POD process can improve the quality of the estimated orbits and further refinements of the approach to address current limitations are worth pursuing. One of the difficulties with the high-fidelity approach lies in acquiring the spacecraft data (geometry, surface material Bold values indicate the lowest value error statistics among the various modelling strategies being compared Units: cm properties, attitude history, mass and mass history) that is required to produce accurate models. It is hoped that the results in this paper adds evidence to the case for making this data available to the science and engineering community, where it is possible-especially the detailed geometry and surface material properties. 
Using an accurate spacecraft model, it is possible to compute the high-fidelity radiation force model. However, the model computation time remains a problem and limits the number of development and testing cycles that we are able to perform. A typical model computation involves a 5 × 5m 2 pixel array projected onto the spacecraft model at a 1 mm pixel spacing. In such a case, there are 2.5 × 10 7 rays per incoming radiation flux direction, 1 × 10 4 different directions in the spiral points computation scheme, and so this requires 2.5 × 10 11 rayspacecraft surface interaction calculations. As it stands, this process takes~3 days to compute (job run-time as opposed to CPU time) on the UCL high-performance computing facility, Legion@UCL (can take longer when the facility is under heavy load), followed by~1 day of analyst's time to work through the process of generating the grids. Therefore, it is worth exploring methods for reducing model production times. We are beginning to explore the use of a graphical processing unit (GPU) to exploit standard computer graphics techniques in the computation of the radiation fluxspacecraft surface interaction-a process that naturally lends itself to being parallelised. This idea is demonstrated in Grey et al. (2017) where an OpenCL implementation of a radiation source-satellite surface interaction model that includes accurate modelling of diffuse reflection and apparent size of illumination source is used to simulate the impact of photoelectron emission on spacecraft surface charging. Also, we are exploring the use of an algorithm that re-organises the UCL spacecraft model components into a k-dimensional tree data structure, to speed up the ray-tracing algorithm by greatly reducing the number of ray-surface interaction tests that need to be performed (Li et al. 2018).
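As a quick check of the computational cost quoted above, the snippet below reproduces the stated ray counts from the pixel array size, pixel spacing and number of spiral point directions; the run-time figures themselves depend on the hardware and are not reproduced here.

```python
# Reproduce the ray-count arithmetic quoted in the text above.
array_side_m = 5.0          # 5 m x 5 m pixel array
pixel_spacing_m = 1e-3      # 1 mm pixel spacing
directions = 10_000         # spiral point directions

rays_per_direction = (array_side_m / pixel_spacing_m) ** 2
total_interactions = rays_per_direction * directions

print(f"{rays_per_direction:.1e} rays per direction")                      # 2.5e+07
print(f"{total_interactions:.1e} ray-surface interaction calculations")    # 2.5e+11
```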
2019-06-23T14:09:10.687Z
2019-06-10T00:00:00.000
{ "year": 2019, "sha1": "63ba6f61b7c155b2303566ae2441bdd2862642b8", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00190-019-01265-7.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e75af0e059f32484bb4622c0b5df4db687a12a2b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
119647286
pes2o/s2orc
v3-fos-license
Orthogonal basis for the Shapovalov form on $A_n$ Let $U$ be either classical or quantized universal enveloping algebra of $\s\l(n+1)$ extended over the field of fractions of the Cartan subalgebra. We suggest a PBW basis in $U$ over the extended Cartan subalgebra diagonalizing the contravariant Shapovalov form on generic Verma module. The matrix coefficients of the form are calculated and the inverse of the form is explicitly constructed. Introduction The contravariant bilinear form on Verma modules is a fundamental object in the representation theory of simple complex Lie algebras and quantum groups, which is responsible for many important properties including irreducibility, [1]. Its inverse is closely related with intertwining operators [2], the dynamical Yang-Baxter equation [3], and invariant star product on homogeneous spaces, [4,5]. Contravariant forms on highest weight modules descend from a bilinear form on the universal enveloping algebra with values in the Cartan subalgebra. It was introduced and studied by Shapovalov [6], who computed the determinant for its restriction to every weight subspace. It was extended to quantum groups in [7]. The determinant formula was further generalized for parabolic Verma modules over the classical universal enveloping algebras in [8]. These results provided a criterion for the corresponding modules to be irreducible, since the kernel of a contravariant form is invariant. Applications to mathematical physics require the knowledge of the inverse Shapovalov form, which explicit expression is an open problem for general simple Lie algebras. The most important advance in this direction was made in [9], where matrix coefficients of the pairing on Mickelsson algebras were calculated. However, [9] does not address the Verma modules focusing on different problem. Although the inverse Shapovalov form for the A n series can be derived from [9], a self-contained presentation is still missing in the literature. In the present paper we give an independent elementary derivation based on the definition of the quantum group. We construct the orthogonal basis of the Shapovalov form on U q gl(n + 1) and obtain a similar result for U gl(n + 1) via the classical limit. Of course, the classical case can be done directly, in an even simpler way. The ground field is fixed to C but can be changed to an arbitrary field of zero characteristic. We consider a system of "dynamical root vectors"ê ±µ in the Borel subalgebras. Upon appropriate ordering, it gives rise to a Poincaré-Birkhoff-Witt (PBW) basis over the (extended) Cartan subalgebra. The vectorsê ±µ are constructed from the Chevalley generators through generalized commutators with coefficients in the Cartan subalgebra. The positive and negative dynamical root vectors are related via ω(ê ±µ ) =ê ∓µ , where ω is the anti-algebra Chevalley involution. This PBW system diagonalizes the Shapovalov form on every Verma module M λ and is complete if the highest weight λ is away from a family of hyperplanes. This family is wider that the zero set of the Shapovalov determinant, which is known to be ∪ α∈R + {λ|(λ+ρ, α) ∈ N} for U(g) and ∪ α∈R + {λ|q 2(λ+ρ,α) ∈ q 2N } for U q (g). Our set of singular points is still contained in ∪ α∈R + {λ|(λ, α) ∈ Z} for U(g) and in ∪ α∈R + {λ|q 2(λ,α) ∈ q 2Z } for U q (g). Away from this set, the dynamical PBW system is a basis. We compute the matrix coefficients and construct the inverse form for generic weight, off the union of hyperplanes where some of the matrix coefficients vanish. 
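For reference, the determinant formula due to Shapovalov referred to in this introduction is, in the classical case and up to a nonzero constant factor, usually stated as follows (this is the standard formulation from the literature, not a quotation from the present paper):

$$ \det S_\eta(\lambda) \;=\; \prod_{\alpha \in R^+} \prod_{m \geq 1} \Bigl( (\lambda + \rho, \alpha) - \tfrac{m}{2}(\alpha, \alpha) \Bigr)^{P(\eta - m\alpha)}, $$

where $S_\eta(\lambda)$ is the restriction of the contravariant form to the weight subspace of weight $\lambda - \eta$ in the Verma module $M_\lambda$ and $P$ is the Kostant partition function; the quantum analogue replaces each linear factor by its $q$-version.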
The dynamical root vectors project to generators of the Mickelsson algebras associated with a chain of subalgebras sl(i) ⊂ sl(i + 1), i = 2, . . . , n. Essentially they are raising and lowering operators participating in construction of the Gelfand-Zetlin basis in finite dimensional U q (g)-modules, [10]. Elements of the Gelfand-Zetlin basis are formed by common eigenvectors of the commutative subalgebra generated by U q (h) and the center of U q sl(i) , i = 2, . . . , n + 1. The dynamical PBW monomials feature the same property and form the Gelfand-Zetlin basis in Verma modules. The paper is organized as follows. After the preliminary section containing the basics on the quantum group U q (sl(n + 1)), we introduce the dynamical root vectors and study their key properties. Then we show that, upon an appropriate ordering, the systems of positive and negative dynamical PBW monomials give rise to dual bases in right lower and left upper Verma modules with respect to the cyclic Shapovalov pairing. We compute the matrix coefficients and construct the inverse of the cyclic form. Further we pass from the cyclic form to contravariant and prove that the PBW system of negative dynamical root vectors yields an orthogonal basis. This should be regarded as a refinement of the cyclic result and it is based on a "row-wise commutativity" of dynamical root vectors proved therein. Further we illustrate the key steps on the example of A 2 . In the last section, we apply the dynamical root vectors to construction of singular vectors in the Verma modules. 2 Preliminaries: the quantum group U q (sl(n + 1)) For a guide in quantum groups, the reader is referred to [1] or [11], or to the original paper [12]. In this section we collect the facts about quantum sl(n + 1) that are relevant to this exposition. Let us fix some general notation. We work over the ground field C of complex numbers. By Z we denote the set of all integers, by Z + the subset of non-negative and by N the subset of strictly positive integers. Given a, b ∈ Z we understand by [a, b] ⊂ Z the interval of all integers from a to b inclusive. We also use the notation (a, b], [a, b), and (a, b) for intervals without one or two boundaries. Throughout the paper, g stands for the Lie algebra g = sl(n + 1), n 1. The case n = 1 is trivial, and we are mostly interested in n 2. Fix a Cartan subalgebra h ⊂ g and let R ⊂ h * denote the root system of g with a subsystem R + of positive roots, relative to h. The choice of R + facilitates a triangular decomposition, g = n − ⊕ h ⊕ n + , where n ± are nilpotent Lie subalgebras corresponding to the positive and negative roots. Let (., .) designate the canonical inner product on h * . For any pair of integers i, j ∈ [1, n] such that i j let g ij ⊂ g be the Lie subalgebra sl(j − i + 2) corresponding to the roots α i , . . . , α j ∈ Π + . We also consider the Cartan subalgebra h ij = g ij ∩ h and nilpotent subalgebras n ± ij = g ij ∩ n ± , so that g ij = n − ij ⊕ h ij ⊕ n + ij is a triangular decomposition compatible with the decomposition of g. We assume that q ∈ C is not a root of unity and define and the Serre relations The elements e i and f i are called, respectively, the positive and negative Chevalley gen- The quantum group can be also defined as an algebra over the ring of fractions of C[q, q −1 ] over the multiplicative system generated by q m − 1, m ∈ N. upon the substitution t ±1 i = q ±h i . This algebra, denoted by U (g), is a deformation of the classical universal enveloping algebra U(g). 
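For reference, one standard presentation of $U_q(sl(n+1))$ in the Chevalley generators $e_i, f_i, t_i^{\pm 1}$ is the following (the paper's conventions may differ in minor details):

$$ t_i t_j = t_j t_i, \qquad t_i e_j t_i^{-1} = q^{a_{ij}} e_j, \qquad t_i f_j t_i^{-1} = q^{-a_{ij}} f_j, \qquad [e_i, f_j] = \delta_{ij}\, \frac{t_i - t_i^{-1}}{q - q^{-1}}, $$

together with the Serre relations

$$ e_i^2 e_j - (q + q^{-1})\, e_i e_j e_i + e_j e_i^2 = 0 \quad (|i - j| = 1), \qquad e_i e_j = e_j e_i \quad (|i - j| > 1), $$

and the same relations with $e$ replaced by $f$, where $(a_{ij})$ is the Cartan matrix of $sl(n+1)$.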
It is still convenient to use the notation . This q-version of the Cartan subalgebra is the polynomial ring on a torus, while U(h) is a polynomial ring on a vector space. Note that h ⊂ U q (h) contrary to U (h), which stands for the subalgebra in U (g) generated by {h i } n i=1 . Observe that U q (g ij ) is a natural subalgebra in U q (g) for any pair i, j ∈ [1, n] such that i j. Here are other subalgebras of importance in U (g). The elements e i and f i generate, respectively, the subalgebras U q (n + ) and U q (n − ). Their C[[ ]]-extensions U (n ± ) are deformations of the classical universal enveloping algebras U(n ± ). The quantum Borel subalgebras U q (b ± ) are generated by U q (n ± ) over U q (h). All positive roots in R + are sums α i + . . . + α j , where i j. Put e ii = e i , f ii = f i and extend this definition inductively by Here [x, y] q is the generalized commutator xy − qyx. Along with e ii , f ii we will also use the usual notation e i , f i . Note that the positive and negative root vectors are related via the Chevalley involution, ω(f ij ) = e ij . We define n ± in the q-case as the linear spans n + = {e km } k m ⊂ U q (g) and n − = {f km } k m ⊂ U q (g). These are U (h)-submodules, which are trivial deformations of the classical U(h)-modules n ± ⊂ g. Similarly, we put n ± km = n ± ∩ U q (g km ), so that n + km = Span{e ij } k i j m and n − km = Span{f ij } k i j m . In what follows, we deal with a general algebraic concept, which we recall here. Consider a unital associative algebra A and a non-empty subset I ⊂ A. Let AI denote the left ideal generated by I. We denote by A I the subset of elements x ∈ A such that Ix ⊂ AI. We write simply A a when I = {a} consists of one element a. Obviously A I is not empty, A I ⊃ AI, and is a subalgebra in A. It is the normalizer of AI, i.e. the maximal subalgebra in A where AI is a two-sided ideal. For every x ∈ A I the map x : AI → AIx ⊂ AI amounts to an anti-homomorphism A I → End A (AI), where the ideal AI is regarded as a natural In our setting, A will be U := U q (g). If I a subset of simple positive root vectors and g ′ is the corresponding reductive subalgebra in g, the quotient A I /AI is the Mickelsson algebra S(g, g ′ ), [13]. We finish our introduction to the quantum special linear group with two lemmas that will be used in what follows. Let S m denote the symmetric group of permutations of m symbols. Proof. Consider the case σ = id first, using induction on m. For m = 2 the statement immediately follows from the Serre relations: 2m U, while the second statement is obvious. Suppose that m > 2 and the lemma has been proved for all i from the interval [2, m). Then, for such i, the Serre relations give where ψ = f i+2 . . . f m and ψ = 1 if i = m − 1. By the induction assumption, the first term belongs to n − 2i U. In the second term, f i+1 commutes with f 1 . . . f i−1 . Therefore, the second term belongs to f i+1 U ⊂ n − 2m U, and the sum lies in n − 2m U. For i = m, we have This proves the statement for all m and σ = id. as σ(i) > i 2. This proves the statement for σ = id. Then for all u ∈ U q (n + ik ), [u, f j m ] ∈ n − j+1m U. Proof. Introduce a grading in U q (n + ) setting deg e j = 1 for all j ∈ [1, n]. Let u ∈ U q (n + ik ) be a Chevalley monomial. The statement is trivial for zero degree u, so we assume deg u > 0. Present u as a product u = u ′ e l for some e l , u ′ ∈ U q (n + ik ). If deg u ′ = 0 and u = e l , then the statement follows from the formula [e l , f j m ] = δ jl f j+1m q −h j , cf. Lemma 2.1. 
For deg u 1, induction on deg u gives where the last summand is present only if j + 2 m. The right-hand side is contained in Dynamical root vectors We set up an ordering on positive root vectors e ij induced by the lexicographic ordering on pairs (i, j), i j. The negative root vectors f ij are ordered in the opposite way. These orderings are normal and compatible with a reduced decomposition of the maximal element in the Weyl group of g. The ordered systems of root vectors generate a PBW basis in the algebras U q (n ± ), [11]. The Shapovalov form, which is the subject of our interest, is very complicated in this basis. We need a new basis suitable for our study, possibly on the extension of U q (g) over the ring of factions of U q (h) over some multiplicative system. This basis is introduced in this section. The right-hand side can be expressed through "generalized commutators" with coefficients from the Cartan subalgebra. For instance, Note that the Cartan coefficients inê i+1k commute with e i and can be gathered on the left. The name dynamical follows the analogy with the dynamical Yang-Baxter equation from the mathematical physics literature, [3]. In a representation, the Cartan coefficients are specialized at the weight of a particular vector the elementsê ij andf ij act upon. This dependence on the weight is "dynamical" rather than "statical" since the Cartan coefficients are not central in U q (g). The key properties of dynamical root vectors are described by the following proposition. Proof. We will check only the first line. The second line is obtained from it via the Chevalley involution. It is obvious thatf ij ∈ U e k for k > j, so we assume i < k j. For j = i + 1 we have The retained terms give hence [e j ,f ij ] ∈ Ue j , as required. For the right equality in the first line, we have where we have omitted the terms from Ue i . Modulo those terms, the last expression is equal This proves the proposition for j = k = i + 1. This immediately implies the inclusion [e k ,f ij ] ∈ U e k for such k, thanks to the recursive where the omitted terms lie in Ue k . By the induction assumption, the remaining terms give up to the terms from Ue i+1 . This is equal to the product of f ifk+1j (observe that f i commutes withf k+1j =f i+2j ) and the Cartan factor Therefore, [e i+1 ,f ij ] ∈ Ue i+1 , as required. To complete the induction, we need to check the rightmost equality: where we have dropped the terms from Ue i . The Cartan factor in the brackets is This completes the induction on l = j − i and the proof of the proposition. Let h α ∈ h denote the element determined by α(h α ) = (λ, α) for all λ ∈ h * . Consider the multiplicative system in U q (h) generated by [h α + m] q , α ∈ R + , m ∈ Z, and denote bŷ U q (h) the ring of fractions of U q (h) over this system. One can check that there is a natural extension,Û q (g), of U q (g) overÛ q (h). The algebraÛ q (g ij ) contains an idempotent p ij of zero weight such that p ijÛq (g) = {x ∈Û q (g) : [14,15]. It is called extremal projector of the subalgebraÛ q (g ij ). Proof. By construction,f ij p belongs to pU q (b − ) and hence to pU It follows thatê ij andf ij generate a PBW basis inÛ q (g) overÛ q (h). Verma modules Thanks to a PBW basis, the algebra where the middle arrow is the multiplication. The form is ω-contravariant, i.e. the conjugation operation factors through ω. The left ideal U q (g)n + lies in the kernel of the form, which therefore restricts to the quotient U q (g)/U q (g)n + . 
It is convenient to drop the extra structure of Chevalley involution and consider pairings between left and right modules, with cyclicity in place of contravariance. Recall that a pairing ., . : V ⊗W between a right module V and left module W is called cyclic if xu, y = x, uy for all x ∈ V , u ∈ W , and u ∈ U q (g). Specifically the cyclic Shapovalov form is defined similarly to contravariant but without the first arrow. It induces a cyclic pairing between the right and left quotient modules n − U q (g)\U q (g) and U q (g)/U q (g)n + . The Shapovalov form on U q (g) is equivalent to a family of forms on Verma modules parameterized by the highest weight λ ∈ h * . Consider a one dimensional representation of the Cartan subalgebra U q (h) determined by the assignment It extends to a representation of U q (b ± ) by letting λ(n ± ) = 0. We regard C as a left U q (b + )-module and right U q (b − )-module with respect to these extensions and denote it by C λ . Define the right and left Verma U q (g)-modules M ⋆ λ and M λ to be the induced modules and v λ ∈ M λ their canonical generators. They carry the highest weights. By construction, it is normalized to v ⋆ λ , v λ = 1 and it is a unique cyclic pairing between M ⋆ λ and M λ that satisfies this condition. In order to simplify formulas, we suppress the brackets Recall that a vector in M λ is called singular if it is annihilated by n + . Similarly, a vector in M ⋆ λ is called singular if it is annihilated by n − . Singular vectors generate submodules, where they carry the highest weights. For a subalgebra g ij ⊂ g we say that a vector in M λ is g ij -singular or n + ij -singular if it is killed by n + ij . Similarly, we say that a vector in M ⋆ λ is g ij -singular or n − ij -singular if it is killed by n − ij It is also convenient to extend the form to a cyclic paring M ⋆ µ ⊗ M λ → C by setting it nil for µ = λ. Given a root subsystem Π ′ ⊂ Π, consider the corresponding semisimple Lie Proof. The restriction of the form to M ⋆ A cyclic bilinear form between right and left Verma modules is unique up to an overall factor. The set {f (l), e(l)} l∈T ⊂ U q (g) is a PBW basis over U q (h). Similarly we definef (l) and e(l) using the dynamical root vectors in place of standard. We call {f (l),ê(l)} l∈T dynamical PBW system. In what follows, we study the set of vectorŝ Diagonalization of the Shapovalov form We prove that, upon a normalization, they form dual bases in generic M λ and M ⋆ λ with respect to the cyclic pairing. With respect to the contravariant form on generic M λ , the system {f (l)v λ } l∈T is an orthogonal basis. Note that the ordering of the dynamical root vectors is the same lexicographic ordering of the standard root vectors set up in Section 3. We call it normal. We have to consider different row-wise orderings as well. Let σ = (σ n , . . . , σ 1 ) ∈ S n × . . . × S 1 be an n-tuple of permutations. Defineê σ (l k ) = σ k ê(l k ) to be the result of permutation σ k applied to the simple factors ofê(l k ) and putê σ (l) =ê σ 1 (l 1 ) . . .ê σn (l n ). We prove in Section 7 thatê σ (l) is independent of σ but we have to distinguish between different orderings until then. We will suppress the subscript σ and understand byê(l) a monomial with arbitrary although fixed ordering. This convention stays in effect until the end of the section. In the subsequent sections, we use only two orderings: the normal and an alternative, for which we fix a special notation. 
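For reference, the Verma modules and the contravariant form used above can be written in standard notation as

$$ M_\lambda = U_q(g) \otimes_{U_q(b^+)} \mathbb{C}_\lambda, \qquad M^\star_\lambda = \mathbb{C}_\lambda \otimes_{U_q(b^-)} U_q(g), \qquad \langle u\, v_\lambda, \; w\, v_\lambda \rangle = \lambda\bigl(\pi(\omega(u)\, w)\bigr), $$

where $\pi: U_q(g) \to U_q(h)$ is the projection along $n^- U_q(g) + U_q(g) n^+$ coming from the triangular decomposition, so that $\langle v_\lambda, v_\lambda \rangle = 1$ and the form is contravariant with respect to $\omega$ (the paper's normalization may differ).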
The basis of positive (negative) root vectors allows us to identify the factorspaces n ± ij /n ± kj with the linear complements n ± ij ⊖ n ± kj ⊂ n ± ij , for all i, j, k ∈ [1, n] such that i k j. By U q (n ± ij /n ± kj ) we denote the subalgebras in U q (g) generated by n ± ij /n ± kj . Similarly we define U q (h)-submodulesn + ij = Span{ê lk } i l k j ,n − ij = Span{f lk } i l k j and n ± ij /n ± kj =n ± ij ⊖n ± kj ⊂n ± ij . By U q (n ± ij ) ⊂ U q (b ± ij ) we denote the subalgebras generated bŷ n ± ij and by U q (n ± ij /n − kj ) the subalgebras generated byn ± ij /n + kj . Clearly We shall see in Section 7 that the algebras U q (n ± in /n ± i+1n ) are commutative. Proof. An immediate consequence of Proposition 3.1. i ∈ [1, n], of weights λ l,i (mind the right action of U q (h) on M ⋆ λ ). Proof. Due to Lemma 5.1, for each i the vector . By construction, λ k,0 = λ l,0 = λ. Suppose that we have proved the equality λ k,i−1 = λ l,i−1 for some i ∈ [1, n). Then v ⋆ λê (k)f (l)v λ can be presented as the matrix coefficient v ⋆ λ l,i−1ê (k i ) . . .ê(k n )f (l n ) . . .f (l i )v λ l,i−1 of a cyclic paring between the U q (g i+1n )-modules M ⋆ λ k,i and M λ l,i . It is zero unless λ k,i = λ l,i . This is true for all i ∈ [0, n], by induction on i. The equalities λ k,i − λ k,i−1 = λ l,i − λ l,i−1 for i ∈ [1, n] translate to a triangular system of equations on the differences k is − l is : namely, n s=j (k is − l is ) = 0 for all j = i, . . . , n. It is immediate that k i = l i for all i ∈ [1, n] and therefore k = l. If follows that where v µ ∈ M λ and v ⋆ µ ∈ M ⋆ λ are g kn -singular vectors. This is done in the following section. We adopt the convention that products b i=a are not implemented (formally set to 1) once a > b. For every l ∈ T and every k ∈ [1, n] we define [µ sr − i + l s−1 + 1] q , l k = (l n , . . . , l k ). According to this definition, A l,k (µ) actually depends on the k-th row l k ∈ Z n−k+1 + of l. Proof. The elementf (l 1 ) is a monomial in the dynamical root vectorsf 1m , where m ranges This implies ψn . . . f σ(k) enteringf 1k with σ = id belongs to n − 2k U by Lemma 2.3. Therefore, the vector v ⋆ λê l 1 ψφ ∈ v ⋆ λê l 1 n − 2n U is nil. By this reasoning, we can consecutively replace eachf 1m with f 1 . . . f m m i=2 [h im + 1] q factor by factor from left to right. The Cartan coefficients produce scalar multipliers, which gather to the overall factor A l,1 (λ). Finally, we replace each f 1 . . . f m with f 1m by a similar reasoning moving in the opposite direction, from right to left. Next we calculate the matrix coefficient v ⋆ λê l 1n f l 1n v λ . For all k, m ∈ [1, n] such that k m we define polynomial functions C km : h * → C by Proof. We do induction on n. The statement for n = 1 immediately follows from the defining relations. Suppose that n > 1 and present f 1n as Observe that the relation f 2n f 1n = qf 1n f 2n easily follows from Lemma 2.1. Along with the relation [e 1 , f 1n ] = f 2n q −h 1 from the same lemma, this yields . . e s i n−1 and write Applying the Leibnitz rule to these commutators, we can ignore f l−1 1n : The omitted terms contain residual vectors coming from f 2n . They lie in n − 2n U and vanish in the matrix coefficient. Modulo n − 2n U, Lemma 2.1 yields where l ′ = l − 1. This proves the statement for φ i , i ∈ [2, n]. Consider the remaining case of φ 1 . Using the relation [e n , f 1n ] = −qf 1n−1 q hn and the relation f 1n−1 f 1n = q −1 f 1n f 1n−1 from Lemma 2.1, we get We have used [e i , f 1n ] = 0 for i ∈ [2, n − 1] in this calculation. 
Further, We replace the product f 1 f 2n with f 1n , since the calculation is done modulo n − 2n U. Thus, Notice that the factor in the brackets is equal to This completes the proof. The coefficients D i,l (λ) satisfy the reduction formulas which readily follow from their definition. As above, l ′ = l − 1. Proof. Let us calculate the vectorê 1n f l 1n v λ modulo n − 2n Uv λ . Consider the presentationê 1n = n i=1 a i (h)φ i + . . . with suppressed Chevalley monomials from Un + 2n−1 . They make zero contribution to the vectorê 1n f l 1n v λ , because n + 2n−1 commutes with f 1n and kills f l 1n v λ , by Lemma 2.1. We need the explicit expression only for a 1 (h) = (−1) n−1 n i=2 [h in ] q , which is readily found from the definition ofê 1n . We replaceê 1n with its specialization at the weight λ − l ′ α 1n and writê We have used the reduction formulas (6.7) in the second equality. Plug in here the expressions and the expression for the difference D 1,l (λ) − q −l ′ [l] q D 1,1 (λ − l ′ α 1n ) from (6.6). This gives the coefficient before f l 1n v λ in (6.9). It is divisible by q −l ′ [l] q n k=2 [λ kn − l ′ ] q , which can be factored out. The remaining factor is In the last section we demonstrate thatě(l) =ê(l), but the proof of this nontrivial fact is indirect and based on the knowledge of the matrix coefficients v ⋆ λě (l k )f (l k )v λ . Lemma 6.5. Put l 1 = (l n , . . . , Proof. The above factorization of the matrix coefficient is a consequence of the formula Let ν denote the weight of this vector and letẽ 1k ∈ U q (n + 1k ) be the specialization ofê 1n at ν. It follows from Lemma 2.3 and Lemma 2.2 that [ẽ 1k , ψ] ∈ n − 2n U. Therefore, we can replacê e 1k ψ with ψẽ 1k mod n − 2n U. Finally, observe that the Cartan coefficients ofê 1k are confined within U q (h 2k ) and consequently commute with ψ. Therefore, ψẽ 1k can be replaced with So far in this section we dealt with the matrix coefficients v ⋆ λě (l 1 )f (l 1 )v λ , i.e. of the form v ⋆ λ U q (n + 1n /n + 2n )U q (n − 1n /n − 2n )v λ . Upon obvious modifications, these results hold true for v ⋆ µě (l k )f (l k )v µ , for any k ∈ [1, n] and v ⋆ µ ∈ M ⋆ µ , v µ ∈ M µ being g kn -singular vectors. Proof. Replacement off (l k ) with f (l k ) yields a scalar multiplier A l,k (µ), as explained by Lemma 6.1; hence the last product. Factorization of v ⋆ µě (l k )f (l k )v µ is established by Lemma 6.5 and Lemma 6.4; hence the first product with the factorials. Corollary 6.8. Suppose the weight λ is such that B l (λ) = 0 for all l ∈ T. Then the system Recall from [6,7] that the Shapovalov form on M λ is invertible if and only if q 2(λ+ρ,α) ∈ q 2N (respectively, (λ, α) + (ρ, α) ∈ N for U(g)) for all α ∈ R + . In our notation, this criterion translates to q 2λ ij ∈ q 2Z + (respectively, λ ij ∈ Z + ) for all i, j such that i j. On the other hand, one can easily see that the set of zeros of B l (λ), l ∈ T, is larger although contained in the union ∪ α∈R + {λ|q 2(λ,α) ∈ q 2Z } (in the union of integer hyperplanes (λ, α) ∈ Z in the classical case). Therefore, the systemf (l)v λ , l ∈ T, fails to be a basis for special values of weights. We consider this effect in a more detail on the example of sl(3) in the last section. Example 6.9. Here is an example which will play a role in the next section. We need the explicit expression for the matrix coefficient [λ j m + 1] q C 1k (λ)C 1m (λ), (6.11) according to the general formula. As usual, the products are present only if the lower bounds do not exceed the upper bounds. 
The products before C 1k (λ)C 1m (λ) results from Contravariant Shapovalov form In this section we refine the obtained results and show that the dual bases in M ⋆ λ and M λ give rise to an orthogonal basis for the contravariant form on M λ . The key step is to prove that the dynamical positive (negative) root vectors commute within each row. This facilitates the equalitiesě(l k ) =ê(l k ) for all k ∈ [1, n] andě(l) =ê(l) for all l ∈ T. We start with the following simple case, which will be the base for a further induction. Proof. This is an immediate consequence of the Serre relation: We have not found a direct general proof, apart from the above simplest cases, and use a roundabout approach based on already obtained results. Namely, we will show that positive dynamical PBW system vanishes when paired with the element [f 1k ,f 1m ]v λ for all λ. Since it is a basis in M ⋆ λ and the pairing is non-degenerate for generic λ, that will be sufficient to prove the equality [f 1k ,f 1m ] = 0. Proposition 7.2. For every i ∈ [1, n), the algebra U q (n ± in /n ± i+1n ) is commutative. Proof. It is sufficient to check only U q (n − in /n − i+1n ), thanks to the Chevalley involution. This algebra is generated byf ik , k = i, . . . , n. To prove the equality [f ik ,f im ] = 0, we do induction on k − i, where k is assumed to be less than m. The case k − i = 0 is already established by Lemma 7.1. For higher k and m 3, we can restrict to v ⋆ λ U q (n + 1n /n + 2n ). By weight arguments, it is sufficient to calculate the matrix element v ⋆ λê 1mê1kf1kf1m v λ and check it against v ⋆ λê 1mê1kf1mf1k v λ , which is given in Example 6.9. Observe that h 2k commutes withê 1m . The second term gives −[λ 2k In accordance with our convention, the product is replaced by 1 if k = 2. We have used the fact thatf 1m v λ is n + 2k -singular andê 2kf2kf1m v λ = v ⋆ λê 2k ,f 2k f 1m v λ . Also, we have applied Lemma 7.1. The matrix coefficient in the right-hand side is standard, and can be specialized from the general formula (6.11). The contribution of the second term in (7.12) is (7.13) Here we have used C 2k (λ − α 1m ) = C 2k (λ), which is true for k < m. The first term in (7.12) gives The first matrix coefficient is standard and can be extracted from Theorem 6.7. The total contribution of this term to (7.12) is [λ jm + 1] q C 2k (λ)C 1m (λ), (7.14) since C 2k (λ−α 1m ) = C 2k (λ). Let us compute the matrix coefficient v ⋆ λê 1mê2k f 1 e 1f2kf1m v λ = v ⋆ λê 1m f 1ê2kf2k e 1f1m v λ . With the use of the right equalities from Proposition 3.1, we find it equal to by the induction assumption. The total contribution of this term to (7.12) is (7.15) where again the convention about the products is in effect. Pushing every copy of e 1 to the right produces zero contribution of the commutator [e 1 , f l Proof. Put b i the square norm of φ i , i = 1, 2, with respect to the Shapovalov form. Now suppose that φ 1 and φ 2 are not collinear. One can assume that their Gram matrix is diag(b 1 , b 2 ), thanks to the Gram-Schmidt orthogonalization algorithm. The Gram matrix is degenerate if and only if it is so for the subsystem at generic λ and hence of f α φ 1 , φ 2 f α . The standard higher root vectors f ij ∈ U q (g) are known to satisfy the identity f 1 f 2 2n = [2] q f 2n f 1n − f 2 2n f 1 = 0, which easily follows from the Serre relations. Further we need its dynamical version. Proof. The delta symbol is obvious. It is then sufficient to consider the case i = k = 1. This is an immediate consequence of (9.16). 
Combining Corollary 9.2 with Proposition 9.5 yields the desired expression. For classical universal enveloping algebras, this result was obtained in [18].
2014-08-31T11:15:29.000Z
2012-06-16T00:00:00.000
{ "year": 2012, "sha1": "b53b2962dd82f94d52085c552e34f824fc1206ea", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b53b2962dd82f94d52085c552e34f824fc1206ea", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
11601918
pes2o/s2orc
v3-fos-license
A Quantum Time-Space Lower Bound for the Counting Hierarchy We obtain the first nontrivial time-space lower bound for quantum algorithms solving problems related to satisfiability. Our bound applies to MajSAT and MajMajSAT, which are complete problems for the first and second levels of the counting hierarchy, respectively. We prove that for every real d and every positive real epsilon there exists a real c>1 such that either: MajMajSAT does not have a quantum algorithm with bounded two-sided error that runs in time n^c, or MajSAT does not have a quantum algorithm with bounded two-sided error that runs in time n^d and space n^{1-\epsilon}. In particular, MajMajSAT cannot be solved by a quantum algorithm with bounded two-sided error running in time n^{1+o(1)} and space n^{1-\epsilon} for any epsilon>0. The key technical novelty is a time- and space-efficient simulation of quantum computations with intermediate measurements by probabilistic machines with unbounded error. We also develop a model that is particularly suitable for the study of general quantum computations with simultaneous time and space bounds. However, our arguments hold for any reasonable uniform model of quantum computation. Introduction Satisfiability, the problem of deciding whether a given Boolean formula has at least one satisfying assignment, has tremendous practical and theoretical importance. It emerged as a central problem in complexity theory with the advent of NP-completeness in the 1970's. Proving lower bounds on the complexity of satisfiability remains a major open problem. Complexity theorists conjecture that satisfiability requires exponential time and linear space to solve in the worst case. Despite decades of effort, the best single-resource lower bounds for satisfiability on general-purpose models of computation are still the trivial ones -linear for time and logarithmic for space. However, since the late 1990's we have seen a number of results that rule out certain nontrivial combinations of time and space complexity. One line of research [6,7,19,5,20], initiated by Fortnow, focuses on proving stronger and stronger time lower bounds for deterministic algorithms that solve satisfiability in small space. For subpolynomial (i.e., n o(1) ) space bounds, the current record states that no such algorithm can run in time n c for any c < 2 cos (π/7) ≈ 1.8019. A second research direction aims to strengthen the lower bounds by considering more powerful models of computation than the standard deterministic one. Diehl and Van Melkebeek [5] initiated the study of lower bounds for problems related to satisfiability on randomized models with bounded error. They showed that for every integer ℓ ≥ 2, Σ ℓ SAT cannot be solved in time n c by subpolynomial-space randomized algorithms with bounded twosided error for any c < ℓ, where Σ ℓ SAT denotes the problem of deciding the validity of a given fully quantified Boolean formula with ℓ alternating blocks of quantifiers beginning with an existential quantifier. Σ ℓ SAT represents the analogue of satisfiability for the ℓth level of the polynomialtime hierarchy; Σ 1 SAT corresponds to satisfiability. Proving nontrivial time-space lower bounds for satisfiability on randomized algorithms with bounded two-sided error remains open. Allender et al. [2] considered the even more powerful (but physically unrealistic) model of probabilistic algorithms with unbounded error 1 . 
They settled for problems that are even harder than Σ ℓ SAT for any fixed ℓ, namely MajSAT and MajMajSAT, the equivalents of satisfiability and Σ 2 SAT in the counting hierarchy. MajSAT is the problem of deciding whether a given Boolean formula is satisfied for at least half of the assignments to its variables. MajMajSAT is the problem of deciding whether a given Boolean formula ϕ on disjoint variable sets x and y has the property that for at least half of the assignments to x, ϕ is satisfied for at least half of the assignments to y. Recall that Toda [16] proved that the polynomial-time hierarchy reduces to the class PP, which represents polynomialtime probabilistic computations with unbounded two-sided error and forms the first level of the counting hierarchy. Apart from dealing with harder problems, the quantitative strength of the lower bounds by Allender et al. is also somewhat weaker. In particular, they showed that no probabilistic algorithm can solve MajMajSAT in time n 1+o(1) and space n 1−ǫ for any positive constant ǫ. We refer to [13] for a detailed survey of the past work on time-space lower bounds for satisfiability and related problems, including a presentation of the Allender et al. lower bound that is slightly different from the original one. In this paper we study the most powerful model that is considered physically realistic, namely quantum algorithms with bounded error. We obtain the first nontrivial time-space lower bound for quantum algorithms solving problems related to satisfiability. In the bounded two-sided error randomized setting, the reason we can get lower bounds for Σ ℓ SAT for ℓ ≥ 2 but not for ℓ = 1 relates to the fact that we know efficient simulations of such randomized computations in the second level of the polynomial-time hierarchy but not in the first level. In the quantum setting the situation is worse: we know of no efficient simulations in any level of the polynomial-time hierarchy. The best simulations to date are due to Adleman et al. [1], who showed that polynomial-time quantum computations with bounded two-sided error can be simulated in PP. Building on this connection, we bring the lower bounds of Allender et al. to bear on bounded-error quantum algorithms. Our main result shows that either a time lower bound holds for quantum algorithms solving MajMajSAT or a time-space lower bound holds for MajSAT. As a corollary, we obtain a single time-space lower bound for MajMajSAT. Corollary 1. MajMajSAT cannot be solved by a quantum algorithm with bounded two-sided error running in time n 1+o (1) and space n 1−ǫ for any ǫ > 0. Unlike in the deterministic and randomized cases, it is not obvious how to define a model of quantum computation that allows us to accurately measure both time and space complexity. The existing models give rise to various issues. For example, intermediate measurements play a critical role as they are needed for time-space efficient simulations of randomized computations by quantum computations. Several of the known models only allow measurements at the end of the computation but not during the computation. As another example, the classical time-space lower bounds hold for models with random access to the input and memory. This makes the lower bounds more meaningful as they do not exploit artifacts due to sequential access. Extending the standard quantum Turing machine model [3] to accommodate random access leads to complications that make the model inconvenient to work with. 
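To make the two problems defined above concrete, here is a minimal brute-force sketch in Python; it runs in exponential time by construction (MajSAT is PP-complete, so nothing efficient is expected), and the predicate interface `phi` is our own illustrative choice rather than anything from the paper.

from itertools import product

def maj_sat(phi, n):
    # MajSAT: does phi hold for at least half of the 2^n assignments to its n variables?
    satisfying = sum(1 for a in product([False, True], repeat=n) if phi(a))
    return 2 * satisfying >= 2 ** n

def maj_maj_sat(phi, nx, ny):
    # MajMajSAT: for at least half of the assignments to x, is phi(x, .) satisfied
    # for at least half of the assignments to y?
    good_x = sum(
        1
        for x in product([False, True], repeat=nx)
        if 2 * sum(1 for y in product([False, True], repeat=ny) if phi(x, y)) >= 2 ** ny
    )
    return 2 * good_x >= 2 ** nx

# Example: x1 OR x2 is satisfied by 3 of the 4 assignments, so MajSAT accepts.
print(maj_sat(lambda a: a[0] or a[1], 2))   # True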
In Section 2 we discuss these and other issues in detail, and we survey the known models from the literature. We present a model that addresses all issues and is capable of efficiently simulating all other uniform models that may be physically realizable in the foreseeable future. Thus, lower bounds in our model reflect true problem hardness. The main technical novelty for establishing Theorem 1 consists of an improved time-and spaceefficient simulation of quantum computations by unbounded-error probabilistic computations. The previously known simulations, such as the one by Adleman et al., do not deal with intermediate measurements in a space-efficient way. We show how to cope with intermediate measurements in a space-efficient way and without loss in running time. Our construction works even when the sequence of local quantum operations can depend on previous measurement outcomes; i.e., we handle more powerful models than uniform quantum circuits. Our simulation makes use of a result on approximating quantum gates due to Solovay and Kitaev [11]. Theorem 1 follows from our simulation and the Allender et al. lower bound. The quantitative strength of our lower bound derives from the latter; our translation does not induce any further weakening. The rest of this paper is organized as follows. We start with a discussion of the model in Section 2, we derive our results in Section 3, and we conclude with some open problems in Section 4. Throughout we assume basic background in quantum computation; see for example [14,12]. Models of Quantum Computation In this section we develop the model that we use for the exposition of our arguments. Section 2.1 contains a discussion of the issues that arise in choosing a model of quantum computation that accurately reflects time and space complexity. In Section 2.2 we describe how previously studied models fit into our taxonomy. We motivate and precisely define our chosen model in Section 2.3. Although we consider the development of such a model as a contribution of our paper, the crux of our main result can be understood at an abstract level. As such, a reader who would like to quickly get to the heart of our paper can skip Section 2. Issues Our model should capture the notion of a quantum algorithm as viewed by the computer science and physics communities and allow us to accurately measure the resources of time and space. For example, the model should allow us to express important quantum algorithms such as Shor's [15] and Grover's [9] in a way that is natural and faithfully represents their complexities. This forms the overarching issue in choosing a model. Below we discuss eight specific aspects of quantum computation models and describe how the corresponding issues are handled in the classical setting. Sublinear space bounds. Many algorithms have the property that the amount of work space needed is less than the size of the input. Models such as one-tape Turing machines do not allow us to accurately measure the space usage of such algorithms because they charge for the space required to store the input. In the deterministic and randomized settings, sublinear space bounds are accommodated by considering Turing machines with a read-only input tape that does not count toward the space bound and read-write work tapes that do. In the quantum setting, we need a model with an analogous capability. Random access to the input and memory. 
In order to accurately reflect the complexity of computational problems, our model should include a mechanism for random access, i.e., the ability to access any part of the input or memory in a negligible amount of time (say, linear in the length of the address). For example, there is a trivial algorithm for the language of palindromes that runs in quasilinear time and logarithmic space on standard models with random access, but the timespace product of any traditional sequential-access Turing machine deciding palindromes is at least quadratic. The latter result does not reflect the complexity of deciding palindromes, but rather exploits the fact that sequential-access machines may have to waste a lot of time moving their tape heads back and forth. Classical Turing machines can be augmented with a mechanism to support random access; our quantum model should also have such a mechanism. Intermediate measurements. Unlike the previous two issues, intermediate measurements are specific to the quantum setting. In time-bounded quantum computations, it is customary to assume that all measurements occur at the end. This is because intermediate measurements can be postponed by introducing ancilla qubits to store what would be the result of the measurement, thus preventing computation paths with different measurement outcomes from interfering with each other. However, this has a high cost in space -a computation running in time t may make up to t measurements, so the space overhead could be as large as t, which could be exponential in the original space bound. Hence, to handle small space bounds our model should allow intermediate measurements. Indeed, this is crucial for our model to meet the expectation of being at least as strong as randomized algorithms with comparable efficiency parameters; the standard way to "flip a coin" in the quantum setting is to apply a Hadamard gate to a qubit in a basis state and then measure it. Also, many quantum algorithms, such as Shor's factoring algorithm, are naturally described using intermediate measurements. We also need to decide which measurements to allow. Projective measurements in the computational basis are the most natural choice. Should we allow projective measurements in other bases? How about fully general measurements (see Section 2.2.3 in [14]), where the measurement operators need not be projections? General measurements can be performed by introducing ancilla qubits (at a cost in space), performing a change of basis (at a cost in time), and doing a projective measurement in the computational basis, one qubit at a time. It is reasonable to charge the complexity of these operations to the algorithm designer, so we are satisfied with allowing only single-qubit measurements in the computational basis. Obliviousness to the computation history. Computations proceed by applying a sequence of local operations to data. We call a computation nonoblivious if at each step, which local operation to use and which operands to apply it to may depend on the computation history. A generic deterministic Turing machine computation is nonoblivious. We can view each state as defining an operation on a fixed number of tape cells, where the operands are given by the tape head locations. In each step, the outcome of the applied operation affects the next state and tape head locations, so both the operation and the operands can depend on the computation history. 
In contrast, a classical circuit computation is oblivious because neither the operation (gate) nor the operands (wires connected to the gate inputs) depend on the computation history (values carried on the wires). In the randomized and quantum settings, the notion of a computation history becomes more complicated because there can be many computation paths. In the randomized setting, applying a randomized operation to a configuration may split it into a distribution over configurations, and the randomized Turing machine model allows the next state and tape head locations to depend on which computation path was taken. In the quantum setting, applying a quantum operation to a basis state may split it into a superposition over several basis states, and general nonoblivious behavior would allow the next operation and operands to depend on which computation path was taken. However, it is unclear whether such behavior is physically realizable, as currently envisioned technologies all select quantum operations classically. An intermediate notion of nonobliviousness, where the operations and operands may depend on previous measurement outcomes but not on the quantum computation path, does seem physically realistic. Classical control. There is a wide spectrum of degrees of interaction between a quantum computation and its classical control. On the one hand, one can imagine a quantum computation that is entirely "self-sufficient," other than the interaction needed to provide the input and observe the output. On the other hand, one can imagine a quantum computation that is guided classically every step of the way. Self-sufficiency is inherent to computations that are nonoblivious to the quantum computation path, whereas measurements are inherently classically controlled operations. Incorporating intermediate measurements into computations that are nonoblivious to the quantum computation path would require some sort of global coordination among the quantum computation paths to determine when a measurement should take place. Syntax. Our model should be syntactic, meaning that identifying valid programs in the model is decidable. If we are interested in bounded-error computations, then we cannot hope to decidably distinguish programs satisfying the bounded-error promise from those that do not. However, we should be able to distinguish programs that evolve according to the postulates of quantum mechanics from those that do not. Allowing nonobliviousness to the quantum computation path complicates this syntax check. If different components of the superposition can undergo different unitary operations then the overall operation is not automatically unitary, due to interference. Extra conditions on the transition function are needed to guarantee unitarity. Complexity of the transition amplitudes. Care should be taken in specifying the allowable transition amplitudes. In the randomized setting, it is possible to solve undecidable languages by encoding the characteristic sequences of these languages in the transition probabilities. This problem is usually handled by using a certain universal set of elementary randomized operations, e.g., an unbiased coin flip. In the quantum setting, the same problem arises with unrestricted amplitudes. Again, one can solve the problem by restricting the elementary quantum operations to a universal set. However, unlike in the randomized setting, there is no single standard universal set like the unbiased coin flip with which all quantum algorithms are easy to describe. 
Algorithm designers should be allowed to use arbitrary local operations provided they do not smuggle hardto-compute information into the amplitudes. Absolute halting. In order to measure time complexity, we should use a model that naturally allows any algorithm to halt absolutely within some time bound t. In the randomized setting, one can design algorithms whose running times are random variables and may actually run forever. We can handle such algorithms by clocking them, so that they are forced to halt within some fixed number of time steps. Our quantum model should provide a similar mechanism. Earlier Models Now that we have spelled out the relevant issues and criteria, we consider several previously studied models as candidates. Bernstein and Vazirani [3] laid the foundations for studying quantum complexity theory using quantum Turing machines. Their model uses a single tape and therefore cannot handle sublinear space bounds. Like classical one-tape Turing machines, their model is sequential-access. It does not allow intermediate measurements. On the other hand, their model is fully nonoblivious: the transition function produces a superposition over basis configurations, and the state and tape head location may be different for different components of the superposition. Their model represents the self-sufficient extreme of the classical control spectrum. In their paper, Bernstein and Vazirani prove that their model is syntactic by giving a few orthogonality constraints on the entries of the transition function table that are necessary and sufficient for the overall evolution to be unitary. These conditions are somewhat unnatural, and can be traced back to the possibility of nonobliviousness to the quantum computation path. Bernstein and Vazirani restrict the transition amplitudes by requiring that the first k bits of each amplitude are computable deterministically in time poly(k). Their model is nontrivial to clock; they require that the transition function be designed in such a way that the machine always halts, meaning that it reaches a superposition in which all nonhalting basis configurations have zero amplitude. Bernstein and Vazirani detail how to design such mechanisms. In [18], Watrous considers a model similar to Bernstein and Vazirani's, but with one readwrite work tape and a read-only input tape not counting toward the space bound. The model naturally allows for sublinear space bounds, but it is still sequential-access. It allows intermediate measurements but only for the halting mechanism: a special register is measured after each time step, with the outcome indicating "halt and output 1", "halt and output 0", or "continue". The model is nonoblivious like the Bernstein-Vazirani model. It has more classical interaction due to the halting mechanism, but this is arguably not "classical control." The syntax conditions on the transition function are similar to those for the Bernstein-Vazirani model. The results in [18] require the transition amplitudes to be rational, which is somewhat unappealing since one may often wish to use Hadamard gates, which have irrational amplitudes. Similar to the Bernstein-Vazirani model, the model is nontrivial to clock. In fact, the results in [18] rely on counting an infinite computation as a rejection. The main issue with the above models for our purposes is their sequential-access nature. It is possible to handle this problem by imposing a random-access mechanism. 
However, the conditions on the entries of the transition function table characterizing unitary evolution become more complicated and unnatural, making the model inconvenient to work with. Again, the culprit is the nonobliviousness to the quantum computation path. Since this behavior does not appear to be physically realizable in the foreseeable future anyway, the complications arising from it are in some sense unjustified. In [17], Watrous considers a different model of space-bounded quantum computation. This model is essentially a classical Turing machine with an additional quantum work tape and a fixedsize quantum register. Sublinear space bounds are handled by charging for the space of the classical work tape and the quantum work tape but not the input tape. All three tape heads move sequentially. This model handles intermediate measurements. It is oblivious to the quantum computation path; the state and tape head locations cannot be in superposition with the contents of the quantum work tape. However, the computation is nonoblivious to the classical computation history, including the measurement outcomes. The finite control is classical; in each step it selects a quantum operation and applies it to the combination of the qubit under the quantum work tape head together with the fixed-size register. The register is needed because there is only one head on the quantum work tape, but a quantum operation needs to act on multiple qubits to create entanglement. The allowed operations come from the so-called quantum operations formalism (see Chapter 8 of [14]), which encompasses unitary operations and general measurements, as well as interaction with an external environment. Each quantum operation produces an output from a finite alphabet -the measurement outcome in the case of a measurement. This outcome influences the next (classical) transition. This model is syntactic just like classical Turing machines, with the additional step of testing that each quantum operation satisfies the definition of a valid quantum operation. For his constructions, Watrous needs the transition amplitudes to be algebraic. This model is trivial to clock, since all the control is done classically and thus the machine can halt in a fixed number of steps, just as in the classical setting. The latter model is convenient to work with since the essence of the quantum aspects of a computation are isolated into local operations that are chosen classically and applied to a simple quantum register. This models the currently envisioned realizations of quantum computers. We adopt this model for the exposition of our results, but we need to make some modifications in order to address the following issues. • Algorithms like Grover's require quantum access to the input, i.e., an operation that allows different basis states in a superposition to access different bits of the input simultaneously. On inputs of length n, this is done with a query gate that effects the transformation |i |b → |i |b ⊕ , and x i is the ith bit of the input. The model from [17] does not have such an operation and thus cannot express algorithms like Grover's. While this operation seems no more physically realistic than nonobliviousness to the quantum computation path if we view the input as stored in a classical memory, it does make sense when the input is actually the output of another computation. For these reasons, we include such an operation in our model. 
• We want our model to have random access to emphasize the fact that our time-space lower bound does not exploit any model artifacts due to sequential access. We can make the model from [17] random-access by allowing each of the tape heads to jump in unit time to a location whose address we have classically computed, just as can be done for deterministic and randomized Turing machines. • The quantum operations used in the model from [17] are more general than we wish to consider. Since we are focusing on the computational aspects of the model, we choose to restrict the set of allowed operations to unitary operations and projective measurements in the computational basis. The quantum operations formalism models the evolution of open quantum systems, which is of information-theoretic rather than algorithmic concern and can be simulated with unitary operations by introducing an additional "environment" system at a cost in space. • The restriction to algebraic transition amplitudes is unnecessary in the present setting. We feel that a reasonable way to restrict the amplitudes is the one chosen by Bernstein and Vazirani; i.e., the first k bits of each amplitude should be computable deterministically in time poly(k). Our Model For concreteness, we now describe and motivate the particular model we use for the exposition of our arguments. Our model addresses all the issues listed in Section 2.1, and is an adaptation of Watrous's model from [17], as described at the end of Section 2.2. In terms of obliviousness, our model corresponds to the physically realistic middle ground where a classical mechanism determines which quantum operation to apply based on the previous measurement outcomes but independent of the actual quantum computation path. Our arguments are robust with respect to the details of the model as long as it has the latter property. In particular, we can handle uniform quantum circuits. Our results also hold for more general models allowing nonobliviousness to the quantum computation path, but this requires more technical work; see the remarks in Section 3.4. Model Definition We define a quantum Turing machine as follows. There are three semi-infinite tapes: the input tape, the classical work tape, and the quantum work tape. Each cell on the input tape holds one bit or a blank symbol. Each cell on the classical work tape holds one bit. Each cell on the quantum work tape holds one qubit. The input tape contains the input, a string in {0, 1} n , followed by blanks, and the classical and quantum work tapes are initialized to all 0's. There are a fixed number of tape heads, each of which is restricted to one of the three tapes. There may be multiple heads moving independently on the same tape. The finite control, the operations on the classical work tape, and all head movements are classical; each operation on the quantum work tape can be either a unitary operation or a singlequbit projective measurement in the computational basis. In each step of the computation, the finite control of the machine is in one of a finite number of states. Each state has an associated classical function, which is applied to the contents of the cells under the heads on the classical work tape, and an associated quantum operation, which is applied to the contents of the cells under the heads on the quantum work tape. 
The next state of the finite control and the head movements are determined by the current state, the contents of the cells under the input tape heads and classical work tape heads at the beginning of the computation step, and the measurement outcome if the quantum operation was a measurement. Each head moves left one cell, moves right one cell, stays where it is, or jumps to a new location at a precomputed address that is written on the classical work tape between two of the classical work tape heads. The latter type of move is classical random access. We also allow "quantum random access" to the input by optionally performing a query that effects the transformation |i |b → |i |b ⊕ x i on the qubits between two of the quantum work tape heads, where i ∈ {0, 1} * is an address located on the quantum work tape, b ∈ {0, 1}, and x i is the ith bit of the input of length n or 0 if i > n. Among the states of the finite control are an "accept" state and a "reject" state, which cause the machine to halt. Although not needed in this paper, the machine can be augmented with a one-way sequential-access write-only classical output tape in order to compute nonboolean functions. Let us motivate our model definition. In terms of physical computing systems, the input tape corresponds to an external input source, the classical work tape corresponds to classical memory, and the quantum work tape corresponds to quantum memory. The bits and qubits under the heads correspond to the data being operated on in the CPU. We use multiple heads on each tape for several reason. One reason is that creating entanglement requires multiple-qubit operations and hence multiple quantum work tape heads. Another reason is that having multiple heads offers a convenient way of formalizing random access. Since we are studying how algorithm performance scales with the input size, addresses have non-constant length and thus cannot fit under the tape heads all at once. A number of mechanisms are possible for indicating where an address is stored for random access. The one we have chosen, namely that the address is delimited by two tape heads, is artificial and is chosen only for convenience because it makes the model simple and clean. Another possible mechanism is to associate with each head a special index tape used for writing addresses; see [13] for a discussion of this type of model. A minor issue arises with our multiple head approach: an operation on a work tape may not be well-defined if two of the heads are over the same cell. Rather than requiring programs to avoid this situation, which would make the model non-syntactic, we can just assume that no operation is performed on the violating work tape when this situation arises. We allow the heads to move sequentially because if we only allowed random access, then constructing an address would require storing a pointer to the location where that address is stored. The pointer would have have nonconstant size, so we would need a pointer to that pointer, and so on. This chicken-and-egg problem does not appear in physical computing systems, and we explicitly avoid it by allowing sequential traversal of memory without having to "remember" where the head is. Complexity Classes The running time of a quantum Turing machine at input length n is the maximum over all inputs of length n and over all computation paths of the number of steps before the machine halts. 
The space usage is the maximum over all inputs of length n and over all computation paths of the largest address of a classical work tape head or quantum work tape head during the computation. Either the time or the space may be infinite. Note that we maximize over all computation paths, even ones that occur with probability 0 due to destructive interference. Our definition of space usage allows the space to be exponential in the running time, since in time t a machine can write an address that is exponential in t and move a head to that location using the random-access mechanism. However, the space usage can be reduced to at most the running time with at most a polylogarithmic factor increase in the latter by compressing the data and using an appropriate data structure to store (old address, new address) pairs. (See Section 2.3.1 of [13] for a similar construction.) We are now set up to define quantum complexity classes within our model.

Definition. BQTISP(t, s) is the class of languages L such that for some quantum Turing machine M running in time O(t) and space O(s),
• if x ∈ L then Pr(M accepts x) ≥ 2/3, and
• if x ∉ L then Pr(M accepts x) ≤ 1/3.
We also require that for each entry in the matrix representation of each unitary operation of M in the computational basis, the first k bits of the real and imaginary parts are computable deterministically in time poly(k). We define BQTIME(t) similarly but without the space restriction.

As evidence in support of our model of choice, we note that the following results hold in our model.
• The inclusion BPTISP(t, s) ⊆ BQTISP(t, s) holds because a quantum algorithm in our model can directly simulate a randomized algorithm; the only issue is producing unbiased coin flips. For this, the simulation can apply a Hadamard gate to one qubit on the quantum work tape and then measure it. This qubit can be reused to generate as many random bits as needed.
• Grover's algorithm shows that OR ∈ BQTISP(n^{1/2} · polylog(n), log n), where OR denotes the problem of computing the disjunction of the n input bits.
• Shor's algorithm shows that a nontrivial factor of an integer of bit length n can be computed in time O(n^3 · polylog(n)) and space O(n) with error probability at most 1/3 in our model.

Time-Space Lower Bound

In this section we prove our results. Section 3.1 contains an outline of the two main steps of the proof. In Section 3.2 we argue that we can restrict our attention to a special case of our model using a finite universal set of gates. We show in Section 3.3 how to efficiently simulate this special case on unbounded-error probabilistic algorithms.

Results and Proof Outline

Using the notation introduced in Section 2.3.2, we can formalize Theorem 1 and Corollary 1 as follows. Theorem 1 follows immediately from the following two results. The first gives a lower bound for MajSAT and MajMajSAT on unbounded-error probabilistic algorithms, and the second translates this lower bound to the quantum setting by giving a time- and space-efficient simulation of quantum algorithms by unbounded-error probabilistic algorithms. Recall that PTISP(t, s) denotes the class of languages decidable by unbounded-error probabilistic algorithms running in time O(t) and space O(s). Lemma 2 is our main technical contribution, and its proof occupies the remainder of Section 3. The first step is to show that we can assume without loss of generality that our model only uses a certain finite universal set of quantum gates.
A key ingredient is the Solovay-Kitaev theorem [11], which shows how to approximate any single-qubit unitary gate to within ǫ in the 2-norm sense using only polylog(1/ǫ) gates from a finite universal set. The efficiency afforded by the Solovay-Kitaev theorem is critical for obtaining our lower bound. The second step is to simulate this special case of our model time-and space-efficiently with unbounded-error probabilistic algorithms. Our strategy builds on known simulations of quantum computations without intermediate measurements by probabilistic machines with unbounded error [1,8]. The basic idea of these simulations is to write the final amplitude of a basis state as a simple linear combination of #P functions, where each #P function counts the number of quantum computation paths leading to that state with a certain path amplitude. Taking advantage of our choice of universal set, we can use simple algebraic manipulations to express the probability of acceptance as the difference between two #P functions, up to a simple common scaling factor. Standard techniques then result in a time-and space-efficient simulation by an unbounded-error probabilistic machine. The above approach only handles unitary operations with one final measurement. To handle intermediate measurements, we first adapt this approach to capture the probability of observing any particular sequence of measurement outcomes. The acceptance probability can then be expressed as a sum over all sequences of measurement outcomes that lead to acceptance, where each term is the scaled difference of two #P functions. We can combine those terms into a single one using the closure of #P under uniform exponential sums. However, the usual way of doing this -nondeterministically guess and store a sequence and then run the computation corresponding to that sequence -is too space-inefficient. To address this problem, we note that the crux of the construction corresponds to multiplying two #P functions on the same input. The standard approach runs the two computations in sequence, accepting iff both accept. We argue that we can run these two computations in parallel and keep them in synch so that they access each bit of the guessed sequence at the same time, allowing us to reference each bit only once. We can then guess each bit when needed during the final simulation and overwrite it with the next guess bit, allowing us to meet the space constraint. Regarding the conditions in Lemma 2, we assume t and s are at least logarithmic so that they dominate any logarithmic terms arising from indexed access to the input. We henceforth ignore the technical constructibility constraints on t and s. For Theorem 1, we only need to consider "ordinary" polynomially-bounded functions, which are computable in time polynomial in the length of the output written in binary, which is sufficient for our purposes. Efficient Approximation With a Universal Set A BQTISP(t, s) computation can be viewed as applying a sequence of O(t) classically selected quantum gates to a register of O(s) qubits. There are three types of gates: • Unitary gates selected from a finite library of gates associated with the machine. • Query gates, which effect the transformation |i |b → |i |b ⊕ x i , where i is an index into the input x. • Measurement gates, which perform a single-qubit projective measurement in the computational basis. The first step in the proof of Lemma 2 is to show that we can restrict our attention to machines whose library is a fixed universal set. 
It is well-known that every unitary transformation can be effected exactly using CNOT gates and single-qubit gates. Defining BQTISP′(t, s) to be BQTISP(t, s) with the restriction that each gate in the library either is CNOT or acts on only one qubit, we have the following. For completeness, we sketch a proof of Lemma 3 in Appendix B. It is also well-known that finite universal sets exist which can approximate any unitary operation to arbitrary accuracy. We say that a single-qubit unitary operation Ũ ǫ-approximates a single-qubit unitary operation U if ||Ũ − e^{iθ} U|| ≤ ǫ for some (irrelevant) global phase factor e^{iθ}. We say that a set S of single-qubit unitary gates is universal for single-qubit unitary gates if for all single-qubit unitary gates U and all ǫ > 0 there is a sequence U_1, . . . , U_ℓ of gates from S such that the operation U_1 · · · U_ℓ ǫ-approximates U. We use the fact that the set {F, H} is universal for single-qubit unitary gates, where H is the Hadamard gate and F = diag(1, 3/5 + (4/5)i) in the computational basis. We can restrict our attention to quantum Turing machines with library {CNOT, F, H} by replacing each single-qubit gate in the library of a BQTISP′ machine with an approximation using F and H gates. We need to satisfy the following requirements:
• The transformation should not increase the number of gates applied by too much.
• The new sequence of gates should still be efficiently computable by a classical algorithm.
• The probability an input x is accepted should not change by too much when we apply the transformation.
The following key theorem allows us to meet these constraints.

Lemma 4 (Solovay and Kitaev [11]). If S is universal for single-qubit unitary gates and is closed under adjoint, then for all single-qubit unitary gates U and all ǫ > 0 there is a sequence of at most polylog(1/ǫ) gates from S that ǫ-approximates U. Moreover, such a sequence can be computed deterministically in time polylog(1/ǫ) provided the first k bits of the matrix entries of U and the gates in S are computable in time poly(k).

The proof by Solovay and Kitaev gives an algorithm for computing an approximation in time polylog(1/ǫ), ignoring the complexity of arithmetic (see [4] and Section 8.3 of [12]). We cannot do exact arithmetic since the entries of our gates may require infinitely many bits to specify, but in each step of the algorithm it suffices to work with poly(ǫ)-approximations to all of the matrices. Computing each matrix entry to O(log(1/ǫ)) bits suffices for this because the matrices have only constant size. By our complexity constraint on the transition amplitudes and the fact that the entries of F and H are also efficiently computable, this incurs only a polylog(1/ǫ) time (and space) overhead. We argue that approximating each single-qubit unitary gate to within Θ(1/t) ensures that the probability of acceptance of a quantum Turing machine running in time t only changes by a small amount. Note that our model is nonoblivious to measurement outcomes, so there may be exponentially many classical computation paths corresponding to the different measurement outcomes. A simple union bound over these paths does not work since it would require exponentially small precision in the approximations, which we cannot afford. However, the approximation errors are relative to the probability weights of the paths. As a result, the overall error cannot grow too large.
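As a quick sanity check on the gates just introduced, the following sketch (illustrative only) builds H and the diagonal gate F and verifies that F, F† and H are unitary. The nontrivial entry of F is the Gaussian rational (3 + 4i)/5, which is what later allows a single common denominator of 5 to be factored out of every application of F.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
F = np.array([[1, 0], [0, 3 / 5 + 4j / 5]])

for name, U in [("H", H), ("F", F), ("F^dagger", F.conj().T)]:
    print(name, "unitary:", np.allclose(U.conj().T @ U, np.eye(2)))

# 3-4-5 is a Pythagorean triple, so 3/5 + (4/5)i lies on the unit circle and
# F acts as a pure phase on |1>.
print("|3/5 + 4i/5| =", abs(3 / 5 + 4j / 5))
```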
Define BQTISP ′′ (t, s) to be BQTISP(t, s) with the restriction that the library of gates is {CNOT, F, F † , H, I}. We include "identity gates" I for technical reasons -we need to allow computation steps that do not change the state of the quantum tape. We include F † gates because the Solovay-Kitaev theorem requires the universal set to be closed under adjoint. Lemma 5. For all sufficiently constructible t and all s, We defer the proof of Lemma 5 to Appendix A. So consider a language L ∈ BQTISP ′′ (t, s) and an associated quantum Turing machine M . We fix an arbitrary input x and assume for simplicity of notation that on input x, M uses exactly s qubits and always applies exactly t quantum gates, exactly m of which are measurements, regardless of the observed sequence of measurement outcomes. This can be achieved by padding the computation with gates that do not affect the accept/reject decision of M . Computation Tree PP can be characterized as the class of languages consisting of inputs for which the difference of two #P functions exceeds a certain polynomial-time computable threshold. Thus, we would like to express the acceptance probability of M on input x as the ratio of the difference of two #P functions and some polynomial-time computable function. To facilitate the argument, we model the computation of M on input x as a tree, analogous to the usual computation trees one associates with randomized or nondeterministic computations. We can express the final amplitude of a basis state as a linear combination of #P functions, where each #P function counts the number of root-to-leaf paths in the tree that lead to that basis state and have a particular path amplitude. The coefficients in this linear combination are the path amplitudes, which are the products of the transition amplitudes along the path. In order to rewrite the linear combination as a ratio of the above type, we guarantee certain properties of the transition amplitudes in the tree. • First, our choice of universal set allows us to cancel a common denominator out of any given gate in such a way that the numerators become Gaussian integers. We can make the product of the common denominators the same for all full computation paths, and we will eventually absorb it in the polynomial-time computable threshold function. • Second, the Gaussian integers, such as the 3 + 4i numerator in the F gate, are handled by separating out real and imaginary parts, as well as positive and negative parts, and by multiplicating nodes such that we effectively only need to consider numerators in {1, −1, i, −i}. By multiplicating nodes we mean that we allow one node to have multiple children representing the same computational basis state. For example, the 3 + 4i numerator results in seven children. We formally define the computation tree for our fixed input x as follows. It has depth t. Each level τ = 0, . . . , t represents the state of the quantum tape after the τ th gate is applied and before the (τ + 1)st gate is applied. Each node v has five labels: Note that α(v) will be the product of the numerators of the transition amplitudes along the path that leads to v, and similarly for β(v). Labels of nodes across a given level need not be unique; if v and u are at the same level and σ(v) = σ(u) and µ(v) = µ(u), then v and u represent interference. We now define the tree inductively as follows. 
The root node v is at level τ (v) = 0 and has µ(v) = ǫ representing that no measurements have been performed yet, σ(v) = 0 s representing the initial state, and α(v) = β(v) = 1 representing that |0 s has amplitude 1 initially. Now consider an arbitrary node v. If τ (v) = t then v is a leaf. Otherwise, v has children at level τ (v) + 1 that depend on the type and operands of (τ (v) + 1)st gate applied given that µ(v) is observed. Let G denote this gate. • If G = H then v has two children v 0 and v 1 . Suppose G is applied to the jth qubit and let σ(v 0 ) and σ(v 1 ) be obtained from σ(v) by setting the jth bit to 0 for σ(v 0 ) and to 1 for σ(v 1 ). Let . The 1 and −1 multipliers for the α-labels correspond to the transition amplitudes of G, except that the common 1/ √ 2 has been factored out and absorbed in the β-label. • If G = F then we consider two cases. Suppose G is applied to the jth qubit. If σ(v) j = 1 then v has seven children, all with the same σ-label σ(v). Three of the children have α-label α(v) and the other four have α-label i · α(v), and all children have β-label 5β(v). This corresponds to an amplitude of 3+4i 5 , where the numerator 3 + 4i has been spread out across multiple children so as to maintain the property that all α-labels are in {1, −1, i, −i}. If σ(v) j = 0 then v has five children, again all with the same σ-label σ(v), and now all with the same α-label α(v) and all with β-label 5β(v). This corresponds to an amplitude of 5/5, so that a common denominator of 5 can be used for all nodes resulting from the application of G. • If G = F † then the children of v are constructed as in the case G = F except that the children with α-label i · α(v) now have α-label −i · α(v). For the cases where G is unitary, we put µ(u) = µ(v) for all children u. If G is a measurement gate then we put µ(u) = µ(v)σ(v) j , where u is the unique child of v and j is the index of the qubit measured by G. Note that the denominator β(v) can be written as , where f (v) denotes the number of F and F † gates along the path from the root to v, and h(v) the number of H gates. In fact, f (v) and h(v) can be viewed as functions of τ (v) and µ(v) only, and we will write f (τ, µ) and h(τ, µ) accordingly. This is because the sequence of gates that leads to a node v only depends on µ(v) (and on the fixed input x). The latter reflects the obliviousness of the model to the quantum computation path. In order to describe how the computation tree reflects the evolution of the quantum tape, we introduce the following notation: Suppose we run M but do not renormalize state vectors after measurements. Then after τ gates have been applied, we have a vector for each sequence of measurement outcomes µ that could have occurred during the first τ steps. The nodes in V τ,µ together with their amplitudes give the vector for µ, since these are exactly the nodes whose computation paths are consistent with the measurement outcomes µ 1 · · · µ |µ| . More precisely, an inductive argument shows that the vector for µ equals v∈Vτ,µ α(v)|σ(v) . The squared 2-norm of each such vector equals the probability p µ of observing µ. In particular, at the end of the computation we obtain the following key property. We present a formal proof of Claim 1 in Appendix A. 
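The bookkeeping for how an F or F† gate multiplies nodes can be verified with a few lines of arithmetic. The sketch below (illustrative; it covers only the F and F† cases described above) lists the child α-label multipliers and the factor contributed to the β-label, and checks that summing the children and dividing by that factor recovers the actual transition amplitude, with every child α-label staying in {1, −1, i, −i} times the parent label.

```python
def f_children(alpha, bit, dagger=False):
    """Child alpha-labels and beta multiplier when F (or F-dagger) hits a qubit
    whose current value is `bit`, following the tree rules above."""
    if bit == 0:
        return [alpha] * 5, 5                    # amplitude 5/5 = 1
    unit = -1j if dagger else 1j
    return [alpha] * 3 + [unit * alpha] * 4, 5   # amplitude (3 + 4i)/5, or (3 - 4i)/5 for F-dagger

for dagger in (False, True):
    for bit in (0, 1):
        kids, denom = f_children(1, bit, dagger)
        amp = sum(kids) / denom
        true = 1 if bit == 0 else ((3 - 4j) / 5 if dagger else (3 + 4j) / 5)
        assert amp == true
        name = "F-dagger" if dagger else "F"
        print(f"{name} on |{bit}>: {len(kids)} children, recovered amplitude {amp}")
```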
Machine Construction With Claim 1 in hand, we now show how to construct a probabilistic machine N running in time O(t) and space O(s + log t) such that for all inputs x, • if Pr(M accepts x) > 1/2 then Pr(N accepts x) > 1/2, and • if Pr(M accepts x) < 1/2 then Pr(N accepts x) < 1/2. This suffices to prove Lemma 2. We first construct nondeterministic machines M 1 , M −1 , M i , M −i , each taking as input a triple (x, µ, σ) where x ∈ {0, 1} n , µ ∈ {0, 1} m , and σ ∈ {0, 1} s . (Recall that most of our notation, such as m and s, is with reference to the particular input x.) For each α ∈ {1, −1, i, −i}, M α will run in time O(t) and space O(s + log t) and satisfy #M α (x, µ, σ) = |V t,µ,σ,α |, where #M α (x, µ, σ) denotes the number of accepting computation paths of M α on input (x, µ, σ). Since t is constructible, we can assume without loss of generality that all machines are constructed so as to have exactly 2 g computation paths for some constructible function g = O(t). This allows us to compare numbers of accepting paths to numbers of rejecting paths. We simply have M α (x, µ, σ) nondeterministically guess a root-to-leaf path in the computation tree. The only information about the current node v it needs to keep track of is σ(v) and α(v), taking space O(s). It keeps a pointer into µ, taking space O(log t). It determines the correct sequence of gates by simulating the classical part of M , taking O(t) time and O(s) space. When processing a measurement gate G, M α checks that applying G to the current σ(v) yields the next bit of µ. It rejects if not and otherwise continues, using that bit of µ as the measurement outcome. When it reaches a leaf v, M α checks that σ(v) = σ and α(v) = α and accepts if so and rejects otherwise. As constructed, M α has the desired behavior. Fix µ ∈ {0, 1} m . By Claim 1, the probability of observing µ satisfies where M + is a nondeterministic machine that guesses α ∈ {1, −1, i, −i} and then runs two copies of M α , accepting iff both accept, and M − is a nondeterministic machine that guesses α ∈ {1, −1, i, −i} and then runs a copy of M α and a copy of M −α , accepting iff both accept. We run the copies in parallel, keeping them in synch so that they access each bit of µ at the same time. Note that since M + and M − can reject after seeing a single disagreement with µ, the two copies being run will apply the same sequence of gates and thus access each bit of µ at the same time. It follows that both M + and M − need to reference each bit of µ only once. As we show shortly, this is critical for preserving the space bound. Both M + and M − run in time O(t) and space O(s + log t). In order to capture the probability of acceptance of M , we would like to sum over all complete sequences of measurement outcomes µ that cause M to accept. We assume without loss of generality that f (t, µ) and h(t, µ) are independent of µ, say f (t, µ) = f and h(t, µ) = h for all µ, so that the scaling factor 1/25 f (t,µ) 2 h(µ) can be factored out of this sum. To achieve this, we can modify M so that it counts the number of F and F † gates and the number of H gates it applies during the computation and then applies some dummy gates at the end to bring the counts up to the fixed values f and h. We construct nondeterministic machines N + and N − , both running in time O(t) and space O(s + log t), such that and We have N + (x) run M + (x, µ, σ) for a nondeterministically guessed µ ∈ {0, 1} m and σ ∈ {0, 1} s and accept iff M + accepts and µ causes M to accept, and similarly for N − . 
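The reason that guessing α and running two synchronized copies of M_α (for M_+), or a copy of M_α together with a copy of M_{−α} (for M_−), captures a squared modulus is a short algebraic identity. Writing n_α for #M_α(x, µ, σ), the reading used here (an interpretation of the construction above, not a quote) is that #M_+ = Σ_α n_α² and #M_− = Σ_α n_α n_{−α}; the sketch below checks numerically that |n_1 − n_{−1} + i(n_i − n_{−i})|² = #M_+ − #M_−, which is the per-(µ, σ) building block of the elided expression for p_µ.

```python
import random

random.seed(0)
alphas = [1, -1, 1j, -1j]

for _ in range(1000):
    n = {a: random.randrange(0, 50) for a in alphas}       # n_a = #M_a(x, mu, sigma)
    amplitude_sq = abs(n[1] - n[-1] + 1j * (n[1j] - n[-1j])) ** 2
    m_plus = sum(n[a] ** 2 for a in alphas)                 # guess a, run M_a twice
    m_minus = sum(n[a] * n[-a] for a in alphas)             # guess a, run M_a and M_{-a}
    assert round(amplitude_sq) == m_plus - m_minus
print("|sum of alpha-labels|^2 = #M+ - #M- held on all random trials")
```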
Since every accepting execution of M + or M − follows an execution of M with measurement outcomes µ, we know at the end whether µ causes M to accept. However, letting N + just nondeterministically guess µ and σ and then run M + (x, µ, σ) does not work because it takes too much space to store µ. Since M + and M − were constructed in such a way that each bit of µ is only referenced once, we can nondeterministically guess each µ j when needed and overwrite the previous µ j−1 . The space usage of σ is not an issue, so σ can be guessed and stored at any time. Constructed in this way, N + and N − have the desired properties. It follows that the probability M accepts x is Thus, We can now use a standard technique to obtain the final PTISP simulation N . We assume without loss of generality that h ≥ 1. Recall that we assume N + and N − each always have exactly 2 g computation paths for some constructible function g = O(t). By nondeterministically picking N + or N − to run, and flipping the answer if N − was chosen, we get #N + (x) − #N − (x) + 2 g accepting computation paths. We can generate 2 g+1 dummy computation paths, exactly 2 g + 25 f 2 h−1 of which reject, to shift the critical number of accepting paths to exactly half the total number of computation paths. To do this very time-and space-efficiently, we take advantage of our use of a universal set, which gives the number of rejecting dummy paths a simple form. We have N nondeterministically guess g + 1 bits; if the first bit is 0 it rejects, and otherwise it ignores the next h − 1 bits, groups the next 6f bits into groups of 6 forming a number in {0, . . . , 31}, accepts if any group is at least 25, and otherwise rejects iff the remaining guess bits are 0. Since t is constructible, we can take f = O(t) and h = O(t) to be constructible functions so that N can compute f and h without affecting the complexity parameters. As constructed, N runs in time O(t) and space O(s + log t) and accepts x with probability greater than 1/2 if x ∈ L and with probability less than 1/2 if x ∈ L. This finishes the proof of Lemma 2. Remarks Notice that the proof of Lemma 2 shows that the unbounded-error version of BQTISP ′′ (t, s) is contained in PTISP(t, s + log t). Since Theorem 1 only operates at the granularity of polynomial time and space bounds, it would suffice to have Lemma 5 show that BQTISP ′ (t, s) is contained in the unbounded-error version of BQTISP ′′ (t 1+o(1) , s + t o (1) ). This allows us to prove Theorem 1 under a more relaxed definition of BQTISP(t, s): • We could relax the error probability to 1/2 − 1/poly(t) and relax the time for computing the first k bits of the amplitudes to 2 o(k) , since the overhead in computing amplitudes would still be subpolynomial and the Solovay-Kitaev algorithm could still produce 1/poly(t)approximations, for arbitrarily high degree polynomials, in t o(1) time. The argument in Lemma 5 still proves that the error probability remains less than 1/2 in this case. • Alternatively, we could keep the amplitude efficiency at poly(k) and relax the error probability to 1/2 − 1/2 t o(1) ; then the Solovay-Kitaev algorithm would need to compute 1/2 t o(1)approximations, which would still only take t o(1) time. A natural goal is to strengthen Lemma 2 to unbounded-error quantum algorithms; the problem is that the error probability gets degraded during the approximation process of Lemma 5 and thus needs to be bounded away from 1/2 by a nonnegligible amount to begin with. 
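Stepping back to the end of the proof of Lemma 2 above: for readers reconstructing the elided displays there, the final accounting can be written out as follows. This is a reconstruction under the normalization Pr(M accepts x) = (#N_+(x) − #N_−(x)) / (25^f 2^h) suggested by the surrounding text, not the authors' exact display.

```latex
% Assumed normalization: \Pr(M\ \text{accepts}\ x) = \frac{\#N_+(x) - \#N_-(x)}{25^f\,2^{h}} .
\Pr(M\ \text{accepts}\ x) > \tfrac12
  \;\Longleftrightarrow\;
  \#N_+(x) - \#N_-(x) > 25^f\, 2^{h-1} .
% Choosing N_+ or N_- (flipping the answer for N_-) yields
% \#N_+(x) - \#N_-(x) + 2^g accepting paths out of 2^{g+1}; the dummy branch
% adds 2^{g+1} paths, of which exactly 2^g + 25^f 2^{h-1} reject. Hence
\#\{\text{accepting paths of } N\}
  = \#N_+(x) - \#N_-(x) + 2^{g+1} - 25^f\, 2^{h-1}
% out of 2^{g+2} paths in total, which exceeds half of them (i.e. 2^{g+1})
% exactly when \#N_+(x) - \#N_-(x) > 25^f\,2^{h-1}, i.e. when \Pr(M\ \text{accepts}\ x) > 1/2 .
```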
Finally, we remark that nothing prevents our proof of Lemma 2 from carrying over to any reasonable model of quantum computation that is nonoblivious to the quantum computation path. In this case, the sequence of gates leading to a node v in the computation tree does not only depend on µ(v), but this fact does not present a problem for our proof. However, the proof becomes more technical since, e.g., approximating each local unitary operation may lead to overall nonunitary evolution since the local operations are themselves applied in a superposition. These complications arise for the same reason as the unnatural conditions on the transition function in the models from [3] and [18]. We feel that working out the details of such a result would not be well-motivated since the currently envisioned realizations of quantum computers do not support such nonoblivious behavior. Conclusion Several questions remain open regarding time-space lower bounds on quantum models of computation. An obvious goal is to obtain a quantitative improvement to our lower bound. It would be nice to get a particular constant c > 1 such that MajMajSAT cannot be solved by quantum algorithms running in n c time and subpolynomial space. The lower bound of Allender et al. does yield this; however, the constant c is very close to 1, and determining it would require a complicated analysis involving constant-depth threshold circuitry for iterated multiplication [10]. Perhaps there is a way to remove the need for this circuitry in the quantum setting. A major goal is to prove quantum time-space lower bounds for problems that are simpler than MajMajSAT. Ideally we would like lower bounds for satisfiability itself, although lower bounds for its cousins in PH and ⊕P would also be very interesting. The difficulty in obtaining such lower bounds arises from the fact that we know of no simulations of quantum computations in these classes. The known time-space lower bounds for satisfiability and related problems follow the indirect diagonalization paradigm, which involves assuming the lower bound does not hold and then deriving a contradiction with a direct diagonalization result. For example, applying this paradigm to quantum algorithms solving Σ ℓ SAT would entail assuming that Σ ℓ SAT has an efficient quantum algorithm. Since Σ ℓ SAT is complete for the class Σ ℓ P under very efficient reductions, this hypothesis gives a general simulation of the latter class on quantum algorithms. To reach a contradiction with a direct diagonalization result, we seem to need a way to convert these quantum computations back into polynomial-time hierarchy computations. Strengthening Corollary 1 to MajSAT instead of MajMajSAT may currently be within reach. Recall that the result of [2] only needs the following two types of hypotheses to derive a contradiction: • MajMajSAT ∈ PTIME(n c ), and • MajSAT ∈ PTISP(n d , n 1−ǫ ). Under the hypothesis MajSAT ∈ BQTISP(n 1+o(1) , n 1−ǫ ), Lemma 2 yields the second inclusion but not the first. One can use the hypothesis to replace the second majority quantifier of a MajMajSAT formula with a quantum computation. However, we do not know how to use the hypothesis again to remove the first majority quantifier, because the hypothesis only applies to majority-quantified deterministic computations. Fortnow and Rogers [8] prove that PP BQP = PP, and their proof shows how to absorb the "quantumness" into the majority quantifier so that we can apply the hypothesis again. 
However, their proof critically uses time-expensive amplification and is not efficient enough to yield a lower bound for MajSAT via the result of [2]. It might be possible to exploit the space bound to obtain a more efficient inclusion. It might also be possible to exploit more special properties of the construction in [2] to circumvent the need for the amplification component. A Postponing Measurements In this appendix we describe a framework for analyzing quantum algorithms with intermediate measurements by implicitly postponing the measurements and tracking the unitary evolution of the resulting purification. We stress that we are doing so for reasons of analysis only; our actual simulations do not involve postponing measurements. This framework facilitates the proofs of Claim 1 and Lemma 5. We first describe the common framework and then use it for those two proofs. Consider a quantum Turing machine M running in time t and space s. We fix an arbitrary input x and assume for simplicity of notation that on input x, M uses exactly s qubits and always applies exactly t quantum gates, exactly m of which are measurements, regardless of the observed sequence of measurement outcomes. This can be achieved by padding the computation with gates that do not affect the accept/reject decision of M . We conceptually postpone the measurements in the computation by • introducing m ancilla qubits initialized to all 0's, • replacing the ith measurement on each classical computation path by an operation that entangles the ith ancilla qubit with the qubit being measured (by applying a CNOT to the ancilla with the measured qubit as the control), and • measuring the m ancilla qubits at the end. In the τ th step of the simulation, we apply a unitary operation U τ on a system of s + m qubits, where U τ acts independently on each of the subspaces corresponding to distinct sequences of measurement outcomes that can be observed before time step τ . More precisely, consider the set of µ ∈ {0, 1} ≤m such that given that µ is observed, the τ th gate is applied after µ is observed but not after the (|µ| + 1)st measurement gate is applied. Let U τ be the set of µ such that the τ th gate is unitary, and let M τ be the set of µ such that the τ th gate is a measurement. For ν ∈ {0, 1} m , let P ν denote the projection on the state space of the ancilla qubits to the one-dimensional subspace spanned by |ν . For µ ∈ U τ , let G τ,µ denote the unitary operator on the state space of s qubits induced by the τ th gate applied given that µ is observed. Then U τ acts as G τ,µ ⊗ I on the range of I ⊗ P µ0 m−|µ| . For each µ ∈ M τ , U τ applies an entangling operation E τ,µ that acts only on the range of I ⊗ (P µ0 m−|µ| + P µ10 m−1−|µ| ). The behavior of U τ on the remaining subspaces does not matter; we can set it arbitrarily to the identity operator. Thus, where R is a term that expresses the behavior on the remaining subspaces. It is well-known, and can be verified from first principles, that the probability of observing any sequence of measurement outcomes µ ∈ {0, 1} m when M is run equals the probability of observing µ after the evolution U = U t U t−1 · · · U 2 U 1 with all of the ancilla qubits initialized to 0. That is, Pr(µ observed) = (I ⊗ P µ )U |0 s+m 2 . We next prove Claim 1 and Lemma 5. These proofs both use the above framework but are otherwise independent of each other. 
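The equivalence asserted in the last display (the statistics of the intermediate measurements match those of the purified evolution that is only measured at the end) can be checked on a toy instance. The sketch below (illustrative; the rotation angles are arbitrary) compares the joint distribution of (intermediate outcome, final outcome) for a one-qubit computation with a mid-circuit measurement against the version in which a CNOT copies the measured value onto an ancilla that is read out only at the end.

```python
import itertools
import numpy as np

def rot(theta):
    """Real single-qubit rotation, used here as a generic unitary."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

G1, G2 = rot(0.4), rot(1.1)

# (a) Measure in the middle: P(intermediate outcome m1, final outcome m2).
direct = {}
state = G1 @ np.array([1.0, 0.0])
for m1 in (0, 1):
    p1 = state[m1] ** 2
    post = np.zeros(2); post[m1] = 1.0            # collapsed and renormalized
    final = G2 @ post
    for m2 in (0, 1):
        direct[(m1, m2)] = p1 * final[m2] ** 2

# (b) Defer it: CNOT the data qubit onto an ancilla, measure both at the end.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
psi = np.kron(G1 @ np.array([1.0, 0.0]), np.array([1.0, 0.0]))   # data (x) ancilla
psi = CNOT @ psi                                  # entangle ancilla with the data value
psi = np.kron(G2, np.eye(2)) @ psi                # keep computing on the data qubit
deferred = {}
for d, a in itertools.product((0, 1), repeat=2):
    deferred[(a, d)] = psi[2 * d + a] ** 2        # ancilla holds the "intermediate" outcome

for key in direct:
    assert np.isclose(direct[key], deferred[key])
print("intermediate and deferred measurement statistics agree")
```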
A.1 Proof of Claim 1 Recall that we have a tree expressing the computation of M on the fixed input x, and we wish to show that the probability of observing any complete sequence of measurement outcomes µ ∈ {0, 1} m satisfies Consider the above postponed measurement framework. The state of the system after τ steps is given by U τ · · · U 1 |0 s+m . We can also write this state as a sum of contributions from all nodes in the τ th level of the tree. More precisely, we claim that where V τ = µ V τ,µ and for each node v, Note that |ψ(v) is the basis state of v multiplied by its amplitude, with the ancilla qubits set to indicate the sequence of measurement outcomes that leads to v. We argue that the decomposition (1) holds by induction on τ = 0, . . . , t. The base case τ = 0 is trivial. For τ > 0, by induction it suffices to show that for each node v ∈ V τ −1 , U τ |ψ(v) = u∈c(v) |ψ(u) , where c(v) denotes the set of children of v. There are two cases. If µ(v) ∈ U τ then it can be verified directly from the construction of the tree that If µ(v) ∈ M τ then it can be directly verified that U τ |ψ(v) = E τ,µ(v) |ψ(v) = |ψ(u) , where u is the child of v. This completes the induction step. A.2 Proof of Lemma 5 Consider a language L ∈ BQTISP ′ (t, s) and an associated quantum Turing machine M ′ , and fix an arbitrary input x. We assume as above that on input x, M ′ uses exactly s qubits and always applies exactly t gates, exactly m of which are measurements. We transform M ′ into a machine M ′′ running in time t · polylog(t) and space s + polylog(t) accepting L with error probability bounded away from 1/2 by a constant. By standard amplification techniques, the error probability can be made at most 1/3, so L ∈ BQTISP ′′ (t · polylog(t), s + polylog(t)). Using Lemma 4, we have M ′′ run M ′ but replace each single-qubit unitary gate with a 1/20tapproximation consisting of at most polylog(t) gates from the set {F, F † , H}. The time and space overhead is polylog(t), so M ′′ runs in time t·polylog(t) and space s+polylog(t) and still operates on s qubits. We now show that the probability M ′′ accepts x differs from the probability M ′ accepts x by at most 1/10. This suffices to prove the lemma. Let U ′ = U ′ t · · · U ′ 1 be the evolution on the state space of s + m qubits obtained by implicitly postponing measurements in the computation of M ′ as described above, and let the notation U τ and G τ,µ be as above for this computation. Since the value of µ uniquely determines whether M ′ accepts, we have that Pr(M ′ accepts) = P U ′ |0 s+m 2 , where P denotes sum of I ⊗ P µ over all µ consistent with acceptance. Now let U ′′ = U ′′ t · · · U ′′ 1 be the same evolution as U ′ but where each unitary operation G τ,µ is replaced by the operation G τ,µ that uses the 1/20t-approximation of the τ th gate M ′ applies given that µ is observed, found by the Solovay-Kitaev algorithm. Since multiplying by global phase factors does not affect a computation, we can assume that the approximation used in G τ,µ is at distance at most 1/20t from the original gate of M ′ . Tensoring with the identity does not change the 2-norm of an operator, so we also have G τ,µ −G τ,µ ≤ 1/20t. Now since U ′′ is equivalent to the postponed measurement transformation applied to M ′′ , we have Pr(M ′′ accepts) = P U ′′ |0 s+m 2 . 
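The two acceptance probabilities can now be compared by a standard hybrid argument. Written out (a reconstruction following Box 4.1 of [14], not the authors' exact display), the chain of inequalities is:

```latex
\bigl|\Pr(M''\ \text{accepts}) - \Pr(M'\ \text{accepts})\bigr|
  = \Bigl|\, \bigl\|P\,U''|0^{s+m}\rangle\bigr\|^2 - \bigl\|P\,U'|0^{s+m}\rangle\bigr\|^2 \Bigr|
  \le 2\,\bigl\|(U'' - U')\,|0^{s+m}\rangle\bigr\|
  \le 2\sum_{\tau=1}^{t} \bigl\|U''_{\tau} - U'_{\tau}\bigr\|
  \le 2t \cdot \frac{1}{20t} = \frac{1}{10} .
```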
By standard applications of the triangle inequality (see Box 4.1 in [14]), we have that Thus in order to show that the acceptance probability of M ′ and M ′′ on input x differ by at most 1/10, it suffices to show that U ′′ τ − U ′ τ ≤ 1/20t for all τ . The latter holds since for any unit vector |ψ in the state space of s + m qubits, we have B Decomposing Quantum Gates In this appendix we prove Lemma 3, which follows from results proven in Chapter 4 of [14]. We include a proof for reasons of completeness. For the nontrivial inclusion, consider a language L ∈ BQTISP(t, s) and an associated quantum Turing machine M . We convert M into another machine M ′ by replacing each application of a library gate U with a sequence of gates that effects the same operation as U , where each either is CNOT or acts on only one qubit. Then M ′ accepts an input x with the same probability as M , so to show that L ∈ BQTISP ′ (t, s) we just need to check that the efficiency parameters are only affected by constant factors and that the matrix entries of the new gates are still efficiently computable in the required sense. The time parameter clearly only goes up by a constant factor that depends on the gates in the library of M . The transformation is done in two steps and uses results proven in Sections 4.3 and 4.5 of [14]. First, it is shown in [14] that every unitary gate can be decomposed as the product of unitary gates each of which acts nontrivially on only two computational basis states (two-level gates). Applying this transformation to the library gates associated with the family does not affect s. The matrix entries of these two-level gates are obtained via standard math operations from the matrix entries of the original gates and are thus efficiently computable. Second, it is shown in [14] that each two-level gate can be decomposed into a product of CNOTs and single-qubit gates. This transformation can be done in three steps. • First, a two-level gate can be decomposed into a product of operations each of which is a controlled single-qubit operation that conditions on many qubits. This is done by using controlled X gates to interchange adjacent computational basis states in a Gray code order so that the two basis states acted on by the two-level gate differ only in a single qubit. Then a controlled single-qubit gate is used to carry out the nontrivial 2 × 2 submatrix of the two-level gate, and then the basis states are mapped back to their original values (Figure 4.16 in [14]). This does not increase the number of qubits, and all entries in these gates are 0 or 1 or come from the two-level gate, and are hence efficiently computable. • Second, each large controlled operation can be reduced to X gates, Toffoli gates, and a controlled single-qubit gate that conditions on one qubit (Figure 4.10 in [14]). To accomplish this, X gates are first used on some of the control qubits to make the controlled operation condition on all qubits being 1. Then these control qubits are ANDed together into some ancilla qubits using a series of Toffoli gates, and the heart of the operation is carried out by a controlled single-qubit gate that conditions on the ancilla qubit holding the AND of the original control qubits. The ANDing operations are reversed so that the ancilla qubits are reset to 0 and can hence be reused and only increase the number of qubits by an additive constant. 
• Third, the Toffoli and controlled single-qubit gates can be implemented with special-purpose circuits using only CNOTs and single-qubit gates (Figures 4.9 and 4.6 in [14]). This does not increase the number of qubits, and the matrix entries in this implementation of a controlled-U gate are obtained from the matrix entries for U via standard math operations and are thus efficiently computable. This finishes the proof of Lemma 3.
2007-12-15T23:58:17.000Z
2007-12-01T00:00:00.000
{ "year": 2008, "sha1": "3699bca679f3d05af5f99051cec4f590d2f3e267", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "98c535b2ed4682a1a9b09ad93bfa4478a23c8400", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Physics" ] }
228100353
pes2o/s2orc
v3-fos-license
Manipulating neutrophil degranulation as a bacterial virulence strategy

Armed with an arsenal of antimicrobial mechanisms, neutrophils are among the first innate immune cells recruited to the site of bacterial infection. Neutrophils utilize both oxidative and non-oxidative strategies to kill invading microorganisms. Components for non-oxidative killing include antimicrobial proteases that are packaged within intracellular vesicles (“granules”). Neutrophil granules are preformed vesicles of a defined composition that are released in a regulated manner [1]. The process by which neutrophils mobilize granules is called degranulation. Degranulation can occur at the plasma membrane for extracellular release (killing extracellular microorganisms) or to the phagosome for intracellular delivery (killing intracellular microorganisms) [1]. Extracellular degranulation is a double-edged sword of neutrophil antimicrobial function: The antimicrobials contained within granules can kill bacteria, but excessive degranulation can damage host tissue [2]. Neutrophil granules can be broadly categorized into 4 main types: secretory, tertiary, secondary, and primary. The release of each granule type occurs sequentially, with secretory granules released readily throughout the neutrophil life span to replenish cell surface receptors and primary granules requiring the greatest stimulus for release [1]. These granule types are classified based on the specific proteins contained within the lumen or in the membrane of each vesicle [3,4]. Secretory granules contain plasma proteins and Fc and complement receptors. The contents of tertiary granules include matrix metalloproteases such as matrix metallopeptidase 9 (MMP9). Secondary granules contain proteins such as lysozyme, pre-cathelicidin, and lactoferrin. Primary granules contain the most pro-inflammatory and antimicrobial proteins, such as myeloperoxidase, defensins, elastase, and azurocidin [5]. Additionally, during a normal neutrophil degranulation response, most of primary and secondary granule release is directed to the phagosome as a mechanism for minimizing damage to host tissue [6]. The extracellular release of each granule type can be assessed experimentally by detecting amounts of granule proteins present in supernatants by western blotting or ELISA or by quantifying the display of specific membrane-bound proteins (such as CD66b for secondary granules and CD63 for primary granules) on the surface of the neutrophil by flow cytometry or immunofluorescence microscopy. Several bacterial pathogens are known to manipulate neutrophil degranulation as a virulence strategy (Fig 1). By disrupting, dysregulating, or inducing excessive neutrophil degranulation, bacteria can skew the protective effects of neutrophil degranulation in a way that ultimately benefits the pathogen and worsens disease. Understanding the mechanisms by which bacteria alter neutrophil degranulation can provide greater insight into bacterial pathogenesis as well as advance our understanding of neutrophil vesicle trafficking.

Mechanisms of inhibiting neutrophil degranulation

Inhibiting neutrophil degranulation can promote bacterial survival by preventing the targeting of antimicrobial proteins either intracellularly to the phagosome or extracellularly to the plasma membrane. Such inhibition would allow intracellular pathogens to utilize neutrophils as an infectious niche or allow extracellular pathogens unrestricted growth and/or dissemination.
Fig 1. Bacterial modulation of neutrophil degranulation. Several pathogens either induce or inhibit neutrophil degranulation to promote infection. Uptake of Mycobacteria by CR3 passively prevents fusion of granules with the phagosome. Chlamydia produces CPAF, which cleaves FPR2 to inhibit degranulation. Yersinia injects effectors via the type III secretion system to inhibit degranulation, and Neisseria that does not display Opa reduces fusion of neutrophil granules with the phagosome. On the other hand, Filifactor alocis induces TLR2 signaling, triggering degranulation. Staphylococcus aureus produces PSMs, of which PSMα4 stimulates degranulation through FPR2. Streptococcal species that produce M protein also induce neutrophil degranulation by complexing with fibrinogen and binding β1 integrins. Finally, Anaplasma phagocytophilum induces neutrophil degranulation, although the exact mechanism remains unknown. CPAF, chlamydial protease-like activity factor; CR3, complement receptor 3; FPR2, N-formyl peptide receptor 2; Opa, opacity-associated protein; PSM, phenol-soluble modulin; TLR2, Toll-like receptor 2. https://doi.org/10.1371/journal.ppat.1009054.g001

One strategy for inhibiting neutrophil degranulation is through targeting neutrophil cell surface receptors. Stimulation of integrins, G protein-coupled receptors, or L-selectin at the neutrophil surface can lead to intracellular signaling events that trigger increases in intracellular calcium levels, which induces granule exocytosis [1,5]. Chlamydia trachomatis produces the protease chlamydial protease-like activity factor (CPAF), which is released extracellularly as Chlamydia-infected epithelial cells lyse, and CPAF cleaves N-formyl peptide receptor 2 (FPR2) from the neutrophil cell surface [7]. FPR2 is a G protein-coupled receptor that signals through phosphoinositide 3-kinase (PI3K) to induce calcium flux and cytoskeletal rearrangements that trigger neutrophil degranulation. CPAF-mediated cleavage of FPR2 prevents neutrophil degranulation as measured by CD11b (contained in multiple granule types) and CD35 (secretory granules), as well as respiratory burst and neutrophil extracellular trap (NET) production [7]. The authors propose that this CPAF-mediated inhibition of neutrophil function via FPR2 targeting allows Chlamydia to escape neutrophil killing when the bacteria are released from infected epithelial cells. Mycobacteria also target cell surface receptors to inhibit neutrophil degranulation, albeit by a passive mechanism. Mycobacterium smegmatis engages neutrophils through complement receptor 3 (CR3), which induces phagocytosis but prevents the downstream fusion of primary granules with the phagosome [8]. This inhibition occurs whether M. smegmatis are live or heat killed, and opsonizing the bacteria to induce phagocytosis through the Fc gamma receptor (FcγR) instead of CR3 stimulates granule fusion [8]. Cougoule and colleagues speculated that neutrophil granule fusion downstream of CR3 is triggered following clustering of the receptor, whereas opsonization is sufficient to trigger degranulation via FcγR. Early during infection, unicellular M. smegmatis is able to bypass this CR3-mediated activation by engaging with a single receptor [8]. Neisseria gonorrhoeae also inhibits granule fusion with the phagosome through selective targeting of neutrophil surface receptors. N.
gonorrhoeae that display opacity-associated (Opa) proteins on their surface bind carcinoembryonic antigen-related cell adhesion molecules (CEACAMs), which triggers Src kinase signaling required for primary granule fusion with the phagosome [9]. Opa− N. gonorrhoeae-containing phagosomes have less primary granule fusion and greater intracellular survival [9]. Lastly, intracellular Streptococcus pyogenes inhibits granule fusion with the phagosome as well. Staali and colleagues determined that fewer primary granules fuse with phagosomes that contain S. pyogenes, and this inhibition is dependent on the production of M protein or M-like proteins [10]. However, secondary granule fusion with the phagosome is not inhibited by S. pyogenes M protein [10]. M protein is a bacterial surface protein, and therefore, it may also mediate selective engagement of neutrophil cell surface receptors without triggering downstream granule fusion with the phagosome, similar to Mycobacteria and N. gonorrhoeae. Bacterial pathogens can also inhibit neutrophil degranulation downstream of cell surface receptors by blocking cell signaling pathways. Yersinia spp. encode a type III secretion system that injects effectors (Yersinia outer proteins [Yops]) directly into the neutrophil cytoplasm. Two effectors, YopE and YopH, cooperate to inhibit secondary granule release from neutrophils in Yersinia pseudotuberculosis [11] and primary and secondary granule release in Yersinia pestis [12,13]. Through the use of various chemical inhibitors, Taheri and colleagues demonstrated that Y. pseudotuberculosis inhibits secondary granule release through YopE-/YopH-mediated effects on calcium flux, actin dynamics, and PI3K signaling [11]. Additionally, Y. pestis blocks primary granule release from neutrophils through YopE inhibition of Rac signaling and YopH inhibition of calcium flux [12]. Minor roles for the effectors YpkA and YopJ, particularly in the absence of YopE or YopH, have been proposed in inhibiting neutrophil degranulation, but the precise neutrophil signaling pathways targeted by YpkA and YopJ remain to be determined [13]. results from virulence factor or bacterial engagement of specific neutrophil surface receptors. Staphylococcus aureus secretes a variety of toxins, including phenol-soluble modulins (PSMs). Lin and colleagues determined that PSMα4 activates FPR2 to trigger degranulation [14], and this is the same receptor cleaved by C. trachomatis to inhibit degranulation, as discussed previously [7]. The degranulating neutrophils release heparin-binding protein, which induces vascular leakage in vivo that contributes to the severity of S. aureus infection [14]. S. pyogenes induces neutrophil degranulation and the release of heparin-binding protein through M protein, which complexes with fibrinogen and binds β1 integrins to trigger neutrophil degranulation [15]. As discussed previously, M protein has also been shown to prevent fusion of primary granules with the Streptococcus-containing phagosomes in neutrophils [10]. The opposing effects of M protein may depend on the location of S. pyogenes in reference to the neutrophil (intracellular versus extracellular bacteria). In addition to G protein-coupled receptors and integrins, engagement of Toll-like receptors (TLRs) can also trigger neutrophil degranulation. 
The oral pathogen Filifactor alocis induces extracellular degranulation through engagement of TLR2, which triggers p38 mitogen-activated protein kinase (MAPK) activation to release secondary, but not primary, granules [16]. Additional cell surface receptors are also engaged by various pathogens to stimulate degranulation during infection. Anaplasma phagocytophilum is an obligate intracellular pathogen that induces neutrophil extracellular degranulation of both primary and secondary granules [17]. It was recently discovered that A. phagocytophilum uses an adhesin, Asp1, to bind protein disulfide isomerase (PDI) on the neutrophil surface, promoting invasion [18]. As PDI is contained within neutrophil secondary and tertiary granules [19], it is possible that A. phagocytophilum induction of degranulation increases the presence of PDI on the neutrophil surface, enhancing invasion. Alternatively, Asp1 engagement with PDI may play a role in stimulating neutrophil degranulation itself, as PDI binds various integrins and facilitates neutrophil activation [20]. Other pathogens such as Helicobacter pylori and Peptoanaerobacter spp. have also been shown to induce significant neutrophil degranulation, although the bacterial and host factors mediating the enhanced neutrophil degranulation are unknown [21,22]. Outcomes of modulating neutrophil degranulation Typically, microorganisms that inhibit neutrophil degranulation have greater survival following interactions with neutrophils. N. gonorrhoeae delays fusion of primary granules with the phagosome, allowing for increased intracellular bacterial survival [9]. This delay may allow N. gonorrhoeae to adapt to the neutrophil phagosome environment and persist within these cells [23]. Extracellular pathogens also inhibit neutrophil degranulation to promote survival. Y. pestis mutants that lack the type III secretion system effectors YopE and YopH (which are required to inhibit neutrophil primary granule release) are killed to a greater extent than wildtype Y. pestis during infection with isolated human neutrophils in vitro [12,24]. While inhibiting neutrophil degranulation allows for bacterial outgrowth, pathogens that induce neutrophil degranulation are frequently associated with severe pathology and host tissue damage that does not affect bacterial growth. For example, neutrophil elastase (contained within primary granules) contributes to lethality by damaging the lungs following intranasal infection with Burkholderia thailandensis, a model organism for studying melioidosis. Elastase-deficient mice inoculated with B. thailandensis survived intranasal challenge and all wildtype mice succumbed to the infection, and it was shown that elastase contributes to lung damage and vascular leakage. However, bacterial burdens were similar between the 2 mouse strains, indicating that neutrophil degranulation of elastase is harmful rather than protective during B. thailandensis infection [25]. Similarly, intravenous injection of the streptococcal M protein into mice is sufficient to induce neutrophil granule-mediated lung damage and vascular leakage, contributing to the development of acute lung injury [26]. Neutrophil degranulation worsens Shigella infection by augmenting its pathogenicity. Antimicrobial proteins released by degranulating neutrophils enhance Shigella flexneri adherence to and invasion of HeLa cells, likely by altering the surface charge of the bacteria to promote interactions with the host cell surface [27]. 
Thus, the inhibition of neutrophil degranulation can enhance bacterial survival by preventing deployment of antimicrobials contained within granules, whereas enhanced degranulation releases proteases that exacerbate infection through bystander damage of host tissues. Effects of tissue environment on neutrophil degranulation Many studies that analyze neutrophil degranulation use freshly isolated neutrophils and a mono-infection tissue culture assay, and these experiments are critical to our understanding of neutrophil degranulation mechanisms. However, the local infection environment can impact neutrophil physiology and function. As such, it is also important to analyze neutrophil degranulation responses either in vivo or under conditions that more closely mimic the host environment. For instance, many infection sites are hypoxic. Hypoxia increases the magnitude of neutrophil degranulation responses for all granule types via signaling through PI3K [28]. Another complication of in vivo infection includes coinfecting organisms that alter immune responses. In a coculture system, neutrophils have greater degranulation responses when cultured with epithelial cells infected with respiratory syncytial virus (RSV) compared to mockinfected epithelial cells [29]. This enhanced degranulation could alter the capacity of neutrophils to respond to a secondary bacterial infection (a common complication of respiratory viral infection) due to changes in neutrophil function or viability following viral infection. Neutrophils also have a relatively short life span, making them susceptible to control by circadian rhythms. Neutrophils contain a cell-intrinsic program to reduce granule content during various times throughout the day, likely to minimize bystander tissue damage as neutrophils enter and exit various tissues [30]. Lastly, neutrophil responses to infection are typically studied as a whole population, such as measuring granule proteins in supernatants. However, recent imaging studies reveal disparate distributions of host proteins (such as calprotectin) among infectious abscesses even within the same organ [31]. This suggests that individual neutrophils can have varied degranulation responses throughout an infection, depending on a variety of host-and pathogen-dependent factors. Targeting neutrophil degranulation is a successful strategy for several pathogens, resulting in enhanced disease either through greater bacterial growth or greater damage to host tissues (Fig 1). The conditions of the infection can also affect the capacity of neutrophils to degranulate in response to invading microorganisms and alter disease outcome. Understanding the mechanisms by which bacteria alter neutrophil degranulation to promote severe infection may reveal novel therapeutic targets for skewing the deleterious effects of neutrophils back toward the benefit of the host rather than the pathogen.
2020-12-12T14:07:53.559Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "2bb22acdc059f6064924078274b1b441b7602fcf", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1009054&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6ae933f12ded4feedf15e32ab907f3bdb45b1543", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
10385530
pes2o/s2orc
v3-fos-license
Uniform Twistor-Like Formulation of Massive and Massless Superparticles with Tensorial Central Charges We construct the manifestly Lorentz-invariant twistorial formulation of N=1 D=4 superparticle with tensorial central charges which describes massive and massless cases in a uniform manner. The tensorial central charges are realized in terms of even spinor variables and central charge coordinates. The full analysis of the number of conserved supersymmetries has been carried out. In the massive case the superparticle preserves 1/4 or 1/2 of target-space supersymmetries whereas the massless superparticle preserves two or three supersymmetries. Introduction In a recent paper [1] we proposed a new relativistic formulation of massive superparticle with tensorial central charges [2]- [9]. The model contains a commuting Weyl spinor as a collection of coordinates of the configuration space and describes a superparticle whose presence breaks two or three of N = 1, D = 4 target-space supersymmetries. It is interesting that in the background of central charges the massive superparticle is equivalent to massive spinning particle [10], [11] if a quarter of target-space supersymmetry is preserved. In a certain sense the commuting spinor variables of the model play the role of index spinor variables [12]- [14]. This model does not contain any special coordinates for the tensorial central charges. Analogous model of massive superparticle preserving 1/4 of target-space supersymmetries has been formulated in [15] without explicit Lorentz covariance. It should be mentioned that D. V. Volkov and his collaborators have proposed one of the first twistor-like models for the massless superparticle [16] and established the equivalence between the spinning particle and the usual superparticle without central charges at least on the classical level. The idea of identifying the κ-symmetry of the superparticle with the local worldline supersymmetry of the spinning particle has been a basic one for the superfield formulation of massless superparticle theory [16] and its generalization to the superembedding description of superbranes [17]. In this paper we present a twistorial formulation of the superparticle with tensorial central charges in which massive and massless cases are described in uniform manner. The model uses both the central charge coordinates and the auxiliary bosonic spinor variables simultaneously. Due to the use of spinors the analysis is simplified by reducing the tensorial quantities to scalar ones. For zero mass our model reduces to the twistorial formulation of the massless superparticle with tensorial central charges [18] in which one or two of target-space supersymmetries are broken. In the massive case we have a bitwistorial formulation of the massive superparticle with tensorial central charges preserving 1/4 or 1/2 of targetspace supersymmetries. For description of the superparticle with tensorial central charges we take the action in twistorlike form Here the one-forms are invariant under global supersymmetry transformations acting in the extended superspace parametrized by the usual superspace coordinates x µ , θ α ,θα and by the tensorial central charge coordinates y αβ ,ȳαβ. The quantities P µ , Z αβ = Z βα ,Zαβ =Zβα, which play the role of the momenta for x µ , y αβ , yαβ, are taken as the sums of products of two bosonic spinors v α a ,vα a Zαβ =vα avβ bC ab , where C ab ,C ab = (C ab ) are symmetric constant matrices. 
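Because several of the displayed equations did not survive extraction here, the following reconstruction of the mass-shell relation may help. It assumes the standard two-component conventions P_{αβ̇} = v_α^a v̄_{β̇a} and det(P_µ σ^µ) = P_µ P^µ, which are one plausible reading of Eqs. (4)-(6) and (8) rather than a quote from the paper.

```latex
% Assumed conventions: P_{\alpha\dot\beta} = v_\alpha^{\,a}\,\bar v_{\dot\beta\,a},
% \det\bigl(P_\mu(\sigma^\mu)_{\alpha\dot\beta}\bigr) = P_\mu P^\mu,
% together with the kinematic constraint \det\bigl(v_\alpha^{\,a}\bigr) = m .
P_\mu P^\mu
  = \det\bigl(P_{\alpha\dot\beta}\bigr)
  = \det\bigl(v_\alpha^{\,a}\bigr)\,\det\bigl(\bar v_{\dot\beta\,a}\bigr)
  = m\,\bar m
  = |m|^2 .
```

This is consistent with the statement above that the constant |m| plays the role of the mass.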
These expressions are completely general with respect to the four-momentum P αβ but imply some constraints on the central charges Z αβ ,Zαβ. Here we do not give the explicit formulation of these constraints. Due to the kinematic constraints which are equivalent to and enter the action (1) with Lagrange multipliers we have det(v α a ) = m and Thus the constant |m| plays the role of the mass. It should be noted that the change of the sign of m is equivalent to antipodal transformations v 1 α ↔ v 2 α of bosonic spinors in "internal space" which leaves invariant the quadratic expressions (4)-(6) for the energy-momentum vector and central charges of the model. In the massless case (m = 0) the spinors v α 1 and v α 2 are proportional to each other v α 1 ∼ v α 2 as the consequense of the kinematic constraints (8). As a result one obtains a formulation of massless superparticle with one bosonic spinor from which both the massless four-momentum and the tensorial central charge are constructed. Such a model has been analyzed in [18]. The number of preserved SUSY is equal two or three in this model. In the proposed model (1) we use a minimal number of bosonic spinors, which is two, for constructing the energy-momentum vector with arbitrary mass [20]. Therefore we regard our formulation as the twistor-like one and concentrate on the massive case in the following. Coefficients in the expansion of the symmetric central charge matrix C ab in terms of the Pauli matrices (σ i ) a b form a complex dimensionless "internal" three-vector C = i(E + iH), real and imaginary parts of which we denote by analogy with electrodynamics. Thus and One can simplify the matrix C of the central charges using redefinitions of the bosonic spinors with unitary unimodular transformation acting on the indices a, b, ... and leaving intact the fourmomentum matrix and kinematic constraints. In fact with some loss of generality we could take the matrix C to be diagonal from the beginning. κ-symmetry transformations The variations of bosonic coordinates under the local κ-symmetry transformations [21], [22] has the same form in terms of the variations of odd spinor coordinates as SUSY variations but are opposite in sign Further, for the one-forms (2) in the action we have The corresponding variation of the Lagrangian is The most general variations of the Grassmann spinors under κ-symmetry are with two complex local Grassmann parameters κ a (τ ),κ a (τ ) = (κ a ). Taking into account the normalization conditions for the bosonic spinors (8) we arrive at The number of preserved supersymmetries is defined by the number of independent functions κ a ,κ a for which δL = 0. Hence the equations should have nontrivial solutions when there is κsymmetry. These equations can be written in the matrix form The matrix ∆ is Hermitian, ∆ = ∆ + , therefore it is unitary diagonalizable. The number of the independent κ-symmetries (solutions of eqs. (19)) coincides with the number of the zero eigenvalues of the matrix ∆. One can easily obtain that So the necessary condition for the presence of κsymmetries (one or more) consists in equality Some algebra gives with Λ ≡ 1 − λ. The characteristic equation reads where the coefficients are Let us now consider all possible eigenvalues of the matrix ∆. 3/4 unbroken SUSY The presence of three zero eigenvalues means that the characteristic equation (25) must be of the form This gives us the conditions k 2 = k 1 = k 0 = 0 on the coefficients of eq. (25). 
However as one can see from the explicit expressions for the coefficients in our model the inequality k 1 = k 2 is always fulfilled. Therefore the presence of three zero eigenvalues is not possible in the massive case of the model under consideration. So one can not get three first class fermionic constraints and 3/4 unbroken SUSY in this case. 1/2 unbroken SUSY For two zero eigenvalues or 1/2 unbroken SUSY the equation on the eigenvalues λ 2 (λ − λ 1 )(λ − λ 2 ) = 0 means that k 1 = k 0 = 0 in eq. (25). This gives us two conditions on parameters of the central charges C abC ab = 2 , C ab C abC cdC cd = 4 or equivalently in the 3-vector form Thus in this case the vectors E and H are parallel, and they are not equal to zero simultaneously. If two eigenvalues are zero then two nonzero eigenvalues are both equal to 2. Note that the above conditions, which define the case with 1/2 ubrokne SUSY, are equivalent to C abC ab = 2 , C abC bc C cdC da = 2 which are obtained by the Fierz transformation Due to the first condition C acC cb = δ a b + A a b , the matrix A is traceless, A b b = 0 and Hermitian, A + = A. But due to the second condition we obtain the equation A a b A b a = 0 which gives us A a b = 0. Thus in the case of two κ-symmetries (1/2 SUSY preserved) the coefficient matrix of the central charges is unitary The solutions of eqs. (19), provided that condition (26) is fulfilled, can be obtained after the diagonalization of the matrix ∆ To verify the unitarity of the matrix V and the equality (27) we have used the condition (26). Thus eq. (19) takes a simple form Obviously the solution of eq. (28) is However the condition of mutual conjugacyκ a = κ a of the upper and lower part of the column K should be taken into account. To this end let us represent the symmetric unitary matrix C as a square of a symmetric unitary matrix √ C, whose explicit form is not required. Then for an arbitrary real odd two-component quantity ρ the quantity ν = √ Cρ satisfies the required conjugation condition. Thus we have demonstrated that the parameter space of the κ-transformations is actually a two-dimensional real space. Eigenvectors corresponding to the eigenvalue 2 can be obtained in the similar way. But now K ′ = 0 ν ′ whereν ′ = √ Cρ ′ and this space is parameterized by two arbitrary real odd quantities collected in the two-component "vector" ρ ′ . 1/4 unbroken SUSY For a single zero eigenvalue or for 1/4 unbroken SUSY we have the single condition which in term of the vectors E and H has the form In this case the characteristic equation is and the three nonzero eigenvalues are λ = 2, λ = 1 ± C abC ab − 1. As it has been noted above the arbitrary symmetric matrix C can be reduced to the diagonal form by means of the "internal" SU (2)-transformation V . Here ρ 1 , ρ 2 and ϕ 1 , ϕ 2 are real. One can easily obtain that ρ 2 1,2 = E 2 + H 2 ± |E 2 × H 2 |. The case when ρ 1 = ρ 2 = 1 and the matrix C is unitary has been considered in the previous subsection. Now we have The eigenvalues of the matrix ∆ are 1 − ρ 1 and 1 − ρ 2 . The case of a single preserved SUSY is reached if only one of the moduli of the nonzero elements in the diagonal matrix C ′ is equal to 1, for definiteness let it to be ρ 1 , ρ 1 = 1. After the diagonalization of the matrix C the eq. (20) requires vanishing of all entries in K ′ except for Ime iϕ1/2 κ ′ 1 = ν which is arbitrary. This value plays a role of the parameter of the single unbroken SUSY. 
Further, for the κ-symmetry parameters (17) one has where U is a unitary unimodular matrix diagonalizing the matrix C ab . Thus we have shown that the model of the massive superparticle described by the twistor-like action (1) possesses one or two independent local κ-transformations which correspond to BPS configurations preserving 1/4 or 1/2 of the targetspace supersymmetry. The case with 3/4 unbroken supersymmetry is not realized in the massive case of the presented model. Constraints of the model Phase space of the model is parametrized by the coordinate variables and by corresponding canonically conjugate momenta p A = (p µ , z αβ ,zαβ; π α ,πα; ω α a ,ωα a ) . We take the standard definition of the Legendre transformation p A = ∂ r L/∂q A and of the graded Poisson brackets q A , p B = δ A B for all basic phase variables. The Lagrangian (1) is homogeneous with respect to all velocities, therefore the expressions for all momenta lead to the primary constraints where P αβ , Z αβ ,Zαβ have the expressions (4)- (6) in terms of the bosonic spinors. In adition, the whole system of constraints includes the kinematic constraints which are explicitly introduced into the action. The kinematic constraints are secondary ones if Lagrange multipliers are assigned to canonical variables. Any other constraints do not appear in the model. The analysis of the κ-symmetry is based on the consideration of the odd constraints. Their Poisson bracket algebra is The analysis of the constraints is simplified when they are projected on the spinors v a α ,vα a . For the fermionic constraints we get Due to the kinematic constraints (8) the canonically conjugate momenta for the Grassmann variables θ a ≡ θv a andθ a ≡ −v aθ = (θ a ) are π a = v a π/m,π a =πv a /m and θ a , π b = θb ,π a = δ b a . In terms of these variables the fermionic constrains acquire simple form Their Poisson brackets are The algebras of the Lorentz-spinor constraints D α ,Dα and of the Lorentz-scalar constraints D a , D a are identical. But in the second case the role of the central charges is played by the Lorentzscalar quantities C ab ,C ab instead of Z αβ ,Zαβ and by the static momentum p 0 = m, p = 0 instead of the usual four-momentum. The consideration in terms of quantities with indices a, b, ... is Lorentz covariant due to the use of the bosonic variables v a α which play the role of harmonic variables [23]- [25] parametrizing an appropriate homogeneous subspace of the Lorentz group. The matrix of the Poisson brackets (43) is in fact the matrix ∆. Its eigenvalues and odd eigenvectors have been found above. Thus, the separation of the first and second class Fermi constraints can be done straightforwardly. It is convenient to introduce the new constraints Xa is an even normalized eigenvector of the matrix ∆ with an eigenvalue λ i.e. ∆X (λ) = λX (λ) . The eigenvectors with different eigenvalues are orthogonal, the eigenvectors having the same eigenvalue can be chosen orthogonal. Here we do not need to distinguishing the special notation of different eigenvectors corresponding to the same eigenvalue. The algebra of new constraints takes a very simple form ∆ (λ) , ∆ (λ ′ ) = 2im 2 λδ λλ ′ . Thus the repetition of the analysis which was made in the previous section allows us to obtain the full system of orthonormal eigenvectors X (λ) of the matrix ∆ and to construct the first class constraints D (0) which correspond to the zero eigenvalues and generate the κ-symmetry transformations. 
Conclusion

In this paper we have constructed a twistor-like model of the superparticle with tensorial central charges. The proposed model describes the massive and massless cases in a uniform manner. For the description of the degrees of freedom associated with the tensorial central charges we have used central charge coordinates as well as additional bosonic spinors; the latter variables also provide the twistor-like representation of the momentum. In the massless case one obtains the twistor-like formulation of the superparticle with tensorial central charges preserving 1/2 or 3/4 of the target-space supersymmetry. In the massive case our model has one or two κ-symmetries and preserves 1/4 or 1/2 of the target-space supersymmetry. The additional bosonic spinors play the role of Lorentz harmonic variables, which allowed us to eliminate auxiliary and gauge degrees of freedom without violating Lorentz invariance.
2014-10-01T00:00:00.000Z
2001-04-20T00:00:00.000
{ "year": 2001, "sha1": "9eeb5c94f6e235a386aaa2b29b9fb3c4f3d126e4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0104178", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "03a5b78ccdf3f73eb7b414c630d2ca7c54683f12", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
52302512
pes2o/s2orc
v3-fos-license
Extreme Scale De Novo Metagenome Assembly

Abstract-Metagenome assembly is the process of transforming a set of short, overlapping, and potentially erroneous DNA segments from environmental samples into an accurate representation of the underlying microbiomes' genomes. State-of-the-art tools require big shared-memory machines and cannot handle contemporary metagenome datasets that exceed terabytes in size. In this paper, we introduce the MetaHipMer pipeline, a high-quality and high-performance metagenome assembler that employs an iterative de Bruijn graph approach. MetaHipMer leverages a specialized scaffolding algorithm that produces long scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is end-to-end parallelized using the Unified Parallel C language and therefore can run seamlessly on shared- and distributed-memory systems. Experimental results show that MetaHipMer matches or outperforms state-of-the-art tools in terms of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and is able to assemble previously intractable grand-challenge metagenomes. We demonstrate the unprecedented capability of MetaHipMer by computing the first full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion reads with a total size of 2.6 TBytes.

I. INTRODUCTION

Metagenomics is currently the leading technology for studying uncultured microbial diversity and delineating the structure and function of the microbiome, which is the collection of microorganisms in a particular environment, e.g. the body. Improvements in sequencing technology (in terms of cost reduction) have significantly outpaced Moore's Law [1], enabling the collection of large numbers of human and environmental samples that comprise hundreds or even thousands of microbial genomes. Assembly of metagenome samples into long contiguous sequences is critical for the identification of long biosynthetic clusters and for gene finding in general [2], and is also key for enabling the discovery of novel lineages of life and viruses [3].
However, for most microbial samples, there is no existing reference genome, so a first step in analysis is de novo assembly: transforming a set of short, overlapping, and potentially erroneous DNA segments (called reads) from these samples into the accurate representation of the underlying microbiomes's genomes. The exact de novo assembly of a genome is an NP-hard problem in general [4]. The metagenome assembly is further complicated by identical sequences across different genomes, polymorphisms within species and the variable abundance of species within the sample. The bioinformatics community has therefore developed special algorithms [5]- [12] to overcome these challenges. Nevertheless, the vast majority of these tools are not parallelized for distributed-memory systems. As a result, they require specialized, large shared-memory machines with hundreds of GB of memory in order to deal even with modestly-sized metagenomic datasets. At the same time, the concurrency of these tools is limited by the core count of a single node (typically 10s of cores) and consequently the execution times even for small datasets are in the order of days. The only exception among the metagenome assemblers is Ray Meta, which is a tool designed for distributed-memory systems. However, Ray Meta is not scalable to massive concurrencies [13] and its assembly quality has been shown to be worse compared to other state-of-the-art tools [10]. Currently, existing tools for high-quality metagenome assembly are incapable of processing large, realistic datasets due to the large memory and computational requirements. In this work we introduce MetaHipMer, the first massively scalable, high quality metagenome assembly pipeline. MetaHipMer implements an iterative de Bruijn graph approach similar to IDBA-UD [6] and Megahit [9], [12] to generate long, contiguous and accurate sequences called contigs. MetaHipMer also performs specialized scaffolding to stitch together multiple contigs and further increase contiguity. The result of our work is the first distributed-memory metagenome assembler that achieves comparable quality to the state-of-theart tools, but scales efficiently to tens of thousand of cores and decreases the execution times by orders of magnitude compared to single-node tools. While our primary novelty is in enabling the high-quality assembly of larger datasets that current tools struggle to deal with, MetaHipMer can also be seamlessly executed on shared memory machines. Overall this study makes several contributions including: • An iterative contig generation algorithm that relies on efficient, distributed hash tables, and combines best practices from state-of-the-art tools with new ideas tailored for metagenome datasets. The new algorithm obviates the need for an expensive, explicit input error-correction step that other tools rely on. This iterative approach allows MetaHip-Mer to directly handle large metagenome samples without an expensive error correction step, which could eliminate some data that would be valuable in the assembly. • A new parallel graph algorithm (also using distributed hash tables) that operates on partially assembled data to resolve ambiguities and errors and further extend the assembled regions in a process called scaffolding. 
This new algorithm also optimizes the accurate assembly of highly conserved ribosomal regions.

Algorithm 1 Iterative contig generation
    Input: A set of paired reads R
    Output: A set of contigs C
    C ← ∅
    prev k-mer set ← ∅
    for k = k_min to k_max with step s do
        k-mer set ← K-MERANALYSIS(k, R)
        new k-mers ← MERGE(k-mer set, prev k-mer set)
        C_k ← DEBRUIJNGRAPHTRAVERSAL(new k-mers)
        Alignments_k ← ALIGNREADSTOCONTIGS(R, C_k)
        C ← LOCALASSEMBLY(R, C_k, Alignments_k)
        prev k-mer set ← EXTRACTKMERS(C, k + s)
    return C

II. ITERATIVE CONTIG GENERATION

Before diving into MetaHipMer's algorithm, we introduce some terminology that is used throughout the paper. Reads are typically short fragments of DNA sequence that are produced by DNA sequencers; current sequencing technology can only read the genome in fragments. These reads contain errors and may also come in pairs (e.g. see Figure 1, where pairs of reads, the light blue pieces, are connected with dashed lines). Reads are strings of four possible nucleotides/bases: A, C, G and T. A read library is a source of DNA template fragments that the reads are generated from and is typically characterized by an insert size, which is the distance between the two ends of the paired reads. Every genomic region/sample is covered by multiple, overlapping reads, which is necessary to identify and exclude errors from the reads. K-mers are short overlapping substrings of length k that are typically extracted from reads. A de Bruijn graph is an efficient way to represent a sequence in terms of its k-mer components. In this type of graph, vertices are k-mers and two k-mers that overlap in k − 1 consecutive bases are connected with an edge. Contigs are contiguous sequences of k-mers (i.e. k-mers that are error-free with high confidence) and represent underlying genomic regions. Contigs are typically longer than the input reads. Finally, scaffolds are long genomic sequences that consist of oriented contigs which are stitched together.

The genomes comprising a metagenome dataset generally have variable read coverage, since some species may exist in the environmental sample with much higher abundance than others. Choosing an optimal value of k for the de Bruijn graph is therefore challenging because there is a trade-off in k-mer size that affects high- and low-frequency species differently. Typically, a small k is appropriate for low-coverage genomes since it allows a sufficient number of overlapping k-mers to be found and as a result the underlying sequences can be assembled into contigs. On the other hand, a large k is better suited for high-coverage genomes since a sufficient number of overlapping, long k-mers can be found and repetitive regions are disambiguated by such long k-mers.

Iterative contig generation (Algorithm 1 and Figure 1) aims to eliminate the quality trade-off that different k-mer sizes induce in de Bruijn graph-based assemblers [6], [9], [10]. The algorithm starts by constructing the de Bruijn graph with a small k and extracts a set of contigs by traversing the graph. After performing a series of transformations on the set of contigs, k is increased by a step size s and MetaHipMer builds the corresponding de Bruijn graph from the input reads with (k + s)-mers, while the graph is enhanced with (k + s)-mers extracted from the previous contig set. This iterative process is repeated until k reaches a user-specified maximum value.
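To make this control flow concrete, the sketch below is a deliberately simplified, serial Python toy of the iterative scheme. It is not MetaHipMer's UPC implementation: it ignores reverse complements, read quality, high-quality extension thresholds, bubble merging, pruning, alignment and local assembly, and every name (count_kmers, build_contigs, and so on) is a placeholder chosen for this illustration.

    from collections import Counter

    def count_kmers(sequences, k, min_count=2):
        # Count k-mers and keep those seen at least min_count times,
        # a crude stand-in for the error filtering done by k-mer analysis.
        counts = Counter()
        for s in sequences:
            for i in range(len(s) - k + 1):
                counts[s[i:i + k]] += 1
        return {km for km, c in counts.items() if c >= min_count}

    def unique_extension(kmers, node):
        # Return the single k-mer that follows `node`, or None on a fork/dead end.
        nexts = [node[1:] + b for b in "ACGT" if node[1:] + b in kmers]
        return nexts[0] if len(nexts) == 1 else None

    def build_contigs(kmers):
        # Walk maximal unambiguous paths in the de Bruijn graph.
        # Forward-only walk for brevity; the real traversal also extends left.
        contigs, used = [], set()
        for start in kmers:
            if start in used:
                continue
            used.add(start)
            path, node = [start], start
            while True:
                nxt = unique_extension(kmers, node)
                if nxt is None or nxt in used:
                    break
                used.add(nxt)
                path.append(nxt)
                node = nxt
            contigs.append(path[0] + "".join(p[-1] for p in path[1:]))
        return contigs

    def iterative_assembly(reads, k_min, k_max, step):
        # Toy version of Algorithm 1: contigs from iteration i seed the (k + s)-mers
        # of iteration i + 1, so they survive into the graph built with a larger k.
        contigs, prev_kmers = [], set()
        for k in range(k_min, k_max + 1, step):
            kmers = count_kmers(reads, k) | prev_kmers
            contigs = build_contigs(kmers)
            prev_kmers = count_kmers(contigs, k + step, min_count=1)
        return contigs

The essential point carried over from Algorithm 1 is the last step of the loop: contigs assembled at the current k contribute (k + s)-mers to the next iteration, so low-coverage genomes that can only be assembled with a small k are not lost once k grows.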
The quality of the assembly is improved by additional transformations that refine the de Bruijn graphs (as shown in steps 3 to 6 in Figure 1). More specifically, "bubble structures" are merged and "hair" tips (short, dead-end dangling contigs) are removed since they are potentially created from erroneous vertices. Then, the graph is iteratively pruned in order to eliminate branches that do not agree with the coverage of the neighboring vertices; such branches are likely to be created by erroneous edges. Finally, a local assembly algorithm extends the contigs remaining in the de Bruijn graph, using localized reads aligned to each contig, enabling the retrieval of k-mers which otherwise would be excluded from the de Bruijn graph because of global conflicts. Before describing the stages of the MetaHipMer pipeline, we provide an overview of distributed hash tables in UPC; this data structure is the backbone of all of our parallel algorithms. A. High Performance Distributed Hash Tables in MetaHipMer Our hash tables utilize a chaining rule to resolve collisions in the buckets. The hash table entries are stored in the shared address space of UPC and thus they can be accessed by any processor with simple assignment statements. This feature of UPC facilitates the design of highly irregular, distributed memory algorithms via a shared-memory programming paradigm. Note that the hash tables involved in our algorithms can be gigantic (hundreds of Gbytes up to tens of Tbytes) and cannot fit in a typical shared-memory node. Therefore it is crucial to distribute the hash table buckets over multiple nodes and in this quest the global address space of UPC is convenient. Here we identify a handful of use cases for the distributed hash tables that allow specific optimizations in their implementation. These use-cases will be used as points of reference in the sections that detail our parallel algorithms. • Use case 1 -Global Update-Only phase The operations performed in the distributed hash table are only global updates with commutative properties (e.g. inserts only). The global hash table will have the same state (although possibly different underlying representation due to chaining) regardless of insert order. The global update-only phase can be optimized by dynamically aggregating fine-grained updates (e.g. inserts) into batch updates. In this way we can reduce the number of messages and synchronization events. We can also overlap computation/communication or pipeline communication events to further hide the communication overhead. An example of this use-case is storing the k-mers in a distributed hash table in preparation for the de Bruijn graph traversal. • Use case 2 -Global Reads & Writes phase The operations performed during this phase are global reads and writes over the already inserted entries. Typically we can't batch reads and/or writes since there might be race conditions that affect the control flow of the governing parallel algorithm. However, we can use global atomics (e.g. compare-and-swap) instead of fine-grained locking in order to ensure atomicity. The global atomics may employ hardware support depending on the platform and the corresponding UPC implementation. We can also build synchronization protocols at a higher level that do not involve the hash table directly but instead are triggered by the results of the atomic operations. Finally, we can implement the delete operation of entries with UPC atomics and avoid locking schemes. 
An example of this usecase is accessing the k-mers during de Bruijn graph traversal. • Use case 3 -Global Read-Only phase In such a use case, the entries of the distributed hash table are read-only and a degree of data reuse is expected. The optimization that can be readily employed is to design software caching schemes to take advantage of data reuse and minimize communication. These caching schemes can be viewed as "on demand" copying of remote parts of the hash table. Note that the read-only phase guarantees that we do not need to provision for consistency across the software caches. Such caching optimizations can be used in conjunction with localityaware partitioning to increase the effectiveness of the expected data reuse. Initially even if the data is remote, it is likely to be reused later locally. An example is the use of software caches for seed lookup during alignment. • Use case 4 -Local Reads & Writes phase In this use case, the entries in the hash table will be further read/written only by the processor owning them. The optimization strategy we employ in such a setting is to use a deterministic hashing from the sender side and local hash tables on the receiver side. The local hash tables ensure that we avoid runtime overheads and also high-performance, serial hash table implementations can be seamlessly incorporated into parallel algorithms. For example, consider items that are initially scattered throughout the processors and we want to send occurrences of the same item to a particular processor for further processing (e.g. consider a "word-count" type of task). Each processor can insert the received items into a local hash table and further read/write the local entries from there. An example of this use case is the distributed histogram that gets constructed during k-mer analysis. We emphasize that this is not an exhaustive list of use cases for distributed hash tables. Nevertheless, it captures the majority of the computational patterns we identified in our parallel algorithms that will be detailed in the following sections. In the following subsections we describe the various stages of iterative contig generation. B. K-mer Analysis using Distributed Histograms The first step of the contig generation is parallel k-mer analysis, which splits the input reads into k-mers that overlap by k − 1 consecutive bases, keeping a count for each k-mer occurring more than times ( ≈ 2, 3) in order to implicitly exclude sequencing errors. K-mer analysis additionally requires keeping track of all possible extensions of each kmer from either side (bases before/after a k-mer in a read). If a nucleotide on an end appears more times than a threshold t hq , it is characterized as a high quality extension. In MetaHipMer, we integrate the parallel implementation of k-mer analysis described in HipMer [13], [14], which uses distributed histograms (Global Update-Only phase and Local Reads & Writes phase), all-to-all exchanges of k-mers, and distributed Bloom filters (to avoid the memory footprint explosion that is induced by erroneous k-mers). Of particular importance to metagenome assemly, the HipMer implementation uses a specialized streaming algorithm to identify and count "heavy hitters", which are k-mers that occur millions of times and can potentially cause load imbalance issues if not treated with a specialized algorithm. Such "heavy hitters" are likely common in metagenomic datasets where highly abundant organisms yield multiple copies of the same k-mers. C. 
De Bruijn Graph Traversal via a Distributed Hash Table The de Bruijn graph of the k-mers stemming from the kmer analysis is traversed in order to form contigs, which are paths in the de Bruijn graph formed by k-mers with unique high quality extensions. These paths represent "confidently" assembled sequences and can be seen in Figure 1(a) by removing the branches (vertices with dashed incident edges) and considering the connected components in the resulting graph. Note that the vertices with dashed incident edges (i.e. "fork" vertices) do not have unique high quality extensions and can be used later to discover the connectivity among the contigs. In MetaHipMer, the de Bruijn graph traversal is implemented using a distributed hash table, similar to the approach introduced in HipMer (Global Update-Only phase and Global Reads & Writes phase). Due to the nature of DNA, the de Bruijn graph is extremely sparse. For example, the human genome's adjacency matrix that represents the de Bruijn graph is a 3 · 10 9 × 3 · 10 9 matrix with between two and eight non-zeros per row for each of the possible extensions. Using a direct index for the k-mers is not practical for realistic values of k, since there are 4 k different k-mers. A compact representation can be leveraged via a hash table: A vertex (k-mer) is a key in the hash table and the incident vertices are stored implicitly as a two-letter code [ACGT][ACGT] that indicates the unique bases that immediately precede and follow the k-mer in the read dataset. By combining the key and the two-letter code, the neighboring vertices in the graph can be identified. Also, the underlying graph of k-mers is characterized by high-diameter where parallel Breadth First Search (BFS) approaches do not scale well and HipMer's specialized traversal overcomes this challenge [15]. However, the HipMer algorithm was designed for single genomes and assumes uniform depth coverage. This is usually not the case with metagenomes, where the coverage of some genomes may be thousands of times higher than others. Thus, the graph traversal implemented in MetaHipMer differs in this aspect compared to the HipMer implementation. In HipMer, a k-mer with depth greater than is extended in the graph traversal only if there are no more than t hq alternative extensions to the most common extension of that k-mer. The value of t hq is global, and is used for all k-mers. This is potentially a problem in metagenomes because k-mers from genomes with high coverage COV high (e.g. abundance in the dataset of 1,000 or more) will typically have more than COV high × e alternates if the sequencing error rate is e. Therefore, setting a t hq below COV high × e can in theory start to fragment high coverage genomes that are prevalent in the dataset, even though such genomes should be the easiest to assemble. On the other hand, setting a t hq above COV high × e will fragment the genomic regions with low coverage. The solution we introduce in MetaHipMer is to replace the global threshold, t hq , with one that depends on the depth d k-mer of the k-mer that is being extended. In MetaHipMer, a k-mer with count d k-mer is extended in the de Bruijn graph traversal if there are no more than t hq = max(t base , e × d k-mer ) extensions that contradict the most common extension. Here t base is a hard limit in the t hq value and e is a single parameter model for the sequencing error. D. 
Parallel Bubble Merging with Speculative Graph Traversal

A single-nucleotide polymorphism (SNP) represents a difference in a single base between two genomic sequences. SNPs create similar contigs (paths in the de Bruijn graph with the same length) that differ in only one position; these contigs also have the same k-mers as extensions of their endpoints and as a result form bubble structures in the de Bruijn graph [16]-[18]. In this step we identify these bubbles and merge them into a single contig. Additionally, dead-end dangling contigs shorter than 2k nucleotides are considered hair and are likely to be false-positive structures in the graph, hence we remove them [16], [18], [19]. See Figure 1(a) for examples of a bubble and a hair contig in the first graph. MetaHipMer also supports optional merging of long bubble-paths (longer than 2k), similar to the Megahit [9] assembler. This option trades off contiguity for preserving species/strain variations.

The first step in bubble merging is to build a bubble-contig graph, which MetaHipMer does in parallel by employing a distributed hash table (Global Update-Only phase and Global Reads & Writes phase). This graph is orders of magnitude smaller than the original k-mer de Bruijn graph because the connected components (contigs) of the original graph have been contracted to super-vertices. Once the bubble-contig graph is built, it is traversed to merge eligible contigs (e.g. by picking one of the contigs from the bubble structures). This parallel traversal uses a speculative algorithm. The processors pick random seeds (contigs) from the bubble-contig graph and initiate independent traversals. Once an independent traversal is terminated, we store the resulting path. However, if multiple processors work on the same path, they abort their traversals and allow a single processor to complete them. More specifically, each vertex (contig) has a "used" binary flag that indicates whether this vertex has been traversed, and the processors atomically set this flag for the vertices they are visiting (Global Reads & Writes phase of the hash table). If a processor attempts to traverse a "used" vertex/contig, it infers that another processor is working on the same path and aborts the current traversal. Eventually, processor 0 picks up the aborted traversals and completes them.

E. Iterative Graph Pruning

The remaining graph after bubble merging and hair removal is iteratively pruned in order to eliminate branches that do not agree with the coverage of the neighboring vertices. Such branches are likely to be created by false-positive edges in the contig graph due to sequencing errors. Algorithm 2 implements an iterative pruning strategy similar to the pruning module in IDBA-UD [6].

Algorithm 2 Iterative graph pruning
1: Input: A contig set C, length k and thresholds α, β, τ
2: Output: A pruned contig set C_pruned
3: τ ← 1
4: while τ < maximum contig depth of C do
5:     for each contig c ∈ C do
6:         if length(c) ≤ 2·k and
7:            depth(c) ≤ min(τ, β · neighbors-depth(c)) then
8:             Remove c from C
9:     τ ← α · τ

The parallel version of Algorithm 2 starts by reading in parallel the contig set C along with their depths and also stores the k-mers from the k-mer analysis step in a distributed hash table (Global Update-Only phase). In particular, we are interested in the "fork" k-mers, since they contain information regarding the connectivity of the contigs; in graph (a) of Figure 1 the vertices with dashed incident edges represent "fork" k-mers.
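Before continuing with the parallel version, the serial logic of Algorithm 2 can be rendered directly in Python. The neighborhood-depth estimate and the default values of alpha and beta below are illustrative placeholders rather than MetaHipMer's actual settings.

    def prune_contigs(contigs, k, alpha=2.0, beta=0.5):
        # `contigs` maps a contig id to {'length': ..., 'depth': ..., 'neighbors': [...]}.
        # Short contigs whose depth disagrees with their neighborhood are removed,
        # and the depth cutoff tau grows geometrically between rounds (Algorithm 2).
        if not contigs:
            return contigs
        max_depth = max(c["depth"] for c in contigs.values())
        tau = 1.0
        while tau < max_depth:
            removed = []
            for cid, c in contigs.items():
                nbr = [contigs[n]["depth"] for n in c["neighbors"] if n in contigs]
                nbr_depth = max(nbr) if nbr else 0.0  # placeholder neighborhood estimate
                if c["length"] <= 2 * k and c["depth"] <= min(tau, beta * nbr_depth):
                    removed.append(cid)
            for cid in removed:
                del contigs[cid]
            if not removed:       # nothing pruned this round: the convergence test
                break             # used by the parallel version described next
            tau *= alpha          # geometric increase of the depth cutoff
        return contigs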
Each one of the P processors is then assigned 1/P contigs; the processor extracts the last k-mers in the two endpoints of each contig c, looks them up in the distributed hash table and gets the contig-neighborhood information for c. The parallel execution then proceeds in the main loop (line 5) of Algorithm 2. Each processor visits the contigs assigned to it, and if a contig is both short and has relatively small depth compared to its neighborhood (lines 6 and 7), it is removed from the contig graph. At the end of the iteration, each processor updates the neighborhoods of its contigs since some may have been removed. The depth-cutoff threshold τ is then increased geometrically and the algorithm proceeds to the next iteration. The parallel algorithm terminates if no contigs are pruned by any processor during an iteration. In order to detect if any contigs have been pruned from the graph: (1) every processor sets a local binary variable pruned_flag to 1 if any of its contigs have been pruned, otherwise the binary variable is set to 0, and (2) we perform an all-reduce operation on the pruned_flag variables with the max function as argument. If the max-reduction result is 0, no changes have been made in the contig-graph and the parallel algorithm terminates (i.e. it has reached a converged state). F. Alignment of Reads to Contigs In this step of the pipeline the goal is to map the original reads onto the pruned contigs. This mapping provides information about the read pairs that are aligned towards the ends of the contigs (e.g. Figure 1(d)). We determine this mapping using merAligner [20], a distributed memory, scalable, endto-end parallel sequence aligner that implements a seed-andextend algorithm. G. Local Assembly with Dynamic Work Stealing In this step we try to extend the remaining contigs using a local assembly methodology that leverages the alignments of reads to contigs. Because the assembly is localized, erroneous k-mers stemming from high-coverage regions are isolated from similar k-mers in low-depth areas, so we can retrieve kmers which otherwise would be excluded from the de Bruijn graph. Figure 1(e) shows that after local assembly, the contigs have been extended with the orange vertices via "mer-walks". For each contig, we first accumulate all reads that can be used to extend that contig. Each thread reads a portion of the reads file, and stores the reads into a global hash table. Then each thread processes a local subset of contigs, and extracts the reads relevant to each contig to local storage. The reads selected are those that can be aligned onto a contig and whose paired reads do not align onto the same contig. In addition, for paired reads we can use the library insert size to project unaligned reads to either side of a contig (e.g. see Figure 1(e)). Second, the reads are used to extend the contigs through mer-walking, which is a modified, localized version of the contig generation extension algorithm described earlier in Section II-C. The first modification is that extension bases are accepted or rejected based on the number and quality category of the extending bases, which allows for uncontested extensions of lower quality than used in the original k-mer analysis. The second modification is that the mer-size used is dynamically adjusted in an iterative loop, being upshifted (increased by L) when a fork is encountered, or downshifted (decreased by L) when no extensions are encountered (a deadend). 
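The mer-size adjustment policy can be sketched as a small Python routine. The extension test below simply collects the bases that follow the seed in the localized reads, ignoring the quality categories, and the shift size and bounds are simplified placeholders rather than MetaHipMer's parameters.

    def extend_contig_right(contig, reads, k, k_min, k_max, shift=2):
        # Walk to the right using k-mers drawn from reads localized to this contig.
        # k is raised by `shift` at a fork and lowered by `shift` at a dead end;
        # the walk stops on a fork right after a downshift, on a dead end right
        # after an upshift, or when k leaves the allowed range.
        last_move = None
        while k_min <= k <= k_max and len(contig) >= k:
            seed = contig[-k:]
            nexts = {r[i + k] for r in reads
                     for i in range(len(r) - k)
                     if r[i:i + k] == seed}
            if len(nexts) == 1:        # unambiguous extension: accept the base
                contig += nexts.pop()
                last_move = None
            elif not nexts:            # dead end
                if last_move == "up":
                    break
                k -= shift
                last_move = "down"
            else:                      # fork
                if last_move == "down":
                    break
                k += shift
                last_move = "up"
        return contig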
The walk terminates when it encounters a fork after downshifting, or a deadend after upshifting. Once the involved reads are localized, the actual merwalking of the gaps does not require communication and is embarrassingly parallel (Local Reads & Writes phase). However, if the contigs are statically assigned to processors, severe load imbalance can occur because the computational cost of walking the contigs exhibits a high degree of unpredictable variability. To ameliorate this problem, we implement a simple dynamic work-stealing strategy. Each processor performs a block of independent walks, and upon completion, uses a global atomic variable to select another block without overlap with other processors. Although we use a single global atomic, in practice, we achieve good load balance with large block sizes, resulting in few steals and little contention. H. Merging k-mer Sets via a Distributed Hash Table The MetaHipMer pipeline utilizes the final contigs C of iteration i to enrich the k-mer set that will be generated by kmer analysis in the following iteration i+1. Since contigs in C were assembled with a smaller k value than the one that will be used in the i+1 iteration, k-mers stemming from low coverage organisms are likely represented in C, while such organisms may not be represented via confident, "error-free" (k + s)mers in the result of the following k-mer analysis. Therefore we extract from contigs in C all the (k + s)-mers (henceforth called prev k-mer set) and treat them as "error-free" (k+s)mers with unique high quality extensions in the i + 1 iteration (see arrow 7 in Figure 1 where the orange (k + s)-mers are extracted from the final contigs of the previous iteration). The prev k-mer set has to be merged with the (k+s)-mers stemming from the original reads and are generated by the kmer analysis of the i + 1 iteration. First, all processors store in parallel the new k-mers resulting from the k-mer analysis step in a distributed hash table (Global Update-Only phase). Then, they extract in parallel (k + s)-mers from contigs in C and store them in the same distributed hash table. The resulting distributed hash table represents the merged k-mer set, where duplicated k-mers (existing in both prev k-mer set and the new k-mer set from k-mer analysis) are collapsed in a single occurrence. All the k-mer stores in the distributed hash table are done via aggregated, asynchronous one-sided messages. I. Minimizing Communication via Read Localization It has been shown [13], [20], [21] that the reads to contigs alignment step is dominated by fine-grained, irregular lookups of seeds (substrings of reads) in a distributed seed index (hash table that indexes the contigs). The authors of merAligner have therefore implemented a software cache to exploit potential seed index reuse and avoid off-node communication in a distributed memory environment. However they also point out that the input read files do not exhibit any inherent locality, hence at large scale the expected data reuse (and consequently the software cache benefit) is provably limited. However, MetaHipMer uses iterative contig generation, which presents an opportunity to infer read locality in the first iteration to improve performance in subsequent iterations. Reads that align onto the same contig region should be similar, and hence most of the substrings of every read (the seeds) should be identical to substrings of other reads aligned to the same contig region. 
If these reads are assigned for alignment to the same processor, merAligner's software cache will be able to serve most of the pertaining seed lookups, reducing significantly the off-node communication and the total execution time (Global Read-Only phase). Being motivated by the aforementioned observations, we implement a parallel read localization algorithm to speedup the alignment steps in the following iterations. Given the first set of reads to contigs alignments, each processor assesses in parallel an equal chunk of the alignments. Assuming a read R is aligned to contig c R , the processor sends R to the processor with id (c R mod P ), where P is the number of available processors. We leverage the one-sided communication capabilities of UPC and all the reads are distributed via aggregated, asynchronous messages. As a result of these read redistributions, all the reads that are mapped to a contig c R (which, according to the previous reasoning, are similar) will be sent to the same processor. In the subsequent iterations of the pipeline, this shuffled set of reads is used for enhanced locality and to minimize communication incurred in the alignment steps. An additional side-effect of read localization also benefits the k-mer analysis phase. In the k-mer analysis phase, when a processor receives a bunch of k-mers from remote processors, it updates a locally owned hash table that keeps the individual counts of the received k-mers. In principle, this local hash table update is characterized by low locality, since the received k-mers are uniformly spread out based on a hash function; hence the memory accesses pertaining to the hash table update exhibit little to no cache reuse. However, after read localization we expect (with high probability) almost all the occurrences of the same k-mer to be sent by a remote processor in the same aggregated message. As such, when the receiving processor tries to insert the k-mers in the local counting hash table, most of the updates will result in cache hits and will improve the attained performance. III. SCAFFOLDING The main goal of the scaffolding Algorithm 3 in MetaHip-Mer is to connect together contigs and form scaffolds which are long chains of contigs. The first step of scaffolding comprises of aligning the input reads onto the contigs generated by the iterative algorithm. Then, by leveraging the reads to contigs alignments and the information from paired reads we introduce additional links/edges in the graph of contigs which we call henceforth contig graph. Note that paired reads with large insert sizes can be used to generate long-range links among contigs that could not be found from the k-mer de Bruijn graph. Afterwards, we traverse the updated contig graph and form chains of contigs that constitute the final scaffolds. In the following subsections we give more details regarding the scaffolding submodules. A. Alignment of Reads onto Contigs In this step of the pipeline the goal is to map the original reads onto the final contigs generated from the iterative contig generation. This mapping provides information about the relative ordering and orientation of the contigs. Again here we employ merAligner. B. Contig Link Generation with Distributed Hash Tables The next step is to process the alignments and identify splints, which are single reads that bridge the gap between two neighboring contigs by virtue of aligning to both of them. 
Essentially, if a particular segment of a read aligns to the ends of two different contigs we conclude that these contigs form a splint (see Figure 2(b) for a splint example between contigs 6 and 7). Additionally, by processing paired reads's alignments we identify spans, which are read pairs associated with particular pairs of contigs. For example, consider that the first read of a pair aligns with contig i while the second read of that pair aligns with contig j. It can thus be concluded that the read pair forms a span (see Figure 2(b) for a span example between contigs 9 and 10). Also, we know the insert size of the read library and therefore we can estimate the gap size between contigs i and j. Once splints and spans are created, they can be aggregated to generate links among pairs of contigs. More specifically, if a sufficient number of splints supports a particular distance and mutual orientation between contig k and contig m, we generate a SPLINT link for that pair of contigs. Analogously, if a sufficient number of paired reads's alignments supports a particular span between contig i and contig j we generate a SPAN link for that pair of contigs. Regarding the parallelization of the SPLINT-link generation, first each of the P processors independently processes 1/P of the total read alignments and stores the splints's information locally. Then, a distributed hash table is required, where the keys are pairs of contigs and values are the splint/overlap information. Each processor is accessing the local splints and stores them in the distributed hash table. Here, we again apply aggregated, one-sided asynchronous messages to minimize the number of messages and the synchronization cost (Global Update-Only phase). When all splints are stored in the distributed hash table, each processor iterates over its local buckets to further assess/count the splint entries (Local Reads & Writes phase). The parallel algorithm for the SPAN-link generation is identical to the one for SPLINT-links. C. Contig Traversal with Connected Components Partitioning The splint and span links from the previous step provide essentially the edges in the contig graph (see Figure 2(b) for a contig graph where the vertices are the contigs -green pieces -and the edges are the splint/span links -red pieces). By traversing this contig graph we form sequences of contigs we call scaffolds. The traversal is done by selecting traversal seeds (traversal seeds are contigs) in order of decreasing length; this heuristic tries to "lock" together first long, confident contigs (the classification into long and short contigs relies on a userdefined threshold). There are numerous heuristics involved in the traversal of the contig graph. We call a contig's end extendable if it does not have any competing links (links to multiple contigs's ends projected in similar distance from that end). First, edges between long contigs and extendable ends are prioritized in the traversal. If no such edge exists, then we traverse the edge pointing to the closest extendable contig's end; we estimate the distance between contigs's ends based on the links's gap size information. The contig graph traversal also attempts to resolve repeats. Repeat contigs are typically linked to multiple contigs on both of the endpoints as shown in Figure 2(b), where contig 3 is a repeat contig and is connected to four contigs 1, 2, 4 and 5. Repeat contigs create competing links and hinder further traversal of the graph. 
However, if there are span links that unambiguously "jump over" a repeat contig and connect distant pairs of contigs, then the repeat contig is suspended from the graph, effectively removing competing links and allowing further extensions. For instance, contigs 1 and 2 have a span link that jumps over the repeat contig 3 and as such the latter can be suspended and the repeat can be resolved. The contigs that are classified as suspendable should have length at most equal to the insert-size of the library under consideration. Finally, the suspended contigs will be reconstructed during the gap closing module described in the next subsection. Another metagenome specific rule we introduce in MetaHipMer's contig graph traversal involves contigs that belong in conserved ribosomal genomic regions. Accurate and effective reconstruction of such ribosomal regions is important for downstream metagenome analysis, e.g. for reconstructing phylogenies [22]. Therefore, MetaHipMer tries to recognize such ribosomal contigs by using profile Hidden Markov Models (HMM) and in particular we integrate the HMMER pipeline [23]. HMMER builds HMM models of these ribosomal regions and efficiently identifies if a given contig fits the HMM models; in this case we call such a contig an HMM hit. If a contig of sufficient length is recognized as HMM hit, then we designate both of its ends as extendable even in presence of competing links. With an HMM hit contig as source, we initiate aggressive depth first search traversal and we aim to build paths that contain other contigs with similar average k-mer depths, which are also HMM hits. This approach allows us to reconstruct long pieces of conserved ribosomal regions without sacrificing accuracy. The parallelization of the contig graph traversal is nontrivial due to multiple reasons. First, the traversal is done by selecting traversal seeds in order of decreasing length and this rule is fundamentally sequential. Second, the metagenomicspecific rule described in the previous paragraph relies on depth-first search, which is known to be difficult to parallelize. We overcome these parallelization roadblocks by exploiting the nature of the metagenome's contig graph. In particular, we observe that contigs should form connected components/clusters in the contig graph. These clusters can be processed in parallel and we can thus apply our contig graph traversal algorithm independently on each cluster. For instance, in Figure 2(b) we see that there are three independent clusters of contigs. The first step in order to extract parallelism is to identify the connected components in the contig graph. We implemented a simple variant of the Shiloach-Vishkin [24] algorithm which is trivially parallelized. We further increase the efficiency of our approach by excluding from the contig graph links with multiplicity less than a user specified threshold. We know that such links will be rejected during the graph traversal algorithm as they are considered unreliable. By excluding such links, we decrease the connectivity of the contig graph and we extract more connected components, or equivalently expose more parallelism for the contig graph traversal. After discovering the connected components, we randomly assign them to processors in order to minimize load imbalance. Finally each processor concurrently traverses the assigned connected components to form scaffolds. D. Gap Closing with Load Balancing After the scaffold creation it is possible that there are gaps between pairs of contigs. 
Figure 2(c) shows an example where three out of four generated scaffolds contain unclosed gaps. Therefore, we further process the reads to contigs alignments and locate the reads that are placed into these gaps. In MetaHipMer we adopt the parallel gap closing algorithm of HipMer [13] which has been shown to scale efficiently. The alignments are processed in parallel and projected into the gaps (Global Update-Only phase of hash tables). These gaps are then divided into subsets and each set is processed by a separate processor, in a completely parallel phase. Several methods are available for constructing gap closures [13] and they differ substantially in computational intensity. Given that it is not predicable a priori which method will successfully close a gap, the computational time can vary by orders of magnitude from one closure to the next. To prevent load imbalance in the gap closing phase, the gaps are distributed in a Round Robin fashion across all the available processors. This suffices to prevent most imbalance because it breaks up the gaps from a single scaffold, which tend to require similar costs to close. The outcome of this step constitutes the result of the MetaHipMer assembly pipeline, which are gap closed scaffolds (see Figure 2(e)). IV. RESULTS This section presents experimental results that demonstrate MetaHipMer's efficient scalability to thousands of cores on a distributed memory supercomputer, while producing results comparable in quality to state-of-the-art metagenome assemblers. A. Experimental Datasets MG64: This is a synthetic dataset comprising a mixture of 64 diverse bacteria and arterial microorganisms [25]. It totals 108.7 million paired-end Illumina HiSeq 100-pb reads, for a total size of 24GB. Wetlands: This is massive-scale metagenomics dataset, containing wetlands soil samples that are a time-series across several physical sites from the Twitchell Wetlands in the San Francisco Bay-Delta [26], [27]. It totals 7.5 billion paired-end To the best of our knowledge this is the largest metagenomic soil sample ever collected. MGSim: To conduct a weak scaling performance analysis of MetaHipMer, we developed a tool for generating arbitrarily large and complex metagenome assembly inputs, called MGSim. MGSim samples multiple genomes and utilizes the short-read simulator WGSim [28] to generate reads. The genomes are sampled with weights calculated from a phylogenetic tree, and each sampled genome is assigned a relative abundance drawn form a log-normal distribution. The BB tools (https://sourceforge.net/projects/bbmap/) with default parameters were used for adapter trimming and removing typical contaminants from all the datasets. For the evaluation we used metaQUAST 4.3 [29] with default parameters. B. Quality Assessment Quality of assemblies are usually assessed on datasets with known reference genomes, such as the MG64 dataset. We therefore conduct quality experiments using MG64 and compare MetaHipMer to several other metagenome assemblers, including MetaSPAdes, Megahit and Ray Meta. We also include a non-metagenome assembler, HipMer [13] (targeted at single genomes) to demonstrate how an assembler without algorithms specifically tailored for metagenomes can underperform on the same dataset. These runs were all carried out on an 80core Intel ® Xeon ® E7-8870 2.1GHz server, with 500GB of memory. This platform is used because most other assemblers cannot use distributed memory systems and require a large shared-memory node. 
When determining quality, there is a trade-off between contiguity (the length of the assembly), coverage (how much of the reference was assembled), and correctness. We use several metrics, determined by running the metaQUAST 4.3 [29] with the default parameters. The results are shown in Table I. Contiguity is captured by the length metric, which shows how many base pairs of the assembly are contained in contigs of lengths ≥ 5000, ≥ 25000 and ≥ 50000 base pairs. As can be seen from the table, MetaHipMer has the second best contiguity, very close to MetaSPAdes. Coverage is captured by the genome fraction, where all the metagenome assemblers score approximately 94 to 95%, except for Ray Meta. Broken down into the 64 individual genomes, the genome fraction is over 80% for all but one, which is around 4% (for all assemblers). This latter genome is very poorly represented in the sample, and so coverage is poor. Finally, correctness is indicated by the misassemblies metric, which shows the number of misassembled scaffolds in the final assembly. As can be seen from the table, MetaHipMer has the lowest misassemblies count of all the metagenome assemblers (excluding HipMer). Appendix VIII provides more detailed contiguity/misassembly comparison between MetaHipMer and MetaSPAdes on MG64. In Table I we also show a metric called rRNA count and it is the number of ribosomal RNA structures found in the assembled genomes. This metric is of particular importance to biologists interested in classifying and identifying the organisms that are being assembled. MetaHipMer finds the most rRNAs, followed closely by Ray Meta. The quality results for HipMer, the single genome assembler, clearly illustrate why we need assemblers built specifically for metagenomes. Although HipMer shows low error rates, it does so at the cost of contiguity (the length over 50k is less than half of MetaHip-Mer), coverage (85% compared to 94% with MetaHipMer) and rRNA (almost 3 times fewer found). C. Performance Results To measure MetaHipMer's parallel scalability, we utilize the NERSC's Cori Cray XC40 supercomputer, consisting of 2388 compute nodes, each containing two 16-core Intel ® Xeon ® E5-2698 2.3GHz processors, for a total of 32 cores per node, with 128GB per node. The nodes are connected with a Cray Aries network with Dragonfly topology with 5.625 TB/s total bandwidth. We built the software using Berkeley UPC v 2.26.0 with Intel ® 17.0.2.174 backend compilers. Impact of read localization optimization: Figure 3 presents the impact of our read localization optimization on two of the pipeline stages: k-mer analysis and alignment, when assembling the MG64 dataset on Cori. The improvement is especially noticeable at lower concurrencies for alignment, with a 2.2× speedup at 16 nodes. In general this optimization improves alignment more than k-mer analysis; in regard to the alignment phase, this optimization reduces the off-node communication, while regarding k-mer analysis this optimization improves cache reuse on a single node. Strong-scaling: To demonstrate the strong-scaling efficiency, we ran MetaHipMer on a subset of the Wetlands dataset, consisting of three lanes of reads (about 14% of the total). To assemble the full dataset requires at least 512 nodes, and so it is not suitable for strong-scaling studies. Figure 4 shows strong scaling efficiency of 61% from 32 (the minimum required due to memory constraints) to 1024 nodes. The scaling is near perfect until 512 nodes. 
Most of the computational time is taken by the iterative contig generation phase. The breakdown of MetaHipMer's stages is shown in Figure 5. At smaller concurrencies, most of the time is taken up by the alignment phase (about 50%), but at higher scale, increasing load imbalance in the local assembly stage results in larger overhead and reduces scalability. As previously discussed, we implement dynamic work-stealing for local assembly; this improves load balance from about 0.33 to 0.55 at 1024 cores, but that is still low enough to cause a gradual drop in overall scaling. In future work we will address this imbalance by exploiting characteristics of the local contigs, such as the number of reads that map to a contig. The only other metagenome assembler (that we are aware of) that scales on distributed memory systems is Ray Meta [5]. Ray Meta was too slow to run on the 3-lane Wetlands dataset, so for comparison we use the smaller MG64 dataset. Results show that Ray Meta scales poorly from 16 to 64 nodes, running in 3,407 secs and 2,931 secs at 16 and 64 nodes respectively (29% efficiency). By contrast, MetaHipMer takes 512 secs at 16 nodes and 180 secs at 64 nodes (71% efficiency). At 64 nodes, MetaHipMer is 16× faster than Ray Meta. Weak-scaling: We examine MetaHipMer's weak-scaling efficiency using four datasets generated with MGSim, of increasing size and complexity. The datasets consist of 5, 10, 20 and 40 genomic taxa that generate 125, 250, 500 million and 1 billion reads, and are run on 128, 256, 512 and 1024 nodes, respectively. Table II summarizes the resulting weak-scaling behavior across these node counts. Grand challenge: While our strong-scaling results examined performance on a three-lane subset of the Wetlands data, MetaHipMer enables, for the first time, a full assembly of the 2.6 TByte, 21-lane sample. This took 3 hours and 25 minutes on 512 nodes (16,384 cores) of Cori. Assembling datasets of this size was previously considered intractable; we anticipate that this capability will open up a new era in metagenomic analysis. The benefits of assembling the full dataset over a subset become apparent when comparing the assembly to that from the three-lane dataset. The full Wetlands assembly is 41.5 Gbp (giga base pairs) in length, which is 18× larger than the 2.3 Gbp assembly length for the three-lane subset. Furthermore, the coverage is much improved, with 42% of the full set of reads mapping back to the full assembly, compared to only 7.6% mapping back to the subset assembly. V. RELATED WORK As the comparison of assembly results between metagenome-focused assemblers is beyond the scope of this paper, and has been covered in recent work [30]-[32], we restricted our performance comparison to the Ray Meta [5] metagenome assembler, which scales on distributed memory systems. Ray Meta is a parallel de novo metagenome assembler based on de Bruijn graphs that utilizes MPI and exhibits strong scaling. One drawback of Ray Meta is the lack of parallel I/O support. Ray Meta performs best on those organisms that are highly covered within a sample and generally has lower contiguity than MetaHipMer. The results in Section IV showed limited Ray Meta parallel efficiency for our test problem. MetaSPAdes [10] is a single-node, de Bruijn graph-based metagenome assembler with excellent quality metrics that performs well on small to medium sized datasets.
By default MetaSPAdes includes a read correction stage, which we disabled to better match the full workflow of comparable assemblers, including MetaHipMer, when comparing performance, since in most workflows read correction can be treated as a pre-processing step before assembly. MetaSPAdes is limited to problems that fit in the memory of a single node, and thus cannot assemble grand-challenge datasets. Megahit [12] is a single-node metagenome assembler based on de Bruijn graphs that is fast and does an excellent job of assembling low abundance genomes within a metagenome. Megahit can assemble low and medium sized datasets and is optimized for GPU processing and low memory consumption. A couple of distributed memory algorithms have been developed recently that tackle only parts of the metagenome assembly pipeline. Kmerind [33] is a parallel library for k-mer indexing and has been shown to scale efficiently on distributed memory systems. Also, Flick et al. [34] introduced a parallel connectivity algorithm for de Bruijn graphs in metagenomic applications, and the results illustrate very good strong scaling at large concurrencies. However, neither of the two aforementioned algorithms constitutes a complete end-to-end pipeline tailored for de novo metagenome assembly. VI. CONCLUSIONS AND FUTURE WORK Metagenomic analysis promises to revolutionize numerous fields of study including biomedicine and environmental sciences. The de novo assembly of these microbial communities is one of the most challenging problems in bioinformatics due to the computational complexity and irregularity of these huge data sets. Unlike state-of-the-art metagenome assemblers that are mostly limited to single-node memory footprints and processing capability, we present MetaHipMer, the first end-to-end, massively scalable, high-quality metagenome assembly pipeline. MetaHipMer reduces computing runtimes by orders of magnitude and enables a new era of metagenome assemblies that were previously considered intractable. MetaHipMer's efficient scaling required numerous algorithmic innovations to develop its iterative high-quality approach, coupled with novel parallel computing optimizations. The parallel scalability of MetaHipMer is built on our distributed-memory implementation of irregular data structures, including histograms, hash tables and graphs, which leverage one-sided communication and remote atomics using UPC's global address space capabilities. Additionally, we employed a variety of techniques to maximize performance, including locality-aware hashing, software caching, read localization, partitioning via connected components, and dynamic load balancing. To evaluate the efficacy of MetaHipMer, we first examined assembly quality metrics across leading metagenome assemblers and demonstrated comparable results on the frequently studied MG64 synthetic data set. We then explored scaling behavior on the Cori Haswell system and showed efficient strong-scaling behavior on up to 1024 nodes (32,768 cores) using a subset (3 lanes) of the Twitchell Wetlands dataset. Next we successfully validated our metagenome assembler's parallel efficiency in a weak scaling regime, by developing MGSim and generating appropriate simulated data sets. Finally, to highlight the new capability of MetaHipMer we conducted a full assembly of the 2.6 TByte Twitchell Wetlands environmental sample - to the best of our knowledge, the largest, high-quality de novo metagenome assembly completed to date.
Currently, metagenomic studies are conducted overwhelmingly using high-throughput Illumina short-read data. As the cost of third-generation sequencing technologies, such as those offered by Pacific Biosciences and Oxford Nanopore, continues to come down, metagenomic studies might also benefit from longer reads. However, third-generation long reads have significantly higher error rates and require more computational power to be assembled, which will increase the importance of parallelism in metagenome assemblers. The distributed data structures and techniques presented in this paper are expected to be instrumental in the large-scale parallel assembly of future datasets, regardless of the prevailing sequencing technology.

C. Installation
On NERSC's Cori Cray XC40 system:
  HIPMER_ENV_SCRIPT=.cori_deploy/env.sh ./bootstrap_hipmer_env.sh install
The MetaHipMer distribution has several scripts to support building on multiple platforms, including Linux and Mac OS X.

D. Experiment workflow
All datasets and config files are already set up on Cori's scratch filesystem (please contact the authors for the paths on Cori). The distribution provides detailed examples on how to set up the required data sets and config files. Exemplary execution script of MetaHipMer on 32 nodes of Cori:
  #!/bin/bash
  set -e
  export NODES=32
  export CORES_PER_NODE=32
  export THREADS=$((CORES_PER_NODE * NODES))
  export CACHED_IO=1
  export UPC_SHARED_HEAP_SIZE=2000
  export HIPMER_INSTALL=$SCRATCH/hipmer-install-cori/
  export PATH=$PATH:$HIPMER_INSTALL
  export HIPMER_DATA_DIR=$SCRATCH/hipmer_metagenome_data
  export
All other assemblers evaluated in this paper were run with their default/suggested parameters.

E. Evaluation and expected result
Performance can be assessed from the automatically generated log output files; runtime is reported in seconds. The accuracy of the resulting assemblies is evaluated with metaQUAST 4.3 (http://quast.sourceforge.net/metaquast) with default parameters.

F. Experiment customization
The MetaHipMer distribution supports various exemplary experimental setups with customizable config files, run scripts for various platforms, and sample read data sets.

VIII. METAHIPMER AND METASPADES NGA50 COMPARISON ON MG64
The results presented in Table I show the metrics for the whole MG64 assembly. This obscures the variation across the 64 genomes that comprise the dataset. To get a better idea of this variation, Figure 6 presents the NGA50 metric [29] for each individual genome. The NGA50 metric is designed to capture contiguity in the presence of errors, and so can be thought of as a compact measure of both length and misassemblies. We can see from the figure that MetaHipMer and MetaSPAdes have very similar NGA50 for almost all genomes, except for two outliers. In these cases, there are so few contigs in the genomes that a single misassembly can change the NGA50 dramatically.
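As a side note on NGA50: it is the alignment-aware analogue of the classic N50 statistic, computed after breaking contigs at misassembly breakpoints and aligning them to the reference. A minimal sketch of the plain N50 computation, which conveys the underlying length-weighted idea (metaQUAST performs the full alignment-aware NGA50 calculation; this is illustrative only):

```python
def n50(contig_lengths):
    """Smallest length L such that contigs of length >= L cover at least
    half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy example: total = 100 kb, half is reached inside the 30 kb contig.
print(n50([40_000, 30_000, 20_000, 10_000]))  # -> 30000
```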
2018-09-19T04:53:39.000Z
2018-09-19T00:00:00.000
{ "year": 2018, "sha1": "66509edf226cf1d040604508706e685845078a96", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1809.07014", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "52be27b43336f1b241ba2e5b21f240974eea56d5", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Biology" ] }
11026700
pes2o/s2orc
v3-fos-license
An Efficient Two-Stage Sparse Representation Method There are a large number of methods for solving under-determined linear inverse problems. Many of them have very high time complexity for large datasets. We propose a new method called Two-Stage Sparse Representation (TSSR) to tackle this problem. We decompose the representing space of signals into two parts, the measurement dictionary and the sparsifying basis. The dictionary is designed to approximate a sub-Gaussian distribution to exploit its concentration property. We apply sparse coding to the signals on the dictionary in the first stage, and obtain the training and testing coefficients respectively. Then we design the basis to approach an identity matrix in the second stage, to acquire the Restricted Isometry Property (RIP) and the universality property. The testing coefficients are encoded over the basis and the final representing coefficients are obtained. We verify that the projection of the testing coefficients onto the basis is a good approximation of the projection of the signal onto the representing space. Since the projection is conducted on a much sparser space, the runtime is greatly reduced. For concrete realization, we provide an instance of the proposed TSSR. Experiments on four biometrics databases show that TSSR is effective and efficient compared with several classical methods for solving linear inverse problems. Introduction Linear inverse problems arise throughout engineering and the mathematical sciences. In most applications, these problems are ill-conditioned or under-determined, so we must apply additional regularizing constraints in order to obtain interesting solutions. Most modern approaches use the sparsity of the solution as a regularizer [1,2]. In this paper we first give a brief overview of these algorithms for sparse approximation. Then we propose a new two-stage sparse representation method for solving linear inverse problems. Roughly, there are four classes of approaches to solving linear inverse problems: greedy, convex relaxation, proximal and combinatorial methods. Orthogonal Matching Pursuit (OMP) [3] is one of the important greedy algorithms. OMP finds one atom at a time to approximate the solution of the ℓ0 problem (P0): min_x ||x||_0 subject to ||Φx − y||_2 ≤ ε (Eqn. (1)), where y is a target signal and ε > 0 is some error tolerance. We refer to the vector x as the representing coefficients of y with respect to the dictionary Φ. We say x is K-sparse when ||x||_0 ≤ K. The dictionary Φ ∈ R^{m×N} is a real matrix whose columns have unit Euclidean norm: ||φ_j||_2 = 1 for j = 1, 2, . . . , N. OMP iteratively selects the atom that best reduces the current residual and updates the representing coefficients accordingly. The accuracy is limited since OMP does not consider that multiple correlated atoms might need to be selected jointly. Other algorithms, including StOMP [4], ROMP [5], etc., use the ℓ1-norm to replace the NP-hard ℓ0-norm minimization, (P1): min_x ||x||_1 subject to ||Φx − y||_2 ≤ ε (Eqn. (2)). They work well when x is very sparse, but will deviate from the ideal solution of Eqn. (1) when the number of non-zero entries in x increases, as illustrated in [6]. CoSaMP [7], as a combinatorial method for solving (P1), is a widely used method which avoids the pure greedy nature of OMP, which can never remove an atom once it has been selected. It provides rigorous bounds on the runtime that are much better than the available results for interior-point methods [8], e.g., L1-Magic [9]. The uniformity property of CoSaMP shows it can recover all signals given a fixed sampling matrix, and the stability property guarantees its success when the samples are contaminated with noise.
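To make the greedy atom selection described above concrete, here is a minimal OMP sketch in Python/NumPy (illustrative only, not the implementation used in the experiments): at each step the atom most correlated with the residual joins the active set, and the coefficients on that set are re-fit by least squares.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: build a K-sparse x with Phi @ x ~ y.
    Assumes the columns of Phi are unit-norm."""
    m, N = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(N)
    coef = np.zeros(0)
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the active set by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```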
The convex relaxation methods, as another branch, relax the (P0) form into (P1). Basis Pursuit denoising (BPDN) [10] solves a regularization problem with a trade-off between having a small residual and keeping the coefficients simple in the ℓ1 sense. Basis Pursuit (BP) series methods are far more complicated than OMP series methods, because they obtain the global solution of the optimization problem in each iteration. L1-Magic [9] is a collection of MATLAB routines based on standard interior-point methods. One class of algorithms within it reformulates the linear inverse problem as a second-order cone program and solves it with a log-barrier method, which uses the conjugate gradient method as its inner core. Least-angle regression (LARS) [11], as an active set method, performs model selection to find the optimal point iteratively. It produces a full piecewise linear solution path, which is useful in cross-validation or similar attempts to tune the model. Among the proximal methods, the Iterative Shrinkage-Thresholding Algorithm (ISTA) [12] solves the regularized variant of the problem (P1), min_x (1/2)||Φx − y||_2^2 + λ||x||_1, where λ is a regularization parameter. Roughly speaking, each iteration comprises a multiplication by Φ and its adjoint, along with a scalar shrinkage step on the obtained result. A short survey on the applications of the ISTA series can be found in [13]. FISTA [14], a faster version of ISTA, was proposed recently. Still, FISTA needs many iterations to solve the inverse problem if λ is small, which is required for a good approximation of Eqn. (2). IHT [16] is based on the surrogate objective from [17], C_S(x, z) = ||y − Φx||_2^2 − ||Φx − Φz||_2^2 + ||x − z||_2^2 + λ||x||_0. For ||Φ||_2 < 1, this is a majorisation of the ℓ0-regularized sparse coding objective function. Using a fixed threshold, the authors show that IHT converges to a local minimum. It also needs many iterations, and the per-iteration cost is about the same as Matching Pursuit. GISA [18] cleverly extended the soft-thresholding operator to the ℓp-norm regularized sparse coding problem. Instead of a fixed threshold (e.g., 0.5 in IHT), the authors derived a non-linear equation for the threshold τ; using this threshold, GST can always find the correct solution to the ℓp-minimization problem min_x (1/2)(x − y)^2 + λ|x|^p. The authors show one iteration of GISA is sufficient for image deconvolution. Hence it is very efficient. To reduce the iterations of a proximal approach, the Linearized Bregman (LB) algorithm [19] was proposed; it is equivalent to gradient descent applied to a certain dual formulation. The analysis shows that LB has the exact regularization property; namely, it converges to an exact solution of (P1) whenever its smoothing parameter α is greater than a certain value. The LB algorithm returns the solution to (P1) by solving the model min_x ||x||_1 + (1/(2α))||x||_2^2 subject to Φx = y. LB replaces the quadratic penalty in (P1) with a linear term and uses a mixture of the ℓ1 and ℓ2 norms for the regularization. This key modification produces a strictly convex, differentiable objective function. The LB method requires O(1/ε) iterations to obtain an ε-optimal solution, while the Accelerated Linearized Bregman Method (ALB) [23] reduces the iteration complexity to O(1/√ε) while requiring almost the same computational effort on each iteration. ALB converges much faster than LB on three types of sensing matrices generated with the randn(m, n) function [23]: the standard Gaussian matrix, the normalized Gaussian matrix and the Bernoulli +1/−1 matrix. The other merit is that the relative errors obtained by ALB as a function of the iteration number are much smaller than those of LB.
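As an illustration of the proximal iteration at the heart of ISTA/FISTA described above, here is a minimal ISTA sketch in Python/NumPy (a generic sketch, not the exact code used in this work): each step takes a gradient step on the data-fidelity term and then applies the soft-thresholding (shrinkage) operator.

```python
import numpy as np

def soft_threshold(v, thresh):
    """Element-wise shrinkage operator used by ISTA/FISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista(Phi, y, lam, n_iter=500):
    """Solve min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1 by iterative shrinkage."""
    # Step size 1/L with L an upper bound on the Lipschitz constant of the gradient.
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```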
How to represent data quickly has been an open problem in both academia and industry. Many algorithms similar to the representative approaches above adopt a one-stage coding technique, which encodes the original samples in a large projected space. Among them, many recast (P1) as a convex program with quadratic constraints; the computational cost for practical applications can be prohibitively high for large-scale problems. Honglak Lee et al. [20], however, proposed an efficient sparse coding algorithm that iteratively solves two convex optimization problems: an ℓ1-regularized least squares problem and an ℓ2-constrained least squares problem. In [21], an online tracking algorithm with two-stage optimization was proposed to jointly minimize the target reconstruction error and maximize the discriminative power by selecting a sparse set of features. It is very effective in handling a number of challenging sequences. TSR [22] proposed a robust and fast sparse representation method based on a divide-and-conquer strategy. It divides the recognition procedure into an outlier detection stage and a recognition stage. FSR [24] first uses the KSVD method [25] to construct a dictionary for sparse representation, then applies OMP [3] to the dictionary to generate coefficients, and forms a reduced dictionary from the coefficients for the sparse coding. It runs much faster than the L1-Magic solver, but the KSVD method assumes an overcomplete dictionary and suffers from a trivial-solution problem when the dimension of the signal is larger than that of the dictionary. As signals can be modelled by a small set of atoms in a dictionary, FSRM [26] exploits this property and shows that the ℓ1-norm minimization problem can be reduced from a large and dense linear system to a small and sparse one. It uses CoSaMP to generate sparse coefficients for the subsequent coding. Experimental results with image recognition indicate FSRM achieves a double-digit gain in speed with comparable accuracy compared with the L1-Magic solver, and avoids the trivial-solution problem that FSR has. The above two methods share the same two-stage structure. The core of both is to design a projected sparsifying basis in the first stage to represent the signals, and to do sparse coding in the second stage. Motivated by this, we propose a Two-Stage Sparse Representation (TSSR) method, designing the basis for high speed. TSSR makes the sparse approximation computationally tractable without sacrificing stable convergence. It represents the data in a smaller space for discriminative representation and reduces the runtime dramatically. The remainder of the paper is organized as follows. Section 2 presents the derivation of TSSR for verification and the algorithm flowchart, as well as the complexity analysis. Section 3 gives an example to illustrate the method and shows the experimental results with discussion, and finally Section 4 offers the conclusion. Two-Stage Sparse Representation We first describe the derivation of TSSR, which generates a new form of (P1), then give an example to show the performance of the method. The Derivation of TSSR The signal y ∈ R^m can be approximated in two ways: 1. Φ is viewed as a dictionary, Φ ∈ R^{m×N}. x ∈ R^N is then the generated sparse coefficient vector over Φ, and y = Φx + e_1 (Eqn. (7)), where Φ = [φ_1, . . . , φ_i, . . . , φ_N]. Φ is supposed to satisfy the Restricted Isometry Property (RIP) of order 2K [27], and e_1 is the residual. The illustration of Eqn. (7) is shown in Figure 1(a). 2.
We can generate a new space ΨΩ. Here Ψ is viewed as a measurement dictionary, and Ω = [ω_1, . . . , ω_i, . . . , ω_N] represents a sparsifying basis or dictionary in which ω_i denotes the i-th sparse vector. Then we have another description of the signal y, over ΨΩ rather than over Φ as in Eqn. (7). The mechanism of TSSR is illustrated in Figure 1(b). To minimize the residual of the signal we write y = ΨΩx + e_2 (Eqn. (8)). Here Ψ ∈ R^{m×N}, Ω ∈ R^{N×N}, and e_2 is supposed to obey a Gaussian distribution in the new space ΨΩ for sparse reconstruction. Comparing Eqn. (7) and Eqn. (8), we can obtain Φx ≈ ΨΩx (Eqn. (9)). After measuring the signal y, we can describe its intermediate representation z over Ψ by y = Ψz + e_3 (Eqn. (10)). By introducing the residual signals into Eqn. (10) and integrating Eqn. (9), we can obtain Ψz = ΨΩx + e_4 (Eqn. (11)), where e_4 is the same residual as e_1 in Eqn. (7) and ||e_4||_2 ≤ ε for a small constant ε. With the bi-Lipschitz property [28], we can derive that z stays close to Ωx for any {Ωx} ∈ R^N. In fact, assume x is K-sparse. From Eqn. (11), letting s = z − Ωx, which is also a sparse vector, one obtains the bound ||Ψs||_2 ≤ (K + 1)ε (Eqn. (13)). Recall that Ψ satisfies the RIP of order K with constant δ_K < 1 if (1 − δ_K)||s||_2^2 ≤ ||Ψs||_2^2 ≤ (1 + δ_K)||s||_2^2 (Eqn. (14)). We can derive the upper bound for s using Eqn. (13) and Eqn. (14): (1 − δ_K)||s||_2^2 ≤ ||Ψs||_2^2 ≤ (K + 1)^2 ε^2 (Eqn. (15)), and from Eqn. (15) we have ||s||_2 = ||z − Ωx||_2 ≤ (K + 1)ε / √(1 − δ_K). Then the projection of z onto Ω, as shown in Figure 1(b), is an approximation of the projection of y onto Ψ. So the original problem (P1) is modified into min_x ||x||_1 subject to ||ΨΩx − Ψz||_2 ≤ ε_1 (Eqn. (17)). Since Ψ is a constant, which has little effect on the solution of the objective function, Eqn. (17) can then be approximated by min_x ||x||_1 subject to ||Ωx − z||_2 ≤ ε_2 (Eqn. (18)), where ε_1 and ε_2 are small, different constants. Since the projection at the second stage is conducted on the much sparser space Ω rather than on Φ as in the first stage, the runtime is greatly reduced. Also, the derivation above verifies the stability of TSSR, which satisfies the RIP condition under certain assumptions. The space Φ is approximated by the space ΨΩ, as Eqn. (8) indicates, and Ψ and Ω are designed as follows. We can design Ψ to obey or nearly obey a Gaussian distribution with the training data through matrix computation. Since a Gaussian distribution with bounded support is sub-Gaussian [28], we can exploit the concentration property, which only requires sub-Gaussianity. Any distribution satisfying a concentration inequality [29] will provide both the canonical RIP and universality with respect to a certain sparsifying basis Ω. Here Ω is represented as an N × N identity matrix and x ∈ Σ_K; each K-dimensional subspace from Σ_K(Ω) is mapped to a unique K-dimensional hyperplane in R^N. Once K has a sufficient amount of independence, the concentration of measure tends to be sub-Gaussian in nature. Then we can acquire signals Ωx that are sparse or compressible in practice. So Ω is designed to approach an identity matrix after the first-stage implementation, in order to have the property described above. By choosing K-dimensional subspaces spanned by sets of K columns of Ψ, Theorem 5.2 of [39] establishes the RIP for ΨΩ for each of the distributions. The transformation from the first stage to the second one can be motivated by the concentration of measure principle [29] and bi-Lipschitz theory [28]. Further derivations are found in the Appendix. The Algorithm Flowchart of TSSR The flowchart of TSSR is described as follows. 1. Input: a test signal y ∈ R^m, training signals Y ∈ R^{m×N}, a sparsity level K, and a measurement matrix Ψ ∈ R^{m×N} with each column normalized. 2. First stage: sparse-code the training signals Y over Ψ to form the sparsifying basis Ω, and sparse-code the test signal y over Ψ to obtain the intermediate coefficients z. 3. Second stage: solve Eqn. (18) over Ω to obtain the final representing coefficients x. 4. Output: the representing coefficients x of y. Since Ω is contained in a much sparser space than ΨΩ, the implementation speed increases dramatically, while the solution x approaches the original one solving (P1).
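A minimal sketch of the two-stage flow just outlined, in Python/NumPy (illustrative only; the concrete solvers used in the experiments are ALB and CoSaMP, whereas this sketch plugs in a generic sparse coder such as the OMP routine sketched earlier):

```python
import numpy as np

def tssr(Y_train, y_test, K, sparse_coder):
    """Two-stage sparse representation sketch.
    Y_train: m x N training signals used to form Psi (columns normalized).
    y_test:  test signal of length m.
    sparse_coder(Phi, y, K) -> K-sparse coefficient vector."""
    Psi = Y_train / np.linalg.norm(Y_train, axis=0, keepdims=True)
    # Stage 1: code every training sample over Psi; the codes form Omega,
    # which is expected to be close to an identity-like sparse matrix.
    Omega = np.column_stack([sparse_coder(Psi, Psi[:, j], K)
                             for j in range(Psi.shape[1])])
    # Stage 1 (test): intermediate coefficients z of the test signal over Psi.
    z = sparse_coder(Psi, y_test, K)
    # Stage 2: final coefficients x of z over the much sparser basis Omega.
    x = sparse_coder(Omega, z, K)
    return x
```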
Complexity Analysis We give a brief comparison of the complexity of solving Eqn. (2) and Eqn. (18). The computational cost of the former hinges on Φ representing y, while that of the latter lies in Ω representing z. Since Φ is contained in the space generated by the original data, the implementation over it depends entirely on the coding ability. In contrast, Ω can be designed to approach an identity matrix, or a sparse matrix with its maximum values on the diagonal and a small number of minor values off the diagonal. When the coding task is done with the same optimization method, the time to take one sweep over the columns of Φ and of Ω therefore differs considerably. For instance, if we use CoSaMP [7] for coding, we need O(m × N) and nearly O(N) flops for Φ and Ω, respectively. As a result, the recognition speed of the proposed TSSR is much higher than that of many one-stage methods. To demonstrate the effectiveness of the method, we give an application of it in the next section. An Instance of TSSR An instance of TSSR is shown in the following. In the first stage, we adopt ALB [23] to generate sparse coefficients of the samples over Ψ; ALB has the merits of very small relative errors and achieves the desired convergence in a small number of iterations. Ψ can be formed from the normalized training dataset Y, as shown in Figure 1, and we can make Ψ obey or nearly obey a Gaussian distribution. The sparse coefficients (features) Ω correspond to the training data Y and form a square matrix (dictionary), in which each column is composed of the representation of one sample. With sparse coding by ALB for different datasets, the generated Ω is similar to an identity matrix, or a sparse matrix that has its maximum values on the diagonal and relatively small values off the diagonal. Then we use CoSaMP [7] to acquire the sparse coefficients z of the test data y over Ψ and then represent z over Ω. The uniformity property of CoSaMP shows it can recover all signals given a fixed sampling matrix, and the stability property guarantees its success in solving problem (P1) when the samples are contaminated with noise. CoSaMP performs signal estimation and residual update, and then generates K (sparsity level) non-zero coefficients for z [27]. As Section 2.1 shows, the problem (P1) then becomes searching for the sparsest solution over the basis Ω, as Eqn. (18) shows. For classification we use the sparse representation classifier (SRC) [30], which is known to have good robustness against signal corruption and noise. SRC minimizes the residuals between the test sample and the training samples of the different classes, and finds the label of the test sample that corresponds to the minimum residual. Note that the TSSR structure can accommodate other sparse solvers as well as different measurement matrices. We present experimental results on real data sets to demonstrate the efficiency and effectiveness of the proposed algorithm. All the experiments were carried out using MATLAB on a 3.0GHz machine with 2G RAM. The time to classify one image is averaged over 10 runs. The bold values indicate the best performance under the specific condition. The parameter α adopted in ALB depends on the data [35], but a typical value is 1 to 10 times the estimate of ||x_true||_∞. Here we assume that an observed sample belongs to one certain class and can be well represented using samples from the same class. Face Recognition 3.2.1.
PIE Database The CMU PIE database http://vasc.ri.cmu.edu/idb/html/face/ contains 68 human subjects with 41,368 face images as a whole. We choose a subset of it for our experiments, and the recognition results are shown in Table 1. As shown in Table 1, TSSR is the fastest of all the methods. The other methods except Homotopy need more than ten times longer than TSSR in all cases, and FISTA needs as much as 230 times longer to run. TSSR is also more accurate than Homotopy in all cases. Homotopy runs quickly because it iteratively adds or removes nonzero representing coefficients one at a time, and it is clearly more efficient when the signal is very sparse. The time ratio of L1-Magic, which achieves the highest accuracy in all cases, to TSSR grows rapidly as the number of training samples becomes larger, from 12.5 to 140. This may be because the least squares method within it needs much more time when the number of samples increases. AR Database The AR database [32] we choose contains 2600 color images corresponding to 100 people's faces (50 men and 50 women). Images feature frontal view faces with different facial expressions, illumination conditions, and occlusions (sun glasses and scarf). Each person participated in two sessions, separated by two weeks. The size of each image is 120 × 165 pixels, reduced to 40 × 55, i.e., a 2200-dimensional vector. Similarly, for AR, we randomly select 1, 3, and 7 samples for training and the rest for testing, i.e., we use cases 1-13, 3-11 and 7-7. The results are shown in Table 2. TSSR is the fastest of all the methods, as shown in Table 2. For all the cases, ALB and FISTA can both obtain quite high accuracy, but at high cost. For example, ALB (1.5771 s) needs about 1120 times longer to run than TSSR (0.0014 s) does for case 3-11. Both ALB and FISTA have high accuracy; this may be because ALB has a small relative error, while FISTA is a proximal method with good convergence properties. On the other hand, the accuracy of TSSR on the AR database is overall worse than the others; this may be because the constructed sparse basis Ω cannot express the spatial information of the images very well. Palmprint Recognition The PolyU palmprint database [33] contains 386 palms and each palm has about 20 samples, collected in two sessions separated by two months. The size of each image is 384 × 284 pixels, reduced to dimension 1131. All the data are normalized. For PolyU palmprint, we use cases 1-10, 3-10, and 6-10. The results are shown in Table 3 (recognition accuracy and speed comparison using the PolyU database). We can see in Table 3 that TSSR is superior to the other methods in speed. Homotopy and CoSaMP have comparable speed, and run faster than the rest of the methods except TSSR. For CoSaMP, the RIP ensures that the least-squares problems encountered are always well conditioned, so each iteration can be carried out efficiently. From all the experiments with the three databases, the results indicate TSSR is effective and efficient. We note that the sparsity level K should be carefully chosen when TSSR is implemented. In the following section, we will consider how to pick a good K for a dataset. Choosing Sparsity Level Before indicating how we choose K, we first describe the relation between sparsity and mutual coherence [36]. For a dictionary Ψ ∈ R^{m×N} with unit-norm columns, the mutual coherence M = max_{i≠j} |⟨ψ_i, ψ_j⟩| is a measure of how similar the columns in Ψ are to each other. Since every off-diagonal entry of the Gram matrix Ψ^T Ψ is at most M in magnitude, this leads to the condition (K − 1)M < 1, i.e., K < 1 + 1/M (Eqn. (20)).
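A minimal sketch of this computation (purely illustrative; the function names are ours, not from the paper): with unit-norm columns, M is the largest off-diagonal magnitude of the Gram matrix, and the sparsity bound follows directly.

```python
import numpy as np

def mutual_coherence(Psi):
    """Largest absolute inner product between distinct unit-norm columns."""
    Psi = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    G = np.abs(Psi.T @ Psi)
    np.fill_diagonal(G, 0.0)
    return G.max()

def max_sparsity(Psi):
    """Sparsity bound K < 1 + 1/M implied by the coherence condition (20)."""
    return 1.0 + 1.0 / mutual_coherence(Psi)
```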
If M is large (close to 1, since 0 < M ≤ 1), it implies the atoms are highly correlated, which will lead to poor performance for sparse representation. Geometrically this implies we want the atoms to be as orthogonal to each other as feasible, i.e., to form a Grassmannian frame in the sense of Benedetto [37]. The Extended Yale B database [31] consists of 2432 grayscale images of 38 subjects under 9 poses and 64 illumination conditions. We choose the frontal pose and use all the images under different illumination, thus obtaining 64 images for each person. Each image is manually aligned to 32 × 32 according to the eye positions, with 256 gray levels per pixel. So each image can be represented by a 1024-dimensional vector in image space. No further preprocessing is done. For the (132 × 1209) dictionary Ψ, which we use in all the following experiments, the sparsity level is K ≤ 13.2 as prescribed by Eqn. (20). To verify this method of estimating K, we begin our experiments with K set to 5, 10, 13, 18, and 20 and study the various methods' performance. We compare TSSR with FSRM and FSR [24] to evaluate their performance, since they are all algorithms with a two-stage structure that use a sparsity level K to represent coefficients. For FSR, the KSVD algorithm [25] within it is constrained by the dimension of the training vectors, and a trivial solution would occur if the dimension of the training vector is larger than the size of the dictionary. So we performed all the experiments on the Extended Yale B database, using half of the samples for training and the rest for testing, as FSR did [24]. The speed comparison between FSR and TSSR and the accuracy of all the algorithms under different K can be seen in Table 4 and Figure 2. We find the following: 1. TSSR is more accurate than the other two methods. Interestingly, for the methods themselves, there is a trade-off between recognition accuracy and speed. Each method obtains its highest accuracy at about the predetermined value of K, i.e., 13.2. This may be due to the precondition that the normalization of the original data satisfies the Gaussian distribution requirement, so that Eqn. (20) does its work. 2. TSSR is the fastest of all the methods in most cases. As K increases, the runtime ratio of FSR to TSSR becomes larger, growing from 1 (K = 5) to 61 (K = 20). FSR needs a little less time than TSSR when K = 5, but its accuracy there is much lower than the others. 3. FSRM runs faster than FSR in most cases. This is because the KSVD adopted in FSR needs much time to generate coefficients for the second coding, while FSRM feeds the adopted L1-Magic with a much smaller input set in a two-stage process. Meanwhile, TSSR needs almost the same time for every K, similar to FSRM. The reason is that TSSR and FSRM have a similar two-stage structure and both encode coefficients, generated in the first stage, that are already sufficiently sparse. Discussion Extensive experiments on four biometrics databases have revealed some significant points, from which we find the following: 1. Compared with several classical methods for solving linear inverse problems, experiments on the PIE, AR and PolyU palmprint databases show that TSSR is an effective and efficient method. 2. TSSR uses almost the same time for different K. This indicates the robustness of the algorithm, which is not sensitive to the parameter K. This holds as long as we can obtain a small number of good discriminative features in the first stage for the second sparse coding. 3.
Since TSSR reaches its highest accuracy at about the predetermined value of K, we can first use Eqn. (20) to get an initial K for a dataset, then adjust K in experiments to acquire the desired results. This may give us freedom and avoid trial and error. 4. ALB and FISTA can obtain quite high accuracy for all three databases, and can be used when the highest accuracy is desired and we can afford the time. 5. The TSSR structure can accommodate different sparse solvers as well as different measurement matrices, provided Ψ is designed to obey or nearly obey a Gaussian distribution with the training data, while Ω approximates an identity matrix. 6. Since the sparsifying basis Ω only approaches an identity matrix but is not exactly one, this may affect the recognition rate. So the design of Ω is worth studying further. Conclusions and Future Work We have proposed a new method, Two-Stage Sparse Representation (TSSR), for solving linear inverse problems. TSSR makes the sparse approximation computationally tractable without sacrificing stable convergence. Experimental results with image recognition indicate that TSSR is more efficient, with comparable accuracy, than several classic methods for solving linear inverse problems. As the proposed method provides a good way to exploit the special structure of biometric datasets and helps achieve high speed, it can also be applied to other recognition tasks. Appendix A. Concentration of Measure Suppose we have a signal y ∈ R^m and a matrix Φ ∈ R^{m×N} in which the signal is represented. We want to obtain a sparse set of coefficients (sparsity level K) to represent the signal. Assume that an observed sample belongs to one certain class and can be well represented using samples from the same class. We say that the matrix Φ satisfies the RIP of order K with constant δ = δ_K < 1 if (1 − δ_K)||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + δ_K)||x||_2^2 for all K-sparse x. The RIP is a measure of closeness to an identity matrix for sparse vectors. Defining Φ_K as a selection of K columns from Φ, the RIP then suggests that every Φ_K should behave like an isometry, not changing the length of the vector it multiplies. If ||x||_0 is small enough, the norm ||Φx||_2^2 can be constrained within a small enough interval, in which case a sparse representation can be stably determined [38]. We can generate random m × N matrices Φ by choosing the entries φ_ij as independent and identically distributed (i.i.d.) random variables. For any x ∈ R^N, the random variable ||Φx||_2^2 is strongly concentrated about its expected value, that is, Pr( | ||Φx||_2^2 − ||x||_2^2 | ≥ α||x||_2^2 ) ≤ 2e^{−c m α^2}, where the probability is taken over Φ and c is a constant, for any α ∈ (0, 1). Inequalities of this kind are called concentration of measure inequalities. Any distribution satisfying a concentration inequality will provide both the canonical RIP and universality with respect to a certain sparsifying basis Ψ. Here Ψ is represented as an N × N identity matrix and x ∈ Σ_K, with which we can acquire signals Ψx that are sparse or compressible in practice. By choosing K-dimensional subspaces spanned by sets of K columns of Ψ, Theorem 5.2 of [39] establishes the RIP for ΦΨ for each of the distributions. See [15] for more details on sub-Gaussian random variables. If the matrix ΦΨ satisfies the RIP of order 2K, then Φ is a δ-stable embedding of (Ψ(Σ_K), Ψ(Σ_K)), where Ψ(Σ_K) = {Ψx : x ∈ Σ_K}. We would require that ΦΨ satisfies the RIP, and thus bound the error ||x − x̂||_2 introduced by the embedding. See Appendix B.
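The concentration behaviour described above is easy to probe numerically: for a random matrix with i.i.d. Gaussian entries of variance 1/m, the squared norm ||Φx||_2^2 stays close to ||x||_2^2. A minimal sketch (illustrative only; the dimensions and trial count are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, trials = 128, 1024, 2000

x = rng.standard_normal(N)
x /= np.linalg.norm(x)  # so ||x||_2^2 = 1

# Entries ~ N(0, 1/m) give E[||Phi x||_2^2] = ||x||_2^2 = 1.
sq_norms = []
for _ in range(trials):
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)
    sq_norms.append(np.linalg.norm(Phi @ x) ** 2)

sq_norms = np.array(sq_norms)
alpha = 0.25
# Mean should be ~1; large relative deviations should be rare.
print(sq_norms.mean(), np.mean(np.abs(sq_norms - 1.0) > alpha))
```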
2014-07-25T07:31:31.000Z
2014-04-03T00:00:00.000
{ "year": 2014, "sha1": "075921dbd59c9cd3c9b38d1650b0c700562b1743", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1404.1129", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "075921dbd59c9cd3c9b38d1650b0c700562b1743", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
1427884
pes2o/s2orc
v3-fos-license
The GJA8 allele encoding CX50I247M is a rare polymorphism, not a cataract-causing mutation. Purpose The aim of this study was the genetic, cellular, and physiological characterization of a connexin50 (CX50) variant identified in a child with congenital cataracts. Methods Lens material from surgery was collected and used for cDNA production. Genomic DNA was prepared from blood obtained from the proband and her parents. PCR-amplified DNA fragments were sequenced and characterized by restriction digestion. Connexin protein distribution was studied by immunofluorescence in transiently transfected HeLa cells. Formation of functional channels was assessed by two-microelectrode voltage-clamp in cRNA-injected Xenopus oocytes. Results Ophthalmologic examination showed that the proband suffered from bilateral white, diffuse cataracts, but the parents were free of lens opacities. Direct sequencing of the PCR product produced from lens cDNA showed that the proband was heterozygous for a T>G substitution at position 741 of the GJA8 gene, encoding the exchange of methionine for isoleucine at position 247 of CX50 (CX50I247M). The mutation was confirmed in the genomic DNA, but it was also present in the unaffected mother. When expressed in HeLa cells, both wild type CX50 and CX50I247M formed gap junction plaques. Both CX50 and CX50I247M induced gap junctional currents in pairs of Xenopus oocytes. Conclusions Although the CX50I247M substitution has previously been suggested to cause cataracts, our genetic, cellular, and electrophysiological data suggest that this allele more likely represents a rare, silent polymorphic variant. The parents were examined for the presence of lens opacities with a slit-lamp (Zeiss, Oberkochen, Germany). Blood samples (5-10 ml) were collected from the proband and her parents, and they were used to isolate genomic DNA [6]. The conditions for PCR reactions performed using genomic DNA as template have been described previously [7][8][9]. Primers were obtained from Utz Linzner (Helmholtz Center Munich, Institutes of Experimental Genetics and of Pathology, Munich, Germany) or from commercial sources (Invitrogen, Karlsruhe, Germany, MWG, Vaterstetten, Germany or Sigma Genosys, Steinheim, Germany). Sequencing was performed commercially (Sequiserve, Vaterstetten, Germany or GATC Biotech, Konstanz, Germany). The presence of the mutant allele in the PCR fragments was confirmed by LweI digestion. The Cooperative Health Research in the Augsburg Region (KORA) Survey 2000 (S3), which studied a population-based sample of 4,261 subjects aged 25-74 years during the years 1999-2001 [10], was used as a population-based control. A total of 179 randomly chosen individuals without cataracts from this cohort were analyzed for the putative GJA8 mutation. The human wild type CX50 coding sequence was subcloned into pSP64TII [11] and pcDNA3.1/Hygro(+) (Invitrogen Life Technologies, Carlsbad, CA) [12]. The mutant allele (CX50I247M) was generated in these expression plasmids using a PCR-based site-directed mutagenesis strategy [13,14]. The coding regions of the PCR products were sequenced to confirm the fidelity of the amplification reaction. Connexin DNAs (in pSP64TII) were transcribed and capped in vitro, and cRNAs were injected into defolliculated Xenopus oocytes that had been injected with an oligonucleotide antisense to the endogenous Xenopus CX38 [16]. Oocytes were paired and studied after 14-18 h by double two-microelectrode voltage-clamp recording to allow determination of junctional conductance (gj) [17].
Animals were maintained and treated in accordance with NIH/PHS policies on humane care and use of laboratory animals. RESULTS The proband, LB, suffered from bilateral, diffuse white lens opacities. She underwent cataract surgery shortly after birth. Both parents were healthy; slit lamp examination showed no evidence of lens opacities (Figure 1). Using a functional candidate approach, we checked several genes including GJA8 for sequence alterations. We identified a T→G exchange in GJA8 cDNA at position 741 (Figure 2A). This substitution changes the amino acid codon at position 247 from isoleucine to methionine (CX50I247M). It also creates a new SfaN/LweI restriction site in the mutated sequence (Figure 2B). Using LweI digestion of the PCR fragments obtained from genomic DNA, we observed the same substitution in the unaffected mother (Figure 2C, arrows). Sequencing of genomic DNA from both parents confirmed that the mother was heterozygous like the daughter, and the father was wild type. None of the other genes analyzed (CRYAA, CRYAB, CRYBA4, CRYBB1, CRYBB2, CRYGA-D, CRYGS, FTL, LIM2, and AQP0) showed alterations. We also used LweI digestion of genomic DNA to test for the presence of the CX50I247M allele in 179 controls obtained from a population-based study (KORA). Since no additional LweI restriction site was observed in these samples (no carriers among 2 × 179 = 358 control chromosomes), the frequency of the CX50I247M allele must be less than 0.3%. The capacity of CX50I247M to form gap junctions was assessed by immunofluorescence microscopy of HeLa cells transfected with wild type CX50 or CX50I247M. Similar to wild type CX50, CX50I247M localized at appositional membranes, where it formed gap junction plaques, and in the perinuclear region, probably the Golgi compartment (Figure 3A,B). The ability of CX50I247M to form functional gap junctional channels was characterized by two-electrode voltage-clamp in Xenopus oocyte pairs. Pairs of oocytes injected with CX50I247M cRNA developed gap junctional conductances with mean values that were not significantly different from those determined in oocyte pairs injected with wild type CX50 cRNA (Figure 4). Pairs of control oocytes injected with no connexin cRNA showed no coupling. DISCUSSION In this study, we demonstrated a heterozygous mutation in GJA8 of a child with severe congenital cataracts (LB). The mutated sequence encodes CX50I247M, a CX50 variant in which the isoleucine at position 247 (within the cytoplasmic COOH-terminus) is replaced by methionine. CX50I247M formed gap junction plaques and supported intercellular communication similarly to wild type CX50. Very few of the identified cataract-associated connexin mutations lie in the COOH-terminus. Indeed, removal of the COOH-terminus (139-150 amino acids) of CX50 causes only modest effects on voltage-dependent gap junction channel gating [18,19]. Similar to the effects caused by truncation of ovine Cx50 [20], removal of the COOH-terminus of human CX50 results in a decrease in sensitivity to intracellular pH (pHi) [18]. Truncation of mouse CX50 also appeared to cause a decrease in sensitivity to pHi, as evidenced by the delay in the decrease in junctional conductance induced by 100% CO2 perfusion and the slower recovery of gap junctional conductance following washout [19]. Truncated human and mouse CX50 both show decreased junctional conductance [18,19]. Thus, this region may be important for regulation of CX50 channel function, but it is dispensable for channel activity per se.
Two of the mutations in lens connexin genes linked to hereditary cataracts that affect the COOH-terminus cause frame shifts [5,21]. In the Cx46 mutant, CX46fs380 (which contains a frame shift at codon 380), the new protein sequence caused by the frame shift contains a retention/retrieval signal that leads to loss of function [22] and localization of the mutant connexin in the cytoplasm [13]. The CX50 variant, CX50I247M, was previously reported in three members of a three-generation Russian family, and it co-segregated with a zonular pulverulent cataract trait [4]. However, this mutation did not co-segregate with the disease in our study; it was also present in the healthy mother of our proband. (Indeed, the genetic alteration responsible for the cataract in our patient has not been identified.) The segregation of the CX50I247M allele with cataract in the Russian family remains puzzling [4], because it seems to be a rare allele with a frequency of less than 0.3%. A plausible explanation for the contradictory findings between the previous study and ours is the possibility of close linkage to another gene that is actually causative for these cataracts. If this hypothesis were true, there should be another cataract-related gene close to the CX50-encoding gene GJA8. Referring to the database of Mendelian hereditary disorders (OMIM), a few further cataract loci are reported on human chromosome 1; however, two of them have already been attributed to GJA8. Another one is the gene encoding glyceronephosphate O-acyltransferase (GNPAT), for which a syndromic cataract would be expected rather than an isolated one as we have reported. Therefore, one might speculate that there is another, yet unidentified, cataract-associated gene in this region. Examination of the ENSEMBL database for this region reveals a considerable number of genes that have not yet been annotated, including some genes for non-coding RNAs. Our cellular and functional studies support the conclusion that CX50I247M is an inconsequential variant. Taken together, our results strongly suggest that CX50I247M represents a rare polymorphic site rather than a causative mutation. ACKNOWLEDGMENTS The excellent technical assistance of Erika Bürkle and Monika Stadler (Helmholtz Center Munich, Neuherberg) is gratefully acknowledged. This work was supported by National Institutes of Health grants EY08368 (E.C.B.) and EY10589 (L.E.). Jessica Rodriguez was supported through the Pritzker School of Medicine Experience in Research (PSOMER) program (T35 HL07764).
2014-10-01T00:00:00.000Z
2009-09-14T00:00:00.000
{ "year": 2009, "sha1": "cc0b8aa5a768356ddc968c71cb6012b9f142d6b1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "9bb9ec5098bb3a9603801c24a60a2cd3492bfd88", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
231688174
pes2o/s2orc
v3-fos-license
Ultrahigh sensitive refractive index nanosensors based on nanoshells, nanocages and nanoframes: effects of plasmon hybridization and restoring force In this study, the effect of the plasmon hybridization mechanism on the performance and refractive index (RI) sensitivity of nanoshell, nanocage and nanoframe structures is investigated using finite-difference time-domain (FDTD) simulation. To create the nanocage structure, we textured the cubic nanoshell surfaces and examined the impact of its key parameters (such as the array of cavities, the size of the cavities and the wall thickness) on the nanocage's RI-sensitivity. Synthesis of the designed nanocages is a challenging process in practice, but here the goal is to understand the physics lying behind it and to try to answer the question “Why are nanoframes more sensitive than nanocages?”. Our results show that the RI-sensitivity of nanocage structures increases continuously as the array of cavities is reduced. Transforming the nanocage into the nanoframe structure by reducing the array of cavities to a single cavity significantly increases the RI-sensitivity of the nanostructure. This phenomenon can be related to the simultaneous presence of symmetric and asymmetric plasmon oscillations in the nanocage structure and to the lower restoring force of the nanoframe compared to the nanocage. As the optimized case shows, the proposed single nanoframe with an aspect ratio (wall length/wall thickness) of 12.5 exhibits an RI-sensitivity of 1460 nm/RIU, which is ~5.5 times higher than that of its solid counterpart. On the other hand, hollow nanostructures have shown that they can achieve ultra-high sensitivities thanks to their better plasmonic properties, based on the plasmon hybridization mechanism 17,18. To describe the plasmon hybridization mechanism, Prodan et al. considered a nanoshell comprising an inner cavity and an outer spherical surface, each having a different resonance frequency 19. The cavity plasmons interact with the sphere plasmons thanks to the finite thickness of the nanoshell. The strength of this interaction can be adjusted by manipulating the nanoshell thickness. Due to this interaction, the plasmonic oscillations of the nanoshell are split into symmetric and asymmetric oscillations; the symmetric oscillation occurs at lower frequencies and hence has lower energy than the asymmetric one. Unlike the asymmetric oscillation, which is considered a dark mode and does not couple to far-field radiation, the symmetric oscillations couple to external optical fields and have greater RI-sensitivity than the asymmetric plasmonic oscillations 19. The hybridization model plays a significant role in the RI-sensitivity improvement of metallic nanoparticles, and its validity has been tested by quantum mechanical calculations and also by FDTD simulations 18,20. The influence of nanoshell thickness on the plasmonic hybridization mechanism was studied by Halas 19. Her results show that the energy gap between the two modes of hybrid surface plasmons (symmetric and asymmetric modes) increases as the nanoshell thickness decreases, and hence the frequency shift (with respect to the solid nanoparticle) is larger for thinner nanoshells. Accordingly, spherical nanoshells are the simplest of the complex hollow structures that the hybridization model can describe. Progressive developments in nanoparticle synthesis 21,22 have introduced complex nanostructures with a high degree of RI-sensitivity 23,24. Among them, the high-potential nanocage and nanoframe structures can be mentioned.
In this way, Yugang Sun and Younan Xia recorded a sensitivity of 408.8 nm/RIU for a nanocage structure with a 50 nm wall length and a 4.5 nm wall thickness 25. Also, Mahmoud A. Mahmoud and Mostafa A. El-Sayed synthesized gold nanoframes with different wall thicknesses and reported an RI-sensitivity of 620 nm/RIU for nanoframes with a 51 nm wall length and a 10 nm wall thickness 24. They developed an equation for estimating the sensitivity of nanoframes as a function of aspect ratio (ratio of wall length to wall thickness) in order to make it possible to compare nanoframes with the nanocages synthesized by Sun and his colleague at the same aspect ratio. The results showed a ~threefold sensitivity of nanoframes compared to nanocages, but the reason for the superiority of nanoframes over nanocages has remained unanswered so far. In this work, motivated by finding a reason for the higher RI-sensitivity of nanoframes compared to nanocages, we have numerically investigated the LSPR properties and RI-sensing capabilities of plasmon hybridization based cubic nanoshells, nanocages, and nanoframes using FDTD simulation. In this way, we have first examined the effect of shell thickness on the plasmonic response of SiO2@Au core-shell nanocubes. Then, by texturing the surfaces of the optimized Au shell with arrays of cavities and removing the silica core, we have investigated the effect of the key parameters (such as the array of cavities, the size of the cavities, and the wall thickness) on the LSPR properties and RI-sensitivity of the created nanocages. We have shown that the nanocages are less sensitive than nanoframes due to the simultaneous presence of symmetric and asymmetric oscillations in nanocage structures at the plasmon resonance mode. Finally, we have presented a linear equation for estimating the RI-sensitivity of nanoframes as a function of aspect ratio over a wide range. Results and discussion Nanoshell structures have better plasmonic properties than their solid counterparts due to the plasmon hybridization mechanism. When a hollow structured nanoparticle is illuminated by an electromagnetic wave, the electrons of the inner and outer surfaces show different plasmonic oscillation frequencies. These oscillations can interact with each other. The strength of the interaction is controlled by the nanoshell thickness. There exist two regimes of interaction, the symmetric and asymmetric modes. The symmetric mode couples to the optical fields and is sensitive to variations of the medium's refractive index. The plasmon hybridization theory predicts that reducing the nanoshell thickness causes the symmetric mode to oscillate at lower frequencies (this is the origin of the red-shift phenomenon) 19. The effect of plasmonic hybridization on the sensitivity of 50 nm gold cubic nanoparticles during the conversion of their structure from solid to silica-gold core-shell has been studied using FDTD simulation (Fig. 1). As the Au shell thickness decreases, the nanoparticle absorption spectrum peak shifts from the visible spectral region to higher wavelengths in the near-infrared spectral region and the intensity also increases (Fig. 1a), which stems from the strengthening of the coupling between the inner and outer surface plasmons.
The charge density distribution pattern at the Au nanosolid and nanoshell surfaces in the resonance mode shows the accumulation of charge at the edges and corners of the cubic nanoparticles as well as the hybridized oscillations of coupled plasmons at the inner and outer surfaces of the nanoshells (see Figure S1 in the supplementary materials). Unlike spherical nanoshells, in which fully symmetric plasmon oscillations occur at the coupled surfaces, cubic nanoshells show the simultaneous existence of symmetric and asymmetric oscillations on vertical and parallel surfaces with respect to the polarization of the incident light, respectively. This can act as a limiting factor in the nanoparticle RI-sensitivity within the structure. The RI-sensitivity is defined by the slope of the line of resonance wavelength versus the refractive index of the medium. Figure 1b shows the investigation of the effect of the shell thickness on the RI-sensitivity of silica core-gold shell nanocubes. This figure reports RI-sensitivities of 220, 242, 262 and 372 nm/RIU for shell thicknesses (T) of 16, 10, 8 and 4 nm, respectively. Figure 1c shows the linear increase in the sensitivity of the core-shell silica-gold nanoparticles with increasing aspect ratio X (the ratio of half the total size (L/2) to the shell thickness (T)), which can be formulated as a linear function of X. Here, the core-shell nanocube with a shell thickness of 4 nm and an aspect ratio of 6.25 showed the highest sensitivity of 372 nm/RIU. Plasmonic coupling not only increases the RI-sensitivity of the cubic core-shell SiO2@Au nanoparticles to changes in the refractive index of the environment, but also amplifies the generated near-field, increases its decay length and increases the plasmon lifetime (Figure S2). In the following, the effect of the key parameters (such as the array of cavities created on the surfaces of the gold nanoshell, the size of the cavities and the wall thickness) on the plasmonic hybridization mechanism and RI-sensitivity of the nanocages is investigated and presented in Fig. 2. The square cavity size (l) is optimized by creating a 4*4 array of cavities in the core-shell nanocube surfaces, as shown in Fig. 2a. Increasing the size of the cavities (reducing the wall thickness) from 4 to 7.5 nm (from 6.8 to 4 nm) increases the sensitivity of the nanocages from 468 to 634 nm/RIU, which can be related to the strengthening of the plasmon interaction in the walls. Further improvement of the RI-sensitivity can be achieved by enhancing the contact surface of the nanoparticle with its surrounding medium. This is possible by etching the SiO2 core. Figure 2b reveals this fact and shows the red-shift of the absorption spectrum peak of the empty-core nanocage (with a wall thickness of 4 nm and the 4*4 cavity array) as a function of the increase in the refractive index of the environment. This figure shows that the RI-sensitivity of the nanocage is increased from 634 (for the silica-filled-core 4*4 nanocage) to 774 nm/RIU (for the empty-core 4*4 nanocage). Also, the sensitivity of nanocages with 4*4, 3*3, 2*2, and 1*1 cavity arrays is investigated in Fig. 2c (the wall thickness is fixed). The results show that by reducing the array of cavities from 4*4 to 1*1, the sensitivity of the nanocages increases significantly from 774 to 1460 nm/RIU. The nanocage with a 1*1 cavity array, i.e. the nanoframe, records the highest sensitivity.
The response of the nanoframe's absorption spectrum as a function of the refractive index of the environment is shown in Fig. 2d. In order to find a reason for the higher sensitivity of nanoframes compared to nanocages, the mechanism of plasmonic hybridization governing them has been investigated. It is well known that the refractive index sensitivity of nanoparticles depends on the position of the plasmon resonance band 26 . The lower the plasmon resonance band energy, the higher the sensitivity of the nanoparticles. The important question is: what factors determine the position of the nanoparticle resonance band? According to the model proposed by Prodan et al. for the case of spherical nanoshells, the simplest structure in which the plasmon hybridization mechanism prevails, two resonance modes, symmetric and asymmetric, are produced at low and high frequencies upon interaction with light, respectively 18 (Fig. 3a). Here, the absorption spectra of a spherical nanoshell with a shell thickness of 4.5 nm and an overall size of 50 nm are calculated (Fig. 3b). As the plasmon hybridization model predicts, two resonance modes occur at wavelengths of 283 and 583 nm (Fig. 3b). Investigation of the nanoparticle charge density distribution at the resonance modes shows symmetric and asymmetric oscillations at the 583 and 283 nm wavelengths, respectively (Fig. 3c). Increasing the refractive index of the environment in which the nanoparticle is embedded leads to a red-shift of its absorption spectrum, where the symmetric plasmonic peak experiences a greater displacement than the asymmetric peak (Fig. 3b).

Figure 4 shows the charge density and near-field distributions of nanocages compared to nanoframes in the dipolar resonant mode. A significant phenomenon that can be observed in the charge density distribution of nanocages, unlike nanoframes, is the simultaneous existence of asymmetric and symmetric oscillations in the resonant mode. The asymmetric oscillation, which is considered a dark mode, not only does not couple to far-field radiation but also shows less sensitivity to changes in the refractive index of the environment than bright modes. The near-field distribution profile clearly shows the lack of strong plasmonic fields around the walls covered by dark modes (asymmetric oscillations). Here, the combined behavior of both oscillations determines the position of the nanoparticle plasmon resonance band and its sensitivity. As the refractive index of the environment increases, the asymmetric oscillations, which have a low tendency toward wavelength displacement, act as a limiting factor for the nanoparticle sensitivity. Nanocages with a 4 × 4 cavity array produce more asymmetric oscillations in the resonant mode than those with a 2 × 2 cavity array. The presence of these asymmetric oscillations in the resonance mode can explain the lower sensitivity of nanocages compared to nanoframes. On the other hand, a more in-depth study of the parameters affecting the plasmon resonance band of hybrid nanoparticles showed that the restoring force acts as a determining parameter 27 . As mentioned earlier, the refractive index sensitivity of the nanoparticle is directly related to its plasmonic peak position. Therefore, the effect of the restoring force on the sensitivity of nanocages is investigated in Fig. 5.
For this purpose, nanocages are divided into two structures, vertical and parallel, which have walls perpendicular and parallel to the polarization of the incident light, respectively (see Fig. 5a). Investigation of their sensitivity shows that the plasmonic peaks of parallel structures occur at shorter wavelengths and are less sensitive than those of vertical structures (Fig. 5b). In fact, in a parallel structure, the presence of walls parallel to the polarization direction of the light contributes to the easier displacement of electrons during dipolar resonance and, consequently, to keeping the plasmon energies at higher levels. In contrast, in vertical structures the strength of the surface charges is reduced by the walls. Similarly, as the number of these walls perpendicular to the polarization is increased, the nanoparticle plasmon peak is more likely to occur at longer wavelengths. This phenomenon is so prevalent in these nanoparticles that the vertical structures of the nanocages showed greater sensitivities than the nanoframes (albeit only slightly). In the following, nanoframes have been further investigated due to their high sensitivity and low dependence on light polarization compared to nanocages.

According to these results, further optimization has been performed on the wall thickness of nanoframes with a wall length of 50 nm in order to obtain an RI-sensitivity estimation equation over a wide range of aspect ratio, R (the ratio of wall length (L) to wall thickness (g)). Also, to evaluate how well the simulation results overlap with experimental results, a comparative study has been presented. Figure 6a shows the RI-sensitivity analysis of nanoframes with wall thicknesses of 4, 6, 8, and 10 nm. Reducing the wall thickness increases the RI-sensitivity of the nanoframes from 656 to 1460 nm/RIU due to the strengthened coupling of the plasmons on the facing surfaces. The RI-sensitivity of the nanoframes is well correlated with, and increases linearly with, the aspect ratio. Fitting the RI-sensitivity versus the aspect ratio gives the equation S = δλ/δn = (107 ± 3)(L/g) + (138 ± 23) for nanoframes with aspect ratios in the range of 5-12.5 (Fig. 6b, red fit line). In fact, this equation does not provide correct estimates for aspect ratio values below 5, and the prediction error grows as the aspect ratio decreases. Hence, using the experimental and DDA-theory results together with the FDTD simulation results obtained in this paper, a more general equation for aspect ratios in the wide range of 4-12.5 is proposed, which properly predicts the nanoframe RI-sensitivity (Fig. 6b, blue fit line): S = δλ/δn = (118 ± 4)(L/g) + (35 ± 28) (2)

Conclusion

In summary, we have shown the effect of the plasmon hybridization mechanism on the LSPR properties and RI-sensitivity of several single metallic nanostructures, namely the SiO2@Au core-shell nanocube, the Au nanocage, and the nanoframe, characterized by FDTD simulation. We have used charge density distribution calculations to show that the simultaneous presence of symmetric and asymmetric oscillations in nanocages in the plasmon resonance mode can be a limiting factor in their RI-sensitivity. We have also shown that the restoring force in dipolar resonance acts as a parameter determining the sensitivity of nanocages, since the sensitivity of nanoparticles is directly related to the position of their plasmon resonance band.
By studying the effect of the array of cavities in nanocages, we have shown that as the asymmetric oscillations occurring in the resonant mode of the nanocage are reduced, its sensitivity to the refractive index of its surroundings becomes more pronounced. Also, the presence of walls parallel to the polarization of the light contributes to the easier displacement of electrons during dipolar resonance and, consequently, keeps the plasmon energies at a high level. These results advance the understanding of why nanocages are less sensitive than nanoframes. The nanocage with a 1 × 1 cavity array, which has the same structure as the nanoframe, shows the highest sensitivity due to the generation of purely symmetric oscillations in the resonant mode and its reduced restoring force. In this way, by optimizing and examining the wall thickness of the nanoframe structure, in agreement with previously reported experimental results, we have presented a linear equation for estimating the sensitivity as a function of the aspect ratio, R. This equation predicts a sensitivity of 1460 nm/RIU for nanoframes with an aspect ratio of 12.5, which is more than five times that of their solid counterparts. The results of this paper provide a useful recipe for fabricating more sensitive nanosensors for medical diagnostics.

Methods

We evaluated the potential use of single gold nanoparticles as sensors by employing the FDTD computational method using the OptiFDTD commercial software. FDTD solves Maxwell's curl equations by substituting all time and space derivatives with finite time and space differences 28 . This method enables us to obtain the distribution of electromagnetic fields in the vicinity of metallic objects and hence gives an algorithm for calculating the absorption and scattering cross-sections of different objects. Since the solution depends on the boundary conditions, the field distribution near the object is determined by its shape, its material and its environment. Therefore, FDTD is widely used to study the near-field electromagnetic response of metallic nanoparticles. To calculate the RI sensitivity of a metallic nanoparticle, we assume that it is illuminated by a uniform, downward-directed, linearly polarized electromagnetic wave propagating in the z-direction. The polarization axis is assumed to be along the x-direction. The wave is scattered from the surface of the nanoparticle. In order to avoid backscattering effects from the boundaries, we employed Perfectly Matched Layer (PML) boundary conditions 29 , by which we mean that the impedance-matching condition is completely satisfied in all directions through the boundaries. The starting point for calculating the RI sensitivity of metallic nanoparticles is the absorption cross-section, which is obtained by 30 : σ_abs = P_abs / I_inc, where I_inc is the incident wave intensity and the absorbed power P_abs is defined by 30 : P_abs = (1/2) Re ∮_S (E_abs × H_abs*) · dS, where E_abs and H_abs are the absorbed electric and magnetic fields, respectively. The sensitivity of the nanoparticles to the variation of the RI of their surrounding medium is defined as the ratio of the resonance wavelength shift of the absorption cross-section (δλ) to the variation of the embedding medium RI (δn): S = δλ/δn. Although the plasmonic performance of silver nanoparticles is better than that of gold nanoparticles, their application in biosensors is restricted by their low chemical stability and their bio-incompatibility 23 .
Hence, gold nanoparticles are preferable to silver nanoparticles for this application, and in the following we concentrate on the sensitivity of gold nanoparticles modeled with the optical constants of Johnson and Christy 31 .
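The two post-processing steps described above lend themselves to a short sketch: one helper implements the sensitivity definition S = δλ/δn as a linear fit of resonance peaks against the medium refractive index, and a second evaluates the fitted nanoframe equations from the Results section. The peak wavelengths used in the example are placeholders, and only the central fit coefficients are used (the quoted ± uncertainties are ignored); neither function reproduces actual simulation output from the paper.

```python
import numpy as np

def ri_sensitivity(n_values, peak_wavelengths_nm):
    """S = d(lambda_res)/dn, the slope of resonance wavelength vs medium RI."""
    slope, _ = np.polyfit(n_values, peak_wavelengths_nm, 1)
    return slope

def nanoframe_sensitivity(L_nm, g_nm, general=True):
    """Central-value estimate (nm/RIU) from the linear fits given above.

    general=True uses the wider-range fit S = 118*(L/g) + 35 (aspect ratio
    4-12.5); general=False uses the FDTD-only fit S = 107*(L/g) + 138
    (aspect ratio 5-12.5).  Quoted uncertainties are not propagated here.
    """
    R = L_nm / g_nm
    lo = 4 if general else 5
    if not lo <= R <= 12.5:
        raise ValueError(f"fit quoted only for aspect ratios {lo}-12.5")
    return (118 * R + 35) if general else (107 * R + 138)

# Placeholder peak positions for five embedding media (not paper data):
n_med = np.array([1.00, 1.10, 1.20, 1.30, 1.40])
lam_res = np.array([652.0, 718.0, 783.0, 851.0, 917.0])
print(f"S from peak shifts: {ri_sensitivity(n_med, lam_res):.0f} nm/RIU")

# 50 nm nanoframe with 4 nm walls (R = 12.5):
print(f"Eq. (2) estimate: {nanoframe_sensitivity(50, 4):.0f} nm/RIU")           # ~1510
print(f"FDTD-fit estimate: {nanoframe_sensitivity(50, 4, False):.0f} nm/RIU")   # ~1476
```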
Wilms Tumor 1-Driven Fibroblast Activation and Subpleural Thickening in Idiopathic Pulmonary Fibrosis

Idiopathic pulmonary fibrosis (IPF) is a progressive fibrotic lung disease that is often fatal due to the formation of irreversible scar tissue in the distal areas of the lung. Although the pathological and radiological features of IPF lungs are well defined, the lack of insight into the fibrogenic role of fibroblasts that accumulate in distinct anatomical regions of the lungs is a critical knowledge gap. Fibrotic lesions have been shown to originate in the subpleural areas and extend into the lung parenchyma through processes of dysregulated fibroproliferation, migration, fibroblast-to-myofibroblast transformation, and extracellular matrix production. Identifying the molecular targets underlying subpleural thickening at the early and late stages of fibrosis could facilitate the development of new therapies to attenuate fibroblast activation and improve the survival of patients with IPF. Here, we discuss the key cellular and molecular events that contribute to (myo)fibroblast activation and subpleural thickening in IPF. In particular, we highlight the transcriptional programs involved in mesothelial to mesenchymal transformation and fibroblast dysfunction that can be targeted to alter the course of the progressive expansion of fibrotic lesions in the distal areas of IPF lungs.

Introduction

Pulmonary fibrosis is a pathological endpoint in many chronic lung diseases and is associated with repetitive lung injury, involving mesenchymal cell dysfunction and unremitting collagen deposition [1][2][3]. A key event in the manifestation of unresolved fibrosis is the persistent activation of fibroblasts, which culminates in myofibroblast accumulation and the excessive production of collagen and other extracellular matrix (ECM) proteins in the pulmonary parenchyma [4,5]. Pulmonary fibrosis is a major cause of death, as the progressive distortion of alveolar architecture impairs gas exchange [6,7]. Pulmonary fibrosis plays a major role in disrupting lung function in several chronic lung diseases, including idiopathic pulmonary fibrosis (IPF) and systemic sclerosis [4,[8][9][10]. The activation of fibroblasts and collagen deposition are also implicated in the pathological progression of multiple lung cancers, resulting in the increased invasion and metastasis of oncogenic cells in tumors [11,12]. Therefore, the development of effective therapeutics against pulmonary fibrosis is an urgent pursuit in diverse research areas.

IPF is a chronic lung disease of unknown etiology with progressive scarring of the lungs and one of the most common forms of interstitial lung disease (ILD). Mortality and morbidity are increasing worldwide, with rates that are substantially higher in older populations (over 65 years of age), especially men [13,14]. The incidence of IPF is approximately 2.8-9.3 in 100,000 per year, and the median survival after diagnosis is approximately 3-5 years [15,16]. Pirfenidone and nintedanib are two recent U.S. Food and Drug Administration (FDA)-approved drugs that delay the decline in lung function but appear to have only a limited effect on halting disease progression.

Figure legend: The highlighted dashed regions in the low-magnification (scale bar, 1500 µm) images represent the high-magnification (scale bar, 200 µm) images that show the prominent subpleural thickening and fibrotic foci that accumulate in the distal areas of the alveolar parenchyma of IPF lungs compared to normal lungs. Pentachrome staining highlights collagen (yellow), muscle (red), and elastic fibers (black to blue) in mature fibrotic lesions of IPF. Arrowheads highlight the fibrotic foci.

At the cellular level, the participation of fibroblasts in disease progression is well established and includes the aberrant activation of myofibroblasts, which are marked by alpha smooth muscle actin expression and secrete excessive amounts of ECM proteins such as collagen and fibronectin [23,24]. Other hallmarks of fibroblast activation include the proliferation of fibroblasts, which is predominantly limited to the early or expanding areas of fibrotic lesions in the lung parenchyma, excessive migration and invasiveness, fibroblast-to-myofibroblast transformation, and resistance to apoptosis [25][26][27][28][29]. In the following section, we discuss the potential role of mesothelial cells in myofibroblast activation, which is critical for the initiation and maintenance of subpleural fibrotic lesions in IPF.

Mesothelial Origin of Myofibroblasts

In IPF, myofibroblasts distort lung architecture by depositing excess ECM. The source of myofibroblasts is being investigated by lineage tracing in injury models for pulmonary fibrosis. Although several studies have implicated resident fibroblasts as the main precursors of myofibroblasts [30][31][32], other cell populations such as epithelial cells, fibrocytes [33,34], and pericytes [35] have also been reported to contribute to myofibroblast transformation and the expansion of fibrotic lung lesions [36]. In the past, epithelial cells were frequently cited as a major source of myofibroblasts; however, many studies have disproved the epithelial-mesenchymal transformation theory [26,37]. In particular, Hardie et al. [38] evaluated the contribution of epithelial cells to transforming growth factor alpha (TGFα)-induced fibrosis in vivo. After labeling epithelial cells with β-galactosidase using a Clara cell secretory protein (CCSP)/Cre driver, minimal to no staining was observed in the fibrotic lesions. Similar conclusions were drawn from other studies that used different epithelial-specific Cre drivers.
Rock et al. [26] employed the surfactant protein C (Sftpc)-CreERT2 driver to label type 2 alveolar epithelial cells (AEC2) with a red fluorescent protein (RFP) in an intratracheal bleomycin model. They demonstrated that AEC2 cells do not contribute to the fibroblasts that accumulate in fibrotic lung lesions. Similar conclusions were reached using the Secretoglobin Family 1A Member 1 (Scgb1a1)-CreER driver, which labeled Clara cells as well as a few Scgb1a1/Sftpc dual-positive cells; these studies also concluded that epithelial cells were not the source of myofibroblasts in bleomycin-induced pulmonary fibrosis. However, epithelial-mesenchymal crosstalk plays a crucial role in activating fibroblasts and may enhance fibroblast-to-myofibroblast transformation (FMT) processes by secreting paracrine factors [39]. Understanding the impact of epithelial cells on FMT holds promise for improving IPF management.

Published studies have demonstrated an increase in bone-marrow-derived mesenchymal cells called fibrocytes, both in the circulation and in fibrotic lung lesions, associated with the progression of fibrotic lung remodeling in IPF [36,40]. This led us to question whether fibrocytes could contribute to the myofibroblast pool in pulmonary fibrosis, as well as their role in the progression of the disease. In the TGFα mouse model, we were unable to demonstrate that transfused green fluorescent protein (GFP)-labeled fibrocytes contributed to the stroma of the fibrotic lung lesion [41]. Instead, the study provided evidence for the paracrine activation of resident lung fibroblasts by fibrocytes, supporting the notion of resident lung fibroblasts as the primary source of stromal cells [41]. Similar conclusions have been drawn from studies in renal fibrosis models, which suggest that fibrocytes make only a minor contribution to the myofibroblast pool [42].

Likewise, it has been postulated that the pericyte, a type of mesenchymal cell that lines the capillaries and venules, may also contribute to the myofibroblast pool. Pericyte markers include neural/glial antigen 2 (NG2) and platelet-derived growth factor receptor β (PDGFRβ). To test whether pericytes are the source of myofibroblasts in the bleomycin model, Rock et al. [26] utilized two mouse strains, Ng2-CreER and forkhead box J1 (FoxJ1)-CreER, to lineage-label pericyte-like cells. The lineage-labeled cells proliferated in response to bleomycin; nevertheless, there was no evidence of colocalization with alpha-smooth muscle actin (αSMA), suggesting that pericytes were not a major contributor to myofibroblasts in the fibrotic regions. In contrast, Hung et al. [30] utilized fate-mapping strategies and found that Foxd1-expressing pericytes transform into myofibroblasts during bleomycin-induced injury. Although Foxd1-derived pericytes transform into myofibroblasts, they are not the major source of the myofibroblasts that accumulate during bleomycin-induced fibrosis. The differences in the observations made by Rock et al. [26] and Hung et al. [30] may be attributable to differences in labeling efficiency or to heterogeneity among pericyte cell populations.

A single sheet of cuboidal pleural mesothelial cells (PMCs) lines the lungs and expresses several epithelial and mesenchymal cell-specific genes, such as calretinin, cytokeratin, collagen, desmin, and vimentin, but not smooth muscle actin. Mesothelial cells can transform into myofibroblasts through the mesothelial-mesenchymal transformation (MMT) process and may represent a novel source of myofibroblasts in the fibrotic lung [25,[43][44][45][46].
Wilms tumor gene 1 (WT1) is a marker for mesothelial cells, and studies have shown that during embryonic development, the majority of lung resident fibroblasts are derived from the WT1-positive mesothelium [47] and populate the perivascular and peribronchial areas [48]. More recent studies have shown that certain tamoxifen-dependent Cre recombinase mouse models, such as CreERT2-driven recombination in Wilms tumor (WT1 CreERT2 ) mice, are more reliable and reproducible than WT1 CreEGFP reporter mice [34,47,49,50]. The use of WT1 CreERT2 mice enabled the accurate labeling of WT1-positive mesothelial cells lining embryonic lungs, which were shown to ultimately give rise to mesenchymal cells of the lung parenchyma [47,49]. We demonstrated that WT1 is downregulated in the postnatal stages of lung development but is upregulated in mesothelial cells in IPF and in a mouse model of TGFα-induced pulmonary fibrosis [34]. Indeed, in vivo, postnatal mesothelial lung cells were transformed into myofibroblasts in TGFα/WT1 CreERT2/mTmG reporter mice during TGFα-induced pulmonary fibrosis. They were found in the subpleural areas of fibrotic lungs but not in the peribronchial or adventitial regions [32]. However, PMCs did not transform into myofibroblasts during single-dose bleomycin-induced injury (or adeno transforming growth factor beta1 (TGFβ1)-induced pulmonary fibrosis) [49], which might be because bleomycin-driven fibrosis is transient and lacks subpleural lesions that are similar to IPF. Recent studies using cultured PMCs have provided evidence for MMT in the pathogenesis of pulmonary fibrosis. In particular, the TGFβ1/SMAD3 axis has been implicated in MMT and myofibroblast accumulation in the parenchyma of TGFβ1-injured lungs [51]. Although these studies suggest that MMT contributes to subpleural fibrosis, molecular insights are limited, and the role of MMT in the initiation and expansion of fibrotic lesions in the distal airways and other areas of the lung is unclear [25,43,45,52]. Future studies are needed to elucidate both upstream and downstream WT1 targets and the possible crosstalk between the WT1-driven gene networks and the TGFβ/SMAD pathway in myofibroblasts. Understanding the complex regulation of myofibroblast formation by TGFβ-dependent and TGFβ-independent pathways in the pathogenesis of subpleural fibrosis in pulmonary fibrosis is essential for developing more efficacious therapeutics for IPF. Molecular Insights on Fibroblast Dysfunction in IPF Early abnormalities and the most rapid progression of IPF are predominantly observed in the subpleural regions, highlighting the need to understand the molecular mechanisms of subpleural fibrosis [34,53,54]. We have focused on a set of subpleural molecules, such as WT1 and Sox9, that play a prominent role in activating fibroblasts and promoting fibrotic events such as proliferation, migration, differentiation, and survival [32,55]. Many studies in the fibrosis field have identified integrin αvβ6 as a master regulator of pro-fibrotic processes that are produced primarily by injured epithelial cells and macrophages but also fibroblasts, myofibroblasts, and neutrophils [56,57]. TGFβ exerts SMAD-mediated actions on ECM production, inflammation, and myofibroblast formation: particularly the accumulation of apoptosis-resistant cells in IPF [58,59]. Nonetheless, emerging in vitro and in vivo evidence indicates that non-TGFβ/SMAD signaling pathways also contribute to myofibroblast transformation and pulmonary fibrosis [32,55]. 
In the following subsection, we review the emerging molecular targets of (myo)fibroblast activation in pulmonary fibrosis. WT1 WT1 is a zinc finger transcription factor that plays a crucial role in the development of multiple organs, including the lungs, heart, and kidneys, and regulates post-transcriptional modifications and RNA metabolism [60]. Mutations or loss of WT1 in embryonic stages is associated with severe developmental defects and embryonic lethality in mice [60,61]. Expression levels of WT1 are low in adult mouse lung mesothelial cells, but it is upregulated in both mesothelial and mesenchymal cells in IPF lung tissue [32,34,62]. In our study, WT1 loss or gain-of-function studies in primary fibroblasts were sufficient to modulate fibroproliferation, myofibroblast formation, and ECM production [32]. Moreover, the genetic loss of WT1 markedly reduced the expression of ECM genes, such as collagen type1 alpha1 (Col1α) and collagen type V alpha 1 (Col5α), and proliferative genes, such as gremlin 1 (Grem1), runt-related transcription factor-1 (Runx1), wnt family member-4 (Wnt4), insulin-like growth factor 1 (Igf1), cyclin B1 (Ccnb1), and E2F transcription factor 8 (E2f8). Our cell fate mapping strategy, based on the lineage-specific expression of αSMA reporter fibroblasts, demonstrated that WT1 overexpression by transduction was sufficient to induce fibroblast to myofibroblast transformation (FMT). The motif analysis and chromatin immunoprecipitation experiments indicated that WT1 binds directly to the promoter DNA sequence of αSMA to induce the differentiation of FMT [32]. This revealed a sophisticated mechanism by which WT1 regulates FMT processes, highlighting the key role of WT1 in IPF. Previously, WT1 was shown to maintain the mesenchymal cell phenotype by repressing epithelial genes such as Snail (Snail1) and E-cadherin (Cdh1) during embryonic stem cell differentiation [63]. Notably, the haploinsufficiency of WT1 was sufficient to attenuate fibroproliferation, myofibroblast accumulation, and collagen deposition in both TGFα-and bleomycin-induced pulmonary fibrosis in vivo [32]. Our new findings suggest that WT1-driven effects on fibroproliferation are non-cell-autonomous and may involve paracrine factors secreted by WT1-expressing cells [32]. These results highlight the need for a more detailed investigation into the molecular mechanisms of WT1-driven fibroblast activation and pulmonary fibrosis and whether the crosstalk between WT1 and the TGFβ/SMAD pathway regulates them. Identifying WT1 as a positive regulator of fibroblast activation suggests a new target for treating fibrotic lung diseases and possibly for regulating fibrosis in other organs. Aurora Kinase B Aurora kinase B (AurkB) is a mitotic serine/threonine kinase involved in various stages of the cell cycle [64,65]. This molecule is highly expressed in different types of cancer and contributes to tumor progression through the increased proliferation and survival of the cells [65]. In the fibrotic field, for the first time, we have shown that AurkB is highly upregulated in fibroblasts of the subpleural region in IPF and in two alternative pulmonary fibrotic mouse models [66]. Its expression in IPF fibroblasts is regulated by WT1, as demonstrated by knockdown (KD) and the overexpression of WT1, and its binding to the AurkB promoter was validated by chromatin immunoprecipitation and promoter-driven luciferase assays. 
KD studies in both IPF and TGFα lung fibroblasts have demonstrated a pathogenic role for AurkB in fibrogenesis by promoting fibroproliferation and survival. Specifically, AurkB KD showed a marked reduction in proliferative genes such as cyclin A2 (CCNA2) and polo-like kinase (Plk1) and impacted the expression of pro-apoptotic genes such as Bak, Bax, and Fas in fibrotic fibroblasts. Furthermore, the inhibition of AurkB activity using barasertib in vitro resulted in altered fibroblast activation processes, such as proliferation and apoptosis. Treatment with barasertib in both bleomycin and TGFα fibrotic models rescued mice from fibrosis by attenuating collagen deposition and proliferation in vivo [66]. This study shows that the WT1-AurkB axis is a critical driver of fibroproliferation and survival. Therefore, targeting AurkB therapeutically with barasertib may highlight its potential benefits in IPF. Heat Shock Protein 90 Heat shock protein 90 (HSP90) is an important molecule that has been extensively studied in organ fibrosis [67][68][69][70][71][72][73][74][75]. Its overexpression in subpleural compartments is implicated in the pathogenesis of pulmonary fibrosis, resulting in the regulation of key cellular processes apart from its chaperone activity [72]. HSP90AA and HSP90AB are the two isoforms of HSP90 that are well-studied in the context of fibrosis. They have common ATPase activity but also unique binding partners due to the lack of N-terminal signal peptides in HSP90AA. Under pathophysiological conditions, preferential binding to their partners allows them to perform different functions. Our laboratory and others have shown the pro-fibrotic activity of HSP90AB, which is able to regulate proliferation, ECM production, and myofibroblast transformation [72,76]. The KD of intracellular HSP90AB, but not HSP90AA, also attenuated pro-fibrotic genes such as col1α1, col5α1, and αSMA. However, both isoforms play important roles in fibroblast migration. Recently, Bellaye et al. [76] showed the synergistic role of HSP90AA and HSP90AB in myofibroblast transformation and survival. They demonstrated that HSP90AA was elevated in IPF, and its release into circulation was regulated by mechanical stress. The secreted HSP90AA signals via the lipoprotein receptor-related protein 1 (LRP1) and intracellular HSP90AB are essential to the stabilization of LRP1 and to amplify the HSP90AA-induced signal, thus regulating myofibroblast transformation. This indicates that both forms are pathogenic when expressed at higher levels than those under basal conditions. The authors also demonstrated that the ectopic treatment of fibroblasts with HSP90AA promotes αSMA expression independent of the TGFβ pathway, suggesting a spatio-temporal function of different isoforms. Currently, more than 10 HSP90 inhibitors that belong to multiple drug classes are in the advanced stages of clinical trials for cancer [77,78]. Most of these are small molecules that are derivatives of geldanamycin and block the activity of both isoforms. 17-N-allylamino-17-demethoxygeldanamycin (17-AAG) and 17-demethoxy-17-[[2-(dimethylamino) ethyl] amino]-geldanamycin (17-DMAG) bind to the ATP-binding pocket and change the conformation of the protein, leading to proteasomal degradation. In our study, we treated fibroblasts with 17-AAG to block the intracellular HSP90AA and HSP90AB forms, which attenuated fibroblast activation and TGFβ-induced myofibroblast transformation. 
Moreover, the pharmacological inhibition of HSP90 with 17-AAG or 17-DMAG in a pulmonary fibrosis model has attenuated ongoing and established fibrosis, highlighting the potential benefits of HSP90 inhibition in IPF [72,79]. In a study by Bellaye et al. [76], HS-30, a non-permeable HSP90 inhibitor, was used to target the extracellular HSP90AA in precision-cut lung slices. The authors demonstrated the effects by inhibiting the extracellular HSP90 AA form, suggesting the unique features of different isoforms. However, characterization of the extracellular HSP90AA inhibitory effects in the pulmonary fibrosis models is necessary to shed light on how HSP90 functions. Nevertheless, the emergence of a growing body of evidence suggests that HSP90 is an important target with the potential for future therapies in pulmonary fibrosis. Sox9 Sox9 belongs to the SOX family of proteins that are characterized by the highly conserved high mobility group (HMG) domain of sex-determining region Y (Sry) proteins [80]. Sox9 is selectively expressed by epithelial progenitor cells to modulate branching morphogenesis in the lung and the organized deposition of collagen as a part of cartilage formation in multiple organs, melanocyte differentiation, and male gonad development [81][82][83][84][85]. The dysregulation of Sox9 has been shown to be associated with the development of different types of cancer [86] and fibrosis in multiple organs, including the lung, kidney, heart, and liver [55,[87][88][89]. Our recent findings showed the aberrant Sox9 overexpression in fibroblasts that accumulate in the subpleural, peribronchial, and fibrotic foci of IPF lungs [55]. This was further validated by the upregulation of Sox9 in distal lung fibroblasts derived from IPF lungs and in TGFα-overexpressing mice with severe fibrotic lung disease. The promoterdriven luciferase assay suggests the direct binding of WT1 to the Sox9 promoter in the presence of TGFα, which, consistent with the upregulation of Sox9 in the lung fibroblasts of IPF patients, is positively regulated by the TGFα-WT1 axis. The loss of Sox9 in IPF fibroblasts is sufficient to attenuate the expression of fibrosis-associated genes such as ECM genes and genes associated with mesenchymal cell differentiation and growth. Similarly, the overexpression of Sox9 in fibroblasts resulted in the upregulation of pro-fibrotic growth factors such as TGFβ1, IL-6, IL-13, and IL-17, but the mechanisms underlying Sox9-driven fibrosis in the early and late stages of fibrosis are yet to be determined. Hence, studying Sox9-driven molecular networks and signaling pathways is a promising approach for identifying potential therapeutic candidates for IPF and other fibrotic diseases. A recent study by Jiang et al. [90] demonstrated that the vascular endothelial growth factor (VEGF) receptor 2 (kinase insert domain receptor (KDR)) loss mediated Sox9 overexpression in airway mucous metaplasia in asthma and cystic fibrosis (CF) patients. These new findings further support the potential role of Sox9 in the pathogenesis of other chronic lung diseases with dysregulated epithelia and mesenchyme. Other Key Regulators of (Myo)fibroblast Activation in Pulmonary Fibrosis Fox head box M1 (Foxm1) is a well-known cell cycle regulator that belongs to a family of transcription factors characterized by forkhead DNA binding domains. It acts downstream of the phosphoinositol-3-kinase (PI3K)-AKT signaling cascade. Penke et al. showed the upregulation of FOXM1 in fibroblasts isolated from the IPF lung [91]. 
The fibroblast-specific deletion of FOXM1 resulted in a reduced expression of several profibrotic genes such as αSMA, connective tissue growth factor (CTGF), Col1α1, and Tgfβ1. Fibroblast-specific Foxm1 deleted mice were also protected against bleomycin-induced fibrosis [91]. Recent studies have also demonstrated how FOXM1 suppression inhibits fibroblast differentiation to myofibroblasts during pulmonary fibrosis [92][93][94][95]. Another Fox protein called FOXL1 was found to be elevated in IPF lungs, potentially contributing to fibroblast accumulation in fibrotic lung lesions by activating TAZ (transcriptional coactivator with PDZ-binding motif) and YAP (Yes-associated protein) cascades and the PDGF axis via PDGFRα (platelet-derived growth factor receptor-α) [96]. Dock2 (Dedicator of cytokinesis 2) is an evolutionarily conserved guanine nucleotide exchange factor that activates Rac and regulates leukocyte migration and activation. Qian et al. reported elevated levels of Dock2 and colocalization with αSMA in the thickened pleura of nonspecific pleuritis patients [97]. The study also showed that the TGF-β is responsible for DOCK2 expression in human pleural mesothelial cells (PMCs) through meso MT processes. Furthermore, DOCK2 knockdown attenuated the expression of profibrotic genes such as αSMA, Col1A1, and fibronectin1. They also demonstrated that Tgfβ-induced MesoMT and Dock2 overexpression modulated Snail expression via Smad3 in PMCs [97]. In another study, elevated DOCK2 expression was observed in fibroblasts isolated from IPF and the bleomycin model [97]. The authors also showed that TGFβ-induced DOCK2 overexpression is dependent on both SMAD and ERK signaling. Overall, the studies highlighted here suggest that a comprehensive understanding of both cellular and molecular mechanisms underlying fibrosis in the distal areas of the lung is critical for the development of new therapies against IPF. Fibroblasts and myofibroblasts are the primary targets to attenuate excessive ECM deposition in severe fibrotic lung diseases. These cells display significant heterogeneity, which is evidenced by the differential expression of markers such as Thy1 and differences in their lipid content, cytoskeletal composition, and cytokine profile. Multiple single-cell RNA sequencing (scRNA-seq) studies from both humans and mice have demonstrated morphologically and functionally distinct fibroblasts from IPF compared to normal lungs. The list of fibroblast populations includes myofibrogenic mesenchymal fibroblasts (Axin + ), mesenchymal alveolar niche (Axin2 + PDGFR + ) , fibroblasts (Lgr6 + ), fibroblasts involved in alveolar differentiation (Lgr5 + ), collagen-producing (Cthrc1 + ), profibrotic mesenchymal cells (PDGFRb hi ) and pleural ECM-producing fibroblasts (Has1 hi ) [24,31,[98][99][100]. Our recent studies using preclinical models and the immunostaining of IPF lungs have demonstrated the accumulation of myofibroblasts that express high levels of profibrotic transcription factors, including WT1 and Sox9, in the fibrotic lesions of IPF [32,34,55,99,101]. The accumulation of these profibrotic populations was further validated in recent scRNA-seq studies ( Funding: This work was supported by the National Heart Lung and Blood Institute (1R01 HL134801 and 1R01 HL157176). Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. 
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
Inflammatory responses in esophageal mucosa before and after laparoscopic antireflux surgery

BACKGROUND Currently, the primary treatment for gastroesophageal reflux is acid suppression with proton pump inhibitors, but they are not a cure, and some patients do not respond well to them or refuse long-term use. Alternative therapies are therefore needed, and a better understanding of the disease is required to develop them. Laparoscopic anti-reflux surgery (LARS) can resolve the symptoms of these patients and plays a significant role in evaluating esophageal healing once the harmful exposure has been prevented. Successful LARS improves typical gastroesophageal reflux symptoms in most patients, mainly by reducing the exposure time of the esophagus to gastric contents. Amelioration of the inflammatory response and a recovery response in the esophageal epithelium are expected following the cessation of the noxious attack.

AIM To explore the role of inflammatory biomolecules in LARS and assess the time required for esophageal epithelial recovery.

METHODS Twenty-two patients undergoing LARS (evaluated before and 5.8 ± 3.8 months after LARS) and 25 healthy controls (HCs) were included. All subjects underwent 24-h multichannel intraluminal impedance-pH monitoring and upper gastrointestinal endoscopy, during which esophageal biopsy samples were collected. Inflammatory molecules in the esophageal biopsies were investigated by reverse transcription-polymerase chain reaction and multiplex enzyme-linked immunosorbent assay.

RESULTS Post-LARS samples showed significant increases in proinflammatory cytokines [interleukin (IL)-1β, interferon-γ, C-X-C chemokine ligand 2 (CXCL2)], anti-inflammatory cytokines [CC chemokine ligand (CCL) 11, CCL13, CCL17, CCL26, CCL1, CCL7, CCL8, CCL24, IL-4, IL-10], and homeostatic cytokines (CCL27, CCL20, CCL19, CCL23, CCL25, CXCL12, migration inhibitory factor) compared to both HCs and pre-LARS samples. CCL17 and CCL21 levels were higher in pre-LARS samples than in HCs (P < 0.05). The mRNA expression levels of AKT1, fibroblast growth factor 2, HRAS, and mitogen-activated protein kinase 4 were significantly decreased post-LARS vs pre-LARS. CCL2 and epidermal growth factor gene expression levels were significantly increased in pre-LARS samples compared to HCs (P < 0.05).

CONCLUSION The presence of proinflammatory proteins post-LARS suggests ongoing inflammation in the epithelium. Elevated homeostatic cytokine levels indicate that cellular balance is maintained for about 6 months after LARS. The anti-inflammatory response post-LARS shows suppression of inflammatory damage and ongoing postoperative recovery.
INTRODUCTION Gastroesophageal reflux disease (GERD) is a chronic public health problem characterized by typical symptoms of heartburn and/or regurgitation.It is a common worldwide condition and ranks among the most prevalent diseases in adults [1].Although GERD is widespread, the factors initiating the pathogenesis of the disease are not fully understood.Two theories exist on the disease's pathogenesis.The first of these is the direct effect of gastric contents on the surface epithelium, where the inflammatory process begins in the lumen and advances with the assistance of dilated intercellular spaces [2].However, this theory is insufficient to explain patients without erosion.Except for erosive esophagitis, no difference has been shown in other phenotypes of GERD according to the level of dilated intercellular spaces [1].The other theory is cytokine-mediated damage.According to this theory, proinflammatory cytokines recruit immune cells, and tissue damage occurs as a result of the inflammatory response mediated by incoming immune cells starting from the basolateral cell layers [3].Substances like acid and pepsin in the reflux content contribute to this damage [4]. The primary treatment modality currently is inhibiting gastric acid secretion with proton pump inhibitors (PPIs).However, some patients resist long-term drug usage.While erosive reflux patients generally respond well to PPIs, others require continuous medical treatment due to the absence of a cure through drug use [5].In addition, some phenotypes of the disease do not totally or even partially respond to PPIs.Drug therapy provides temporary relief but is not a definitive treatment method.For treatment of the disease, exploring alternative therapeutic approaches becomes imperative.Understanding the disease pathogenesis is crucial to identifying target molecules for the development of preventive or therapeutic medications. Since symptom resolution can be achieved in up to 93.1% of patients following laparoscopic anti-reflux surgery (LARS) [6], this modality is crucial for assessing the healing process of the esophageal epithelium after preventing the effects of noxious agents.The aim of this study was to investigate the role of inflammatory and recovery biomolecules after LARS by exploring the inflammatory pathways that may contribute to the pathogenesis of the disease.Additionally, we aimed to determine healing time frame to ascertain whether a meaningful period for healing allows the esophageal epithelium to fully recover. Subjects In total, 35 patients with GERD who had been approved for LARS by the Ege University GERD Study Group, and 25 healthy controls (HCs) were included in the study.However, the follow-up upper gastrointestinal (GI) endoscopy continued with 23 patients, as 12 patients did not attend their post-LARS upper GI endoscopy appointments.The interval between the two upper GI endoscopies ranged from 2 to 18 months (mean 5.8 ± 3.8 months).All patients had pyrosis and/or regurgitation at least once a week or more frequently and completed the GERDQ (Validated Mayo Clinic) and QoLRAD (quality of life) questionnaires.Patients were ceased proton-pump inhibitors, H2 blockers, and antacids at least 10 d pre-procedure. 
Esophageal motility tests were done before placing the multichannel intraluminal impedance-pH (MII-pH) catheter at the upper lower esophageal sphincter (LES) boundary.Data were analyzed using MMS software version 8.1 (MMS -Laborie, the Netherlands).An eight-channel motility catheter with four radial and four circumferential openings was used for motility measurements.After an 8-h fast, the catheter was placed 50-55 cm deep via the nasal passage.LES location was identified using intragastric pressure.For 24-h MII-pH monitoring, a calibrated MII-pH catheter (MMS -Laborie, the Netherlands) was positioned 5 cm above the LES, connected to a recording device (MMS -Laborie, the Netherlands).All HCs had normal intraesophageal 24-h MII-pH and high-resolution manometry and a negative history of upper GI disease or surgery.The patients with GERD who were treated with LARS already had a pathological reflux burden according to MII-pH monitoring and/or endoscopically observed esophageal erosions.Surgical indications were determined by the entire GERD team, which included specialists in gastroenterology, surgery, ENT, pulmonary medicine, and psychiatry. The exclusion criteria for both patients and HCs included primary esophageal motility disorders, Barrett's esophagus, previous upper GI surgery, chronic renal failure, severe coronary artery disease, severe chronic obstructive pulmonary disease, uncontrolled diabetes mellitus, pregnancy, lactation, and other disorders that may affect the study, with the exception of cancer (except non-melanoma skin cancer). Biopsy specimens Upper GI endoscopy was conducted by one gastroenterologist (Bor S), and the biopsy samples were taken by one technician.Esophageal biopsy specimens (n = 4) were endoscopically taken from normal mucosa 3-5 cm above the Z-line without erosion using biopsy forceps (Radial Jaw 4, opening diameter 2.8 mm, Boston Scientific, United States).Two biopsies were preserved in RNAzol ® (GeneCopoeia, Rockville, MD) for subsequent mRNA studies at -80 °C, while the remaining samples were immediately frozen at -80 °C for later protein measurements (Figure 1). Gene expression The biopsy samples were homogenized using a Bioprep-6 Homogenizer (Hangzhou Allsheng Instruments Inc., Zhejiang, China), and total RNA was isolated with an Aurum™ Total RNA Mini Kit (Bio-Rad Laboratories, Inc., Hercules, CA) following to the manufacturer's instructions.The absorbance, indicating the concentration and purity of the total RNA, was measured at 260/280 nm with a NanoDrop spectrophotometer (Thermo Scientific, Wilmington, DE) using 2 μL of each homogenized and isolated sample. cDNA was synthesized from total RNA in each sample using qPCR and an iScript cDNA Synthesis Kit with a reverse transcriptase enzyme (Bio-Rad Laboratories, Inc., Hercules, CA) following to the manufacturer's instructions.Real-time polymerase chain reaction was conducted using a LightCycler ® 480 (Roche Diagnostics Inc., Basel, CH).iTaq Universal SYBR Green Supermix (Bio-Rad Laboratories, Inc., Hercules, CA) and two different primer libraries -(Human JAK/STAT Signaling Primer Library and Human NFκB Primer Library) Real Time Primers (LLC) -were employed according to the manufacturer's specifications.The housekeeping genes selected were actin-beta, beta-2-microglobulin, and ribosomal protein L13a. 
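As a rough illustration of the relative quantitation pipeline implied by this setup (and by the 2^-ΔΔCt analysis described under Statistical analysis below), the sketch normalizes target Ct values to the mean of the three housekeeping genes, calibrates against the control group, and applies a ≥ 1.5-fold-change filter. Every Ct value, the example target gene, and the group sizes are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

def delta_ct(target_ct, housekeeping_ct):
    """dCt per sample: Ct(target) minus the mean Ct of the housekeeping genes."""
    return np.asarray(target_ct, float) - np.mean(housekeeping_ct, axis=0)

def fold_change(dct_group, dct_control):
    """2^-ddCt of each sample relative to the mean dCt of the control group."""
    return 2.0 ** -(np.asarray(dct_group) - np.mean(dct_control))

# Placeholder Ct values: rows = ACTB, B2M, RPL13A; columns = samples.
hk_hc   = np.array([[17.1, 17.3], [19.0, 18.8], [20.2, 20.4]])
hk_post = np.array([[17.4, 17.0], [19.2, 19.1], [20.1, 20.5]])
fgf2_hc, fgf2_post = [24.8, 25.1], [26.3, 26.6]   # hypothetical target gene

dct_hc = delta_ct(fgf2_hc, hk_hc)
dct_post = delta_ct(fgf2_post, hk_post)
fc = fold_change(dct_post, dct_hc)
fc_mean = float(np.mean(fc))
print("fold change per post-LARS sample:", np.round(fc, 2))
print("passes the 1.5-fold filter (either direction):",
      max(fc_mean, 1 / fc_mean) >= 1.5)
# Nonparametric group comparison of dCt values (Mann-Whitney U):
print(stats.mannwhitneyu(dct_hc, dct_post))
```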
Multiplex protein measurements The biopsy samples were homogenized using a Bioprep-6 Homogenizer (Hangzhou Allsheng Instruments Inc., Zhejiang, China), and total protein was extracted with a Bio-Plex TM Cell Lysis Kit (Bio-Rad Laboratories, Inc., Hercules, CA) according to the manufacturer's instructions.After centrifugation (4500 rpm for 10 min) [7], the isolated proteins were divided into aliquots, and protein amounts were determined using the Lowry method [7].The protein levels of chemokines and phospho-cell signaling proteins were measured using Bio-Plex Multiplex Immunoassays (Human Chemokine 40-Plex panel, Pro Cell Signaling Phospho 7-plex panel, Pro Cell Signaling Phospho NFκB p65 ve Pro Cell Signaling Phospho p38 MAPK, Bio-Rad Laboratories, Inc., Hercules, CA) according to the manufacturer's instructions. Statistical analysis The 2 -ΔΔCt method was used for the quantitation analysis of gene expression.The corresponding gene expression levels in each group were compared.Gene expression levels in each group were compared, and genes with a fold change ≥ 1.5 were included in the evaluation.Statistical analyses were performed using ANOVA, Student's t test (for parametric data) and the Mann-Whitney U test (for nonparametric data) with IBM ® SPSS ® Statistics 25.0.A P value of < 0.05 was considered statistically significant in all comparisons.A paired samples t-test was applied for pre-LARS and post-LARS comparisons.Parametric values were presented as mean ± SD, while nonparametric tests used median and variance values.There is no significant relationship between gender, age and body mass index.GERD: Gastroesophageal reflux disease; HC: Healthy control; BMI: Body mass index. Study group One patient with erosive reflux disease (ERD) C/D out of 23 patients and 5 out of 25 HCs were excluded for various reasons: The presence of multiple polyps observed during upper GI endoscopy, excessive bleeding during biopsies, desaturation, and other related issues with the sedation procedure, as well as the relapse of erosions and/or symptoms after LARS.Ultimately, a total of 22 patients [10 ERD A/B, 6 ERD C/D, 6 non-ERD (NERD)] and 20 HCs were included in the study (Table 1). The levels of homeostatic cytokines, including CCL27, CCL20, CCL19, CCL23, CCL25, CXCL12 and migration inhibitory factor (MIF), were higher in the post-LARS group than in the HC and/or pre-LARS groups (Table 3, Supplementary Table 4).Specifically, CCL21 levels were higher in the pre-LARS group than in the HC group (P < 0.05). 
DISCUSSION GERD is typically treated with PPIs, aimed at suppressing gastric acid secretion.However, long-term drug use can pose challenges, and some patients may exhibit a low response to PPI treatment, necessitating a more permanent solution.Laparoscopic antireflux surgery is offered as an alternative method to alleviate reflux symptoms, with a success rate of approximately 90% [8,9] in experienced centers.Following LARS, the contact of the esophagus with gastric contents and noxious agents is significantly reduced, leading to a drastic alleviation of symptoms and observable healing of the epithelium during endoscopy, as seen in our patients.In this study, CCL21 levels were found to be higher in pre-LARS patients compared to controls.These intriguing findings may be explained by the role of the CCL21/CCR7 axis in the regulation of T-cell immunity.Unsoeld et al [10] observed that transgenic mice with high expression of CCL21 failed in the CD4 T-cell response against local skin infections.They suggested that a high concentration of CCL21 downregulated CCR-7, which is responsible for mediating the T-cell adaptive immune response and peripheral tolerance [10,11].It could be speculated that reflux disease is associated with an imbalance between CCL21 and CCR7 expression, characterized by an increase in favor of CCL21. Chemotactic response of antireflux surgery We investigated cellular-level changes before and after surgical treatment to comprehend the pathophysiological mechanism underlying GERD.Upon comparing data from patients' post-LARS to those from HCs, we observed an increase in IL1β, MEK-1, p38 MAP kinase, and certain chemokine levels (CCL1, CCL19, CCL20, CCL21, CCL23, and CCL24) in the post-LARS group.These findings suggest an elevation in the inflammatory process in reflux disease through the toll-like receptors (TLR) signaling pathway and MEK/ERK pathway.While the MEK/ERK pathway is primarily activated by growth factors, osmotic stress, and cytokines [12], p38-MAPK is predominantly triggered by oxidative stress, UV radiation, hypoxia, ischemia, and specific proinflammatory cytokines like IL-1 and tumor necrosis factor-alpha (TNFα) [13]. Additionally, the expression level of RAF1, an activator of the MEK/ERK pathway that transmits chemical signals outside the cell to the cell nucleus, was significantly increased in post-LARS patients.Overactivity of these pathways results in NFκB activation and subsequently increased levels of pro-inflammatory cytokines, especially IL1β and IFNγ.In our study, the elevated IL1β levels may have been regulated by these two pathways and NFκB [14,15]. When we evaluated our data concerning the type of reflux, we observed varied responses in protein levels after surgery among different reflux phenotypes (Supplementary Table 5).Interestingly, there was no significant change in notable chemokine and protein levels in reflux patients with ERD C/D after surgery.This might be explained by the limited number of patients (n = 6) in this group or the time of control endoscopy after surgery (approximately 5.8 months after LARS). 
IL1β, a potent proinflammatory regulator, is secreted from many immune cells and triggers the production of acute phase proteins, proinflammatory cytokines, and adhesion molecules.It also activates T and B lymphocytes [16].Together, IFNγ and TNFα are precursors of the inflammatory response.IFNγ, predominantly secreted from activated T lymphocytes, is a crucial cytokine with pleiotropic immunological functions.Although elevated mostly in pathogenic infections, it has many functions, including promoting macrophage growth, antigen production, activation the innate immune system, fostering lymphocyte-endothelial interaction, regulating type 1 T helper (Th1)/Th2 balance, and controlling cellular proliferation and apoptosis [17].IFNγ can also trigger IL1β synthesis [18]. The elevation of CXCL2 levels in post-LARS provides evidence of the presence of neutrophils in the tissue [19].IL1β also induces the production of macrophage MIF, a regulator of innate immunity.MIF mostly causes macrophage accumulation in hypersensitivity regions [20].MIF contributes to the activation of NFκB by inhibiting the MEK/ERK signaling pathway and IKBA, an inhibitor of NFκB [21].It might be suggested that the TLR signaling pathway through MAP kinase and the MEK/ERK pathway was suppressed after surgery, likely due to depletion of stimulants in the lumen.On the other hand, the proinflammatory status, demonstrated by increases in IL1β, NFκB, and IFNγ levels, remained active in tissues after surgery. This could be explained by two theories: Oxidative stress that might be elevated due to ischemia-reperfusion after surgery stimulates NFκB activation by increasing nuclear factor E2-related factor 2 and heme oxygenase levels.The second explanation involves chloride sensing regulation of NACHT, LRR, and PYD domains-containing protein 3 (NLRP3) inflammasome activation [22,23].Recent studies have shown that the chloride concentration in cells is a critical control point for NLRP3 inflammasome activation.Mayes-Hopfinger et al [22] revealed that decreased intracellular Cl activates the NLRP3 inflammasome, promoting an immune response by switching the proinflammatory status of a phagocyte.Although their study was conducted in macrophages, we can speculate that depletion of intracellular chloride concentration due to a decrease in extracellular chloride concentration, achieved by blocking acid flux to the esophagus with surgical intervention, might activate the NLRP3 inflammasome in the esophageal epithelium.These theories warrant further study. TECK/CCL25 and CTACK/CCL27 levels were also increased in the post-LARS group compared to both the pre-LARS and HC groups.T memory and effector lymphocytes activated by IL1β rapidly migrate to the inflammatory epithelium via CCL25 and CCL27.However, it is known that these two chemokines primarily act via memory T cells [19].CCL25 and CCL27 have more homeostatic effects on memory cells [19,24]. Additionally, I309/CCL1 and monocyte chemoattractant protein (MCP)2/CCL8, which have a homeostatic effect on memory T cells and have anti-inflammatory effects on Th2 and regulatory T cells during inflammation, were significantly increased after surgery.Moreover, there was an increase in MCP3/CCL7 [25] and IL-10 levels, which can block the Th1 response that mediates monocyte motility, supported the anti-inflammatory activation post-LARS. 
Our study showed that EOTAXIN-2/CCL24, MIP1d/CCL15, and MIP3b/CCL19 levels increased in NERD patients after surgery. EOTAXIN2/CCL24, responsible for the recruitment of basophils and eosinophils, promotes cell migration and regulates inflammatory and fibrotic activities. It is secreted from various cells, especially activated fibroblasts, leading to fibroblast proliferation and collagen synthesis [26]. The increase in CCL24 levels indicates that the collagen deposition and reorganization process was active, together with anti-inflammatory regulation, in post-LARS tissues. Increased EGF expression in this group also supports the healing process and proliferation [27] and provides information about the presence of eosinophils or basophils in tissues. Anti-inflammatory process following antireflux surgery On the other hand, elevated levels of CCL1, CCL11 and CCL24 indicate that the Th2 response is activated and that the resolution of the inflammatory response is increased post-LARS. CCL11 mediates the Th2 response as well as eosinophil and basophil migration [19]. In addition to its anti-inflammatory properties, CCL17 also helps maintain homeostatic balance by mediating the transition of effector memory T cells to the inflammatory region. MPIF1/CCL23 levels increased after surgery in patients with ERD A/B. CCL23, secreted by neutrophils via CXCL2, supports the inflammatory response by activating lymphocytes, monocytes and macrophages [28]. However, homeostatic chemokines such as MIP3-b/CCL19 and SDF1-a+b/CXCL12 were also increased [24], along with CCL25 and CCL27. CCL19 helps stabilize the inflammatory response by inducing naive T cells and central memory T cells to return to lymph nodes [25]. Similarly, neutrophils, monocytes and B cells return to the bone marrow and mediate the suppression of the inflammatory response [19]. IL-4 levels increased after surgery in NERD patients compared to HCs. IL-4 has potent cytoprotective properties [29]. We think it may play a major role in preserving mucosal integrity after surgery. IL-4 also exerts anti-inflammatory effects by inducing the production of CCL7 and CCL11 from peripheral cells in the inflammatory region [25]. These two elevated chemokines may be secreted via IL-4. IL-4 can also suppress important cytokines in the proinflammatory process, such as IL1β and TNFα [30,31]. The significant increase in important anti-inflammatory cytokines such as IL-4 and IL-10 in the NERD and ERD A/B groups may have caused the suppression of important proinflammatory markers in the postoperative group [32].
These findings suggest that the postoperative recovery process is ongoing after successful surgery. In addition, the proinflammatory effect is still ongoing, and it is possible that the anti-inflammatory response overwhelms the ongoing proinflammatory process. After the operation, patients were rescored for symptoms, improvements were noted in the control endoscopy, and relapsed patients were excluded. However, there may be patients whose mucosal damage had healed but who still had insensible acid attacks. Therefore, reflux symptoms that may occur after LARS may not always indicate failure of the surgery. A limitation of our study is the inability to perform a 24-h pH-impedance test in the post-LARS group, preventing the collection of rational data on acid attacks. In addition, only three patients (11, 14, and 18 months) visited our clinic for control endoscopy after LARS (the other 19 patients were observed for < 6 months). Although no significant change was observed in pro-inflammatory and chemotactic cytokines when these three patients were excluded, the inflammatory cytokine levels in these three patients indicated that levels returned to the preoperative level (data not shown). However, a statistical calculation could not be made because we only had three patients in the long term. More patients are needed to examine the long-term effects. bP < 0.05 vs pre-laparoscopic anti-reflux surgery. Mean ± SD values are given for Student's t test; median and variance values are given for the Mann-Whitney U test. LARS: Laparoscopic anti-reflux surgery; HC: Healthy control; NF-κB: Nuclear factor kappa-beta. Figure 2 Significant gene expression. A: Significant gene expression in the post-laparoscopic anti-reflux surgery group compared to healthy controls; B: Significant gene expression in the post-laparoscopic anti-reflux surgery group compared to the pre-laparoscopic anti-reflux surgery group. All comparisons are given as fold changes. BCL3: B-cell CLL/lymphoma 3; FGF2: Fibroblast growth factor 2; IFNAR1: Interferon (alpha, beta and omega) receptor 1; IKBKE: Inhibitor of kappa light polypeptide gene enhancer in B-cells, kinase epsilon; JUN: Jun proto-oncogene; MAP2K4: Mitogen-activated protein kinase 4; EGF: Epidermal growth factor; TNFRSF1A: Tumor necrosis factor receptor superfamily, member 1A. Table 2 Cell signaling proteins. aP < 0.05 vs healthy control.
2024-03-23T15:17:00.516Z
2024-03-27T00:00:00.000
{ "year": 2024, "sha1": "e4ac32369af49a149fb8478b3c9bd681552620ec", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4240/wjgs.v16.i3.871", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "bf73b03677dec0895f4b1586af1b4993c44011cd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259369334
pes2o/s2orc
v3-fos-license
Identification and elucidation of cross talk between SLAM Family Member 7 (SLAMF7) and Toll-like receptor (TLR) pathways in monocytes and macrophages Uyory Choe 1 , Quynhchi Pham 2 , Young S. Kim 3 To further elucidate the expression, regulation and function of Signaling Lymphocytic Activation Molecule Family (SLAMF) protein members in human monocytes and macrophages. Un-differentiated monocytic THP-1 cell (u-THP-1) and differentiated THP-1 macrophage (d-THP-1) were used as culture models in the study. Responses of cells to the differentiation agents phorbol ester (25 ng/ml) and TLR (Toll-like receptor) ligands were assessed. RT-PCR and Western blot analysis were used to determine mRNA and protein level. Pro-inflammatory cytokine mRNA expression levels and phagocytosis were used as functional markers. Data analyzed using t-test, one-way or two-way ANOVA followed by post hoc test. SLAMFs were differentially expressed in THP-1 cells. Differentiation of u-THP-1 to d-THP-1 led to significantly higher SLAMF7 mRNA and protein levels than other SLAMF. In addition, TLR stimuli increased SLAMF7 mRNA expression but not protein expression. Importantly, SLAMF7 agonist antibody and TLR ligands synergistically increased the mRNA expression levels of IL-1β, IL-6 and TNF-α, but had no effect on phagocytosis. SLAMF7 knocked-down in d-THP-1 significantly lowered TLR-induced mRNA expressions of pro-inflammatory markers. SLAM family proteins are differentially regulated by differentiation and TLRs. SLAMF7 enhanced TLR-mediated induction of pro-inflammatory cytokines in monocytes and macrophages but not phagocytosis.
Abbreviations d-THP-1 Differentiated THP-1 macrophage u-THP-1 Undifferentiated THP-1 monocyte LPS Lipopolysaccharide IL-1β Interleukin 1 beta IL- 6 Interleukin 6 CCL2 Chemokine ligand 2 TNF-α Tumor necrosis factor alpha COX-2 Cyclooxygenase 2 PGE 2 Prostaglandin E2 IFN-γ Interferon gamma Inflammatory pathways play a vital role in response to many insults such as injury, infection, trauma 1 and are regulated by immune cells including monocytes and macrophages. During inflammation, monocytes serve as immune effector cells and are equipped with chemokine receptors and adhesion receptors for the role 2 . Monocytes produce pro-inflammatory cytokines such as IL-1β, IL-6, CCL2 and TNF-α, and remove cell debris and pathogens through phagocytosis. Additionally, monocytes are recruited to the site of inflammation in a tissue where they can be differentiated into macrophages 3 . This differentiation process is important since production of pro-inflammatory cytokines such as IL-1β, IL-6, CCL2 and TNF-α by macrophages is critical for an acute inflammatory response 4 . Macrophages also engulf debris, foreign substances, microbes, and cancer cells through phagocytosis. In addition, macrophages play a critical role in innate immunity and help initiate adaptive immunity by recruiting other immune cells such as the lymphocytes 5 . The SLAM family proteins are a group of type I transmembrane receptors present in immune cells such as monocytes, macrophages, natural killer (NK) cells, CD8+ T lymphocytes, B lymphocytes and mature dendritic cells 6,7 . The SLAMF receptors currently have 9 members with different names in the literature, which include SLAMF1 (SLAM or CD150), SLAMF2 (CD48), SLAMF3 (Ly-9 or CD229), SLAMF4 (2B4 or CD244), SLAMF5 (CD84), SLAMF6 (Ly108 in mice, NTB-A or SF2000 in humans), SLAMF7 (CRACC, CD319 or CS1), SLAMF8 (CD353 or BLAME) and SLAMF9 (SF2001 or CD84H) [8][9][10][11] . The SLAMF receptors are important immunomodulatory receptors and have various functions including T cell activation [12][13][14] , Th2 cytokine production [13][14][15][16][17][18][19][20][21][22][23] , NK-and CD8+ T cells mediated cytotoxicity 24,25 . In general, SLAMF receptors are self-ligand i.e. each SLAM protein recognizes itself to trigger down-stream signaling pathways 26 . This interaction allows different immune cell types to interact with each other through SLAMF receptors 26 . One exception is SLAMF4, which recognizes SLAMF2 and vice versa 26 . The SLAMF receptors are usually composed of two immunoglobulin (Ig) like domains, one variable (V) like domain and one constant 2 (C2) like domain 27 ; except SLAMF3 which has four Ig-like domains with two repeated patterns of V-like and C2-like domains 27 . The signal transduction of SLAMF receptors is mediated through signaling lymphocyte activation molecule-associated protein (SAP) adaptors. The SAP adaptors are comprised of SH2 Domain Containing 1A (SAP), SH2 Domain Containing 1B (EAT-2) and EAT-2-related transducer (ERT). SAP adaptors are small proteins that consist of a single Src homology 2 (SH2) domain. Through the SH2 domain, the SAP adaptors interact with immunoreceptor tyrosine-based motifs (ITSMs) in the cytoplasmic domain of SLAMF receptors. Among SAP adaptors, EAT-2 appears to be receptor-specific. For example, in human NK-cells, SLAMF7 recruits exclusively EAT-2 but not SAP 27 . 
SLAMF receptors can induce signals through interaction with SAP adaptors to various downstream effectors such as nuclear factor of activated T-cells (NFAT), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and vav guanine nucleotide exchange factor 1 (Vav1) to exert their biological activities 28 . However, some exceptions exist. For example, SLAMF2 is an atypical SLAMF member that has no cytoplasmic region and thus lacks type I transmembrane glycoproteins but has a glycosylphosphatidylinositol membrane anchor 27 . SLAMF8 and SLAMF9 have a short cytoplasmic tail that lacks the ITSM, similar to SLAMF2, and have no known ligands 27 . Although many studies have been reported for SLAMF receptors in T-and NK-cells 9,26,29 , the literature on the function of SLAMF receptors in monocytes and macrophages remain scarce. Existing literature suggests that the SLAMF proteins and TLRs appear to interact with each other 30,31 . The TLRs are pattern recognition receptors present in immune cells and play an important role as a defense mechanism against bacterial and viral infection 32 . For examples, the gram-negative bacterial cell wall component, LPS, can bind to TLR4. Flagellin, the primary protein component of the flagellar filament in bacteria, can bind to TLR5. Also, bacterial lipoprotein can bind to TLR2/6 and TLR1/2 and synthetic lipopeptides such as Pam2CSK4 and Pam3CSK4 are often used to investigate this pathway 33,34 . When ligands bind to TLRs, signal transduction through transcriptional factors such as NF-κB and AP-1 are activated. Activation of NF-κB and AP-1 induce pro-inflammatory cytokines (TNF-α, IL-1β and IL-6) and chemokines (CCL2 and CXCL8) leading to inflammatory responses which include endothelial cell activation, neutrophil granulocyte activation, fever induction, acute phase protein synthesis, and B lymphocytes proliferation 35 . Farina and others reported that SLAMF1 can be induced by engagement of TLR2, 4 and 5 30 . Sintes et al. 31 reported that SLAMF5 modulate signaling pathways downstream of TLR4. LPS-stimulated secretion of two pro-inflammatory cytokines (IL-6 and TNF-α) were significantly reduced when RAW-264.7 macrophage cells were transfected with SLAMF5 siRNA 31 . Hence, cross talk between the SLAMF and TLR pathways exist but the precise relationship between specific SLAMF receptors and TLRs remain unclear. In addition to TLR pathways, the differentiation of monocytes to macrophages may play a critical role in SLAMF protein regulation. Previously, our lab found that SLAMF5 (CD84) mRNA as well as protein levels were up-regulated when phorbol 12-myristate 13-acetate (PMA) was used to induce differentiation of u-THP-1 cells (monocytes) to d-THP-1 cells (macrophages) 36 . Therefore, it is possible that other SLAMFs may also be subjected to similar regulation as SLAMF5, but this conjecture warrants further elucidation. Recently, SLAMF7 also received much attention due to their high expression in multiple myeloma and therefore could be explored as a potential therapeutic target 37 . This led to the development of Elotuzumab, a novel humanized immunoglobulin G1 (IgG1) monoclonal antibody which targets SLAMF7 on myeloma cancer cell. Elotuzumab has recently been approved for the treatment of relapsed or refractory multiple myeloma in combination with lenalidomide and dexamethasone 38 . The mechanisms appeared to involve activation of NK cell as well as activation of macrophage phagocytosis to eradicate tumor cells 39 . 
In addition, SLAMF7-mediated macrophage phagocytosis was also reported to be involved in anti-CD47-induced phagocytosis of hematopoietic tumor cells 40 . This effect appeared to be SAP-independent 40 . Finally, SLAMF7 is shown to play a significant role in macrophage-related inflammatory diseases such as rheumatoid arthritis, Crohn's disease, and COVID-19 infection 41 . However, the role of SLAMF7 and other SLAMs in normal inflammatory processes such as phagocytosis in monocytes and macrophages is less well known and warrants elucidation. Questions regarding the role of SLAMF proteins and their mechanisms of regulation, especially in monocytes and macrophages, remain unclear. In this study, we focus on examining the role of SLAMF members in the common inflammatory response, such as the interaction with bacterial products, to determine 1) if SLAMF proteins are regulated during the differentiation process from monocytes to macrophages, 2) if SLAMF proteins are regulated by the TLRs in monocytes and macrophages, and 3) if SLAMF proteins modulate TLR stimuli-induced production of cytokines and phagocytosis. We focused on SLAMF 1, 3, 4, 5, 6 and 7, excluding SLAMF2, SLAMF8 and SLAMF9 due to their atypical composition, i.e., lacking the ITSM present in other SLAMF members. Our results indicated that SLAMF proteins were differentially regulated by cellular differentiation and TLRs, and SLAMF7 was involved in enhancing TLR-mediated induction of pro-inflammatory cytokines but not phagocytosis in monocytes and macrophages. Materials and methods. Materials and reagents. siRNA transfection. Briefly, the siRNA/HiPerFect complex was made and added dropwise to PMA (25 ng/ml) activated THP-1. The whole cell/complexes solution was incubated for 6 h, then additional fresh cell culture media in the presence of 10% FBS and 1% Pen/Strep was added to each well. Plates were then incubated for 48 h to facilitate transfection, followed by incubation with or without LPS for an additional 4 h. After LPS treatment, cells were harvested for RNA isolation. Relative mRNA levels for SLAMF7, IL-1β, IL-6, TNF-α, and CCL2 were determined using RT-PCR as described below. Total RNA isolation, cDNA synthesis, and Real-time PCR. Total RNA was isolated by using the TRIzol reagent as previously described 36 . Real-time PCR was used for quantifying changes in relative mRNA levels. 1 µg of total RNA was used for cDNA synthesis using the AffinityScript Multi Temperature cDNA Synthesis kit according to the manufacturer's protocol. Real-time PCR was performed using the TaqMan Fast Universal PCR Master Mix according to the previously published protocol on the ViiA7 real-time PCR system following the manufacturer's protocol 36 . For the determination of relative mRNA levels, TATA binding protein (Tbp) was used as the housekeeping gene for normalization. Relative expression values were generated using the comparative Ct method as described in the manufacturer's protocol. The following TaqMan Gene Expression Assays (Thermo Fisher Scientific, Waltham, MA) PCR primer/probe sets were used: SLAMF1 (Hs00234149_m1), SLAMF3 (Hs03004331_m1), SLAMF4 (Hs00175568_m1), SLAMF5 (Hs01547121_m1), SLAMF6 (Hs00372941_m1), SLAMF7 (Hs00221793_m1), IL-1β (Hs01555410_m1), IL-6 (Hs00985689_m1), CCL2 (Hs00234140_m1), and TNF-α (Hs00174128_m1).
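As a hedged illustration of the comparative Ct quantification described above (the function name and the Ct values below are hypothetical placeholders, not measured data from this study), a minimal Python sketch of the 2^-ΔΔCt calculation, normalized to the TBP housekeeping gene and to an untreated calibrator sample, might look like this:

```python
# Minimal sketch of the comparative Ct (2^-delta-delta-Ct) calculation.
# All Ct values below are invented for illustration only.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene, normalized to a reference (housekeeping)
    gene and expressed relative to a calibrator (e.g., untreated) sample."""
    delta_ct = ct_target - ct_ref                # treated sample
    delta_ct_cal = ct_target_cal - ct_ref_cal    # calibrator sample
    delta_delta_ct = delta_ct - delta_ct_cal
    return 2 ** (-delta_delta_ct)

# Hypothetical example: a target gene in treated vs untreated cells, TBP as reference.
fold = relative_expression(ct_target=24.0, ct_ref=26.5,
                           ct_target_cal=27.0, ct_ref_cal=26.4)
print(f"Relative expression: {fold:.1f}-fold")   # about 8.6-fold in this made-up example
```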
Western blot analysis. Western analysis of protein expression was conducted as described previously 36 . Briefly, after treatments, cultured u-THP-1 cells were harvested by centrifugation at 400 RCF for 5 min; cell pellets were washed twice with cold PBS, centrifuged at 400 RCF for 5 min, and the supernatant was removed. Cell pellets were immediately mixed with 100 μL of radio immunoprecipitation assay (RIPA) buffer containing EDTA, protease and phosphatase inhibitors. Statistical analysis. GraphPad Prism (Prism 9, GraphPad Software Inc., La Jolla, CA, USA) was used for statistical analysis 42 . Each experiment was performed in triplicate. Data were expressed as mean ± SD. For comparisons between two groups, a t-test was used. For multiple groups, either one-way or two-way ANOVA followed by a post-hoc test (Tukey test) was used, depending on the experimental design. In all cases, p ≤ 0.05 was considered significant. Expression of SLAMF mRNAs and proteins in the human THP-1 monocytes, THP-1 macrophages and Jurkat cells. We first examined the relative expression of SLAMF mRNAs in human THP-1 cells. As shown in Fig. 1A, SLAMF4 and SLAMF5 were the most highly expressed SLAMF mRNAs in u-THP-1, while other SLAMF members' expressions were minimal. Interestingly, in d-THP-1, SLAMF5 and SLAMF7 were highly expressed, followed by SLAMF4 as the third most highly expressed SLAMF member (Fig. 1A). To further elucidate the specificity of SLAMF protein expression in different immune cells, the Jurkat cell, a lymphocyte cell model, was also queried for SLAMF as a comparison to macrophage/monocyte. In contrast to THP-1 cells, in Jurkat cells (Fig. 1B), SLAMF5 appeared to be the most highly expressed SLAMF protein. In contrast to THP-1 macrophage, SLAMF7 mRNA expression was minimal in Jurkat cells. Also, a significant up-regulation of SLAMF5 and SLAMF7, but not SLAMF4, mRNA expression levels was detected during the PMA-induced differentiation of u-THP-1 to d-THP-1. The up-regulated mRNA expression levels of SLAMF5 and 7 were confirmed at the protein level (Fig. 2). In contrast to SLAMF5, two immune-detectable SLAMF7 bands were observed, at 37 kDa and 50 kDa, representing the original molecular weight of SLAMF7 and its presumable glycosylated form 38 , respectively. PMA induction of THP-1 cells led to a time-dependent increase in both immune-detectable SLAMF5 and SLAMF7 proteins (Fig. 2). Effect of TLRs activation on SLAMF mRNA levels in monocytes and macrophages. We next determined the effects of TLR activation on SLAMFs given the role of TLR in the host's immune responses. SLAMF protein mRNA expression levels were evaluated when u-THP-1 and d-THP-1 cells were activated with specific TLR ligands. As shown in Fig. 3, only SLAMF7 was significantly induced by all four TLR stimuli, including LPS, Flagellin, Pam2CSK4 and Pam3CSK4. Compared to SLAMF7 mRNA expression levels, other SLAMF protein expression levels were minimal. Since we previously published work on the expression, regulation and immune-related roles of SLAMF5 36 , and given the uniqueness of SLAMF7 expression and regulation patterns, we focused on SLAMF7 for follow-up analysis. Characterization of LPS responses of SLAMF7 in monocyte (u-THP-1) and macrophage (d-THP-1). CD14 dependency of LPS-induced increase in SLAMF7 mRNA levels. Because TLR4 can be activated through a CD14-dependent or independent pathway, we further characterized the CD14 dependency of the TLR4 ligand LPS induction of SLAMF7 mRNA levels. The pathway of LPS-induced SLAMF7 mRNA expression in u-THP-1 and d-THP-1 was evaluated using a CD14 blocking antibody.
Blocking CD14 in both u-THP-1 and d-THP-1 significantly reduced LPS-induced SLAMF7 mRNA expression levels (Fig. 4A). Time-dependent effects of LPS on SLAMF7 mRNA levels. The mRNA expression levels of SLAMF7 in u-THP-1 cells stimulated with LPS peaked at 4 h and then decreased. In d-THP-1 cells, mRNA expression levels of SLAMF7 stimulated with LPS peaked at 6 h and then decreased (Fig. 4B). Concentration dependent effect of LPS on SLAMF7 mRNA levels. LPS induced a concentration dependent increase in SLAMF7 mRNA expression levels in both u-THP-1 and d-THP-1 cells (Fig. 4C). SLAMF7 mRNA levels in both u-THP-1 and d-THP-1 cells were significantly induced by LPS at 1 ng/ml. LPS stimulation affects SLAMF7 at the protein level. We also sought to confirm LPS stimulation of SLAMF7 at the protein level. TLR activation significantly induced SLAMF7 protein expression (Fig. 5). Effects of SLAMF7 siRNA knock-down in d-THP-1 on LPS-induced cytokines. The unique expression and regulation of SLAMF7 in macrophage led us to evaluate the functional roles of SLAMF7 in monocyte/macrophage. Using an siRNA knockdown experiment, SLAMF7 knock-down in d-THP-1 significantly reduced SLAMF7 mRNA expression levels with and without LPS stimulation (Fig. 6A). In the presence of LPS, SLAMF7 knock-down in d-THP-1 significantly attenuated LPS-induced mRNA expression levels of pro-inflammatory cytokines including IL-1β, IL-6, CCL2 and TNF-α compared to the non-target control (NC) (Fig. 6B-E). Identifying a SLAMF7 activating antibody and its effect on TLR stimuli-induction of cytokines. Based on the reported Elotuzumab activating property 38,39 , we sought to identify a commercially available antibody that can activate SLAMF7. We found that addition of the SLAMF7 antibody (Cat#:NBP2-45868), but not the IgG 1 control antibody (Cat#:sc-52003), enhanced up-regulation of pro-inflammatory cytokines including IL-1β, IL-6 and TNF-α in u-THP-1 cells by the TLR ligands LPS, Flagellin, Pam2CSK4 or Pam3CSK4 (Fig. 7, S1 and S3). Similarly, when d-THP-1 was activated with LPS, Flagellin or Pam3CSK4 in the presence of the SLAMF7 agonist antibody, enhanced up-regulation of pro-inflammatory cytokines including IL-1β, IL-6 and TNF-α was also observed. In contrast, activation with Pam2CSK4 in the presence of the SLAMF7 agonist antibody slightly lowered IL-6 and TNF-α mRNA expression levels compared to its IgG 1 control (Fig. S1-S3). SLAMF7's effect on macrophage (d-THP-1) phagocytosis activity. In addition to cytokine production, we also asked if SLAMF7 played a role in the phagocytotic activity of macrophages. Using the SLAMF7 agonist antibody described above, which enhanced cytokine production, we did not observe an effect of the antibody on d-THP-1 phagocytosis activity as compared to IgG 1 or a positive control (Fig. 8). Discussion In the present study, several important pieces of information related to SLAMF proteins were identified, which included: 1) identification of crosstalk between SLAMF7 and several selected TLRs in the monocyte/macrophage; 2) SLAMF7 being the major SLAMF protein regulated in monocyte/macrophage in terms of fold changes; 3) different types of immune cells expressing different sets of SLAMF proteins, which may indicate functional selectivity; 4) differentiation and TLR stimuli regulating specific SLAMF members in monocyte/macrophage; and 5) functionally, SLAMF7 activation enhancing selected TLR-induced cytokine responses but not phagocytosis.
The SLAMF receptors are important immunomodulatory receptors in immune cells. These receptors are known to have a wide spectrum of roles including regulation of cytotoxicity, humoral immunity, autoimmunity, cell survival, lymphocyte development, cell adhesion, invasiveness, and production of cytokines [6-25],43,44 . Moreover, due to the multiple members in the SLAMF receptor family, their expression patterns in immune cells such as monocytes and macrophages remained unclear but may be important contributors to their function. Our results indicated that SLAMF4 and 5 appeared to be the dominant SLAMF in monocytes, which was consistent with previously published reports 24,25 . These results suggested that SLAMF4 and 5 may play a relatively more central role than other SLAMF in mediating the monocyte's function. Differentiation of monocyte to macrophage significantly altered the expression profile of SLAMF proteins; most notably, an induction of SLAMF7 expression. Comparing human T lymphocyte Jurkat cells to THP-1 monocyte/macrophage (Fig. 1B), SLAMF7 showed unique expression in the macrophage, suggesting a specific functional role for this protein in immune responses of macrophages. The biological efficacies specific to SLAMF7 were confirmed from experiments using siRNA against SLAMF7. Transient transfection with SLAMF7 siRNA to knock down SLAMF7 expression in d-THP-1 significantly reduced LPS-activated mRNA expression levels of pro-inflammatory cytokines including IL-1β, IL-6, CCL2 and TNF-α (Fig. 6A-E). These data support that SLAMF7 plays an enhancing role in mediating the effects of inflammatory stimuli in the macrophage. Unfortunately, the efficiency of siRNA transfection for u-THP-1 was too low (data not shown); therefore, experiments in u-THP-1 could not be conducted. These data are consistent with reports that u-THP-1 cells are notoriously difficult to transfect 45 . Importantly, we identified a SLAMF7 antibody that functions as an agonist antibody. The SLAMF7 antibody (Cat#NBP2-45868) enhanced up-regulation of pro-inflammatory cytokines including IL-1β, IL-6 and TNF-α mRNA by the TLR ligands LPS (TLR4), Flagellin (TLR5) and Pam3CSK4 (TLR1/2) (Fig. 7, S1-S3) in both d- and u-THP-1 cells. These data are consistent with a recent report by Simmons et al. 41 and provide independent verification of an enhanced immune response by SLAMF7 engagement. One exception from our study was observed in Pam2CSK4 (TLR2/6 ligand)-activated d-THP-1 (Fig. S1-S3). In this case, the SLAMF7 agonist antibody and Pam2CSK4 did not synergistically induce IL-6, CCL2 and TNF-α. These results suggested that SLAMF7 may only interact with selective TLRs to enhance the immune responses elicited by those TLRs, but more studies are needed to further elucidate the specific differences. In addition, the chemokine CCL2 showed slightly different mRNA expression levels compared to the other cytokines (Fig.
S2), suggesting that selective modulation of specific cytokine/chemokine pathways by SLAMF7 may also exist. Overall, our results from siRNA and the agonist antibody supported involvement of SLAMF7 in the regulation of selected TLR-induced proinflammatory cytokines/chemokines in u-THP-1 and d-THP-1. This cross talk between SLAMF7 and selected TLRs allows the monocyte/macrophage to mount a concerted, robust immune response during infection. Of note, the agonist antibody enhanced the selected TLR-induction of cytokines (Fig. 7) only in the presence of TLR ligands. Hence, SLAMF7 engagement by itself is not capable of directly inducing cytokines; rather, SLAMF7 appeared to play a supporting role in the process. The interaction between TLR and SLAMF7 appeared to occur through a CD14-related pathway, as blocking of CD14 inhibited the induction by LPS (Fig. 4A). We also evaluated SLAMF7 biological efficacy in phagocytosis, as phagocytosis is a major mechanism by which monocytes/macrophages remove pathogens and cell debris 46 . However, we did not observe effects of the agonist antibody on phagocytosis. These results and the cytokine analysis supported a cross talk between the SLAMF7 and TLR pathways that regulates cytokine production but not phagocytosis. Several other SLAMF were also upregulated during differentiation (Fig. 1A). Therefore, differentiation is apparently the main driver for regulating the SLAMF pathway. Although we have not tested all up-regulated SLAMF pathways, we suspect the SLAMF pathway provides overall promotional effects on the host's immune response through interaction with TLR-associated pathways. SLAMF1 and 4 appeared not to be regulated by differentiation or TLR stimuli and therefore may not contribute to enhanced immune responses, such as cytokine production, in the macrophage. However, additional studies are needed to further elucidate the specific roles of the other SLAMF not examined in this study in monocyte/macrophage. Conclusion Our study indicated that SLAMF receptors are differentially expressed in immune cells. We also provided data to support that differentiation from monocytes to macrophages leads to up-regulation of SLAMF7, which promotes induction of cytokines by selected TLR ligands in monocytes and macrophages. Overall, our results indicate that crosstalk and a coordinated response of selected TLR and SLAMF7-mediated pathways against immune stimuli exist in the monocyte/macrophage. Data availability The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request. Received: 3 March 2023; Accepted: 14 June 2023. Figure 2. Western blot analysis of SLAMF5 and SLAMF7 protein expression in monocytes (u-THP-1) and macrophages (d-THP-1). Cells were cultured and SLAMF protein levels were determined by Western blot analysis. The proteins were detected and quantified using the LICOR ODYSSEY® CLx (LiCOR, Lincoln, NE, USA) Infrared Imager according to the manufacturer's procedure 37 . Results are expressed as mean ± SD (n = 3). Bars with an asterisk indicate a significant difference relative to u-THP-1 monocytes. Figure 8. Effects of SLAMF7 antibody on phagocytosis activity of macrophage (d-THP-1). U-THP-1 cells were differentiated to d-THP-1 cells and phagocytosis activity was determined in the presence or absence of control antibody or SLAMF7 antibody. B: Blank (without E. coli particle) or negative control, and Ctrl: E. coli particles were used as a phagocytosis pathogen or positive control. +IgG: with isotype IgG control added; +Ab-SLAMF7: with antibody against SLAMF7 added. Results are expressed as mean ± SD (n = 3). Bars with different letters indicate a significant difference at p ≤ 0.05.
2023-07-09T06:17:33.137Z
2023-07-07T00:00:00.000
{ "year": 2023, "sha1": "cc95c9ed0a55b54651814438a3c5801908370cc7", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3429bd75d0bcbb731a18b35780b163cd8faa2bb0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
227258792
pes2o/s2orc
v3-fos-license
Application of Dynamic Fragmentation Methods in Multimedia Databases: A Review Fragmentation is a design technique widely used in multimedia databases, because it produces substantial benefits in reducing response times, causing lower execution costs in each operation performed. Multimedia databases include data whose main characteristic is their large size; therefore, database administrators face a challenge of great importance, since they must contemplate the different qualities of non-trivial data. These databases undergo changes in their access patterns over time. Different fragmentation techniques presented in related studies show adequate workflows; however, some do not contemplate changes in access patterns. This paper aims to provide an in-depth review of the literature related to dynamic fragmentation of multimedia databases, to identify the main challenges, technologies employed, types of fragmentation used, and characteristics of the cost model. This review provides valuable information for database administrators by showing essential characteristics to perform proper fragmentation and to improve the performance of fragmentation schemes. The reduction of costs in fragmentation methods is one of the most desired properties. To fulfill this objective, the works include cost models covering different qualities. In this analysis, a set of characteristics used in the cost models of each work is presented to facilitate the creation of a new cost model including the most used qualities. In addition, the different data sets or benchmarks used in the testing stage of each work analyzed are presented. Introduction Database fragmentation is a process for reducing irrelevant data accesses by grouping data frequently accessed together in dedicated segments [1]. The time consumed during the execution of queries in a parallel and distributed environment is highly affected by the form in which the tables comprising a database have been fragmented. The classical methods of fragmentation in a distributed database system help to make information retrieval faster and with smaller calculation efforts [2][3][4][5]. Three main fragmentation techniques have been defined for relational databases: horizontal, vertical, and hybrid. Static fragmentation approaches, however, present several problems: 1. The database administrator (DBA) has to observe the system for a significant amount of time before the partitioning operation can take place, until the probabilities of queries accessing certain database elements and their frequencies are collected; this is called an analysis stage. 2. Even then, after the partitioning process is completed, nothing guarantees that the real trends in queries and data have been discovered, so the partitioning scheme may not be good. In this case, the database users may experience a very long query response time. 3. In some dynamic (e.g., multimedia) applications, queries tend to change over time and a partitioning scheme is implemented to optimize the response time for one particular set of queries. Thus, if the queries or their relative frequencies change, the fragmentation result may no longer be adequate. 4. In static partitioning methods, refragmentation is a heavy task and can only be performed manually when the system is idle. In contrast, in dynamic vertical partitioning, attributes are relocated if it is detected that the current vertical partitioning scheme has become inadequate due to query information changes [7].
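To make the distinction between the techniques concrete before the review itself, a minimal Python sketch is given below; the example relation, the selection predicate, and the attribute groups are hypothetical and are not taken from any cited work (a hybrid scheme simply applies one technique to the fragments produced by the other):

```python
# Illustrative only: a relation as a list of rows, fragmented horizontally
# (by a selection predicate) and vertically (by attribute groups).
equipment = [
    {"id": 1, "type": "excavator", "price": 95000,  "photo": "exc1.jpg"},
    {"id": 2, "type": "crane",     "price": 210000, "photo": "cr2.jpg"},
    {"id": 3, "type": "excavator", "price": 87000,  "photo": "exc3.jpg"},
]

def horizontal_fragment(rows, predicate):
    """Primary horizontal fragmentation: tuples satisfying the predicate form
    one fragment, the remaining tuples form its complement."""
    selected = [r for r in rows if predicate(r)]
    complement = [r for r in rows if not predicate(r)]
    return selected, complement

def vertical_fragment(rows, attribute_groups, key="id"):
    """Vertical fragmentation: each fragment keeps the primary key plus one
    attribute group, so the relation can be rebuilt by joining on the key."""
    return [
        [{key: r[key], **{a: r[a] for a in group}} for r in rows]
        for group in attribute_groups
    ]

cheap, expensive = horizontal_fragment(equipment, lambda r: r["price"] < 100000)
alnum, media = vertical_fragment(equipment, [["type", "price"], ["photo"]])

print(len(cheap), len(expensive))  # 2 1
print(alnum[0])                    # {'id': 1, 'type': 'excavator', 'price': 95000}
```

The point of the vertical split in this toy example is the one that recurs throughout the surveyed works: queries that only touch small alphanumeric attributes no longer have to read the large multimedia objects stored alongside them.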
Multimedia data are very important in many application areas such as medicine [8][9][10], cartography [11][12][13], meteorology [14][15][16], security [17][18][19], among others. Automatic extraction, classification, and manipulation of multimedia content are critical to efficient multimedia data management. Content-based media data retrieval methods improve the accuracy of database searches. These methods are necessary when the textual annotations are missing or incomplete. In addition, content-based methods potentially improve retrieval accuracy even when textual annotations are present, by giving additional insight into collections of multimedia data [20,21]. This work provides an in-depth review of works related to database fragmentation, with special emphasis on methods that focus on multimedia data, that perform dynamic fragmentation, and that contemplate CBIR (Content-Based Image Retrieval). Furthermore, different details of each work analyzed are shown, such as whether it is complete enough for full implementation, whether the characteristics of the method make it easy to implement, the main qualities of the cost model used, the technologies used in the implementation stage, and the benchmarks used in the evaluation. This article stands out for being the only one to contemplate the specific characteristics mentioned above, deeply reviewing the literature of dynamic fragmentation in multimedia databases to identify areas of opportunity aimed at clear recommendations for the creation of new methods. This paper is structured as follows. Section 2 describes the research methodology used in this study. Section 3 presents the classification and analysis of the works obtained in the search stage. Section 4 gives a classification of fragmentation papers considering six interesting criteria: (1) year of publication; (2) publisher; (3) type of fragmentation; (4) used costs; (5) Database Management System; and (6) used benchmarks. Section 5 provides a description of the set of works by category. A discussion is included in Section 6. Finally, Section 7 presents conclusions and future directions. Research Methodology Dynamic fragmentation is a highly addressed topic in the literature of works related to fragmentation focused on improving the performance of distributed databases. To give an approach that contributes to the main characteristics of this work, it is necessary to know the related works. In Figure 1, all the stages of the proposed methodology are shown. The search for works was carried out following this methodology. Classification Method 80 articles were obtained in the search stage and they were analyzed according to the eight criteria presented in Figure 1. Table 1 shows the description of the methods, exposing the type of fragmentation used, whether it presents a cost model, whether it is dynamic, whether it considers CBIR, and, finally, the cost equation used to obtain the fragmentation scheme. Classification of Research Papers After having obtained the works of the search methodology, an analysis was carried out using graphs and tables that show a clear guide for future research.
Figure 2 shows a bar graph with the distribution of the articles analyzed according to the year of publication. It is observed that the largest number of articles is concentrated in 2019. Of the 9 articles, 3 are from Springer ([33,34,66]), the publisher with the highest number of articles, as can be observed in Figures 3 and 4; 2 belong to ACM ([76,78]); 2 to IEEE ([35,96]); and 2 pertain to the category "Others" ([95,97]). The "Other" category shows the number of articles that use a type of fragmentation that is not any of the other three, such as the fragmentation of documents in [82][83][84][85][86][87], fragmentation of videos in [88], grid fragmentation in [95], among others. Of the papers that addressed horizontal fragmentation, 4 also included CBIR ([1,40,41,49]). Table 2 shows the costs that were found throughout the analysis and the articles that use them. Different articles contemplate more than one type of cost; for this reason, the total in the second column is not the total number of articles that were analyzed. The costs of transportation, access, storage, and execution are the most used, with a total of 63 mentions in the articles. Figure 6 shows the number of articles per DBMS. Most of the works do not report or do not use a DBMS to carry out their research. It is observed that the most used systems are Oracle, MongoDB, PostgreSQL, and Cassandra.

Table 3 shows the most used benchmarks; as before, the number in the right column does not reflect the total number of articles, since many articles use more than one benchmark. The TPC-H benchmark is the most widely used. One way to understand the different approaches of the works obtained is to classify them using the categories shown in Figure 7. Four categories are proposed to present the works obtained. The first category contains the methods that focus on multimedia databases; the second category covers all the approaches that perform a dynamic fragmentation and that do not focus on multimedia data; the third category contains all the works that consider NoSQL databases, that do not perform dynamic fragmentation and that do not take into account multimedia elements; the last category shows the works that are not included in any of the previous categories. Each category is subdivided by type of fragmentation, i.e., horizontal, vertical, and hybrid. The third category is not subdivided due to the nature of NoSQL databases. NoSQL technology has high availability and high scalability, which provides new methods for the storage and management of unstructured data. NoSQL technology abandons the paradigm constraints of the relational model, can store data of different types, and has high scalability characteristics [61]. These features make NoSQL databases attractive for storing and managing multimedia data. The two categories shown at the top of Figure 7 are the characteristics on which this work mainly focuses. The set of works for each category is presented in the next section.

Description of the Set of Works by Category This section details each article in each of the categories, mainly presenting implementation characteristics, the type of fragmentation, and the results obtained. Fragmentation of Multimedia Databases Currently, multimedia applications are highly available, such as audio/video on demand, digital libraries, electronic catalogs, among others. The rapid development of multimedia applications has created a huge volume of multimedia data and it is exponentially incremented from time to time. A multimedia database is crucial in these applications to provide efficient data retrieval [55]. In this section, the works that address fragmentation focused on multimedia databases are grouped.

Horizontal Fragmentation of Multimedia Databases Several authors focus on horizontal fragmentation for multimedia databases; however, all propose different ways to carry out this task. In [39], Ma et al. analyzed fragmentation and allocation in the context of databases with complex values. The main contribution of the authors was a heuristic approach to fragmentation and allocation. The implementation was carried out using a database scheme which was populated by benchmark 007; subsequently, four sites and 30 queries were considered, of which 20% were frequently used for the most important transactions. The result of the experiment validated the proposed approach, since the cost of transportation was minimized and, in this way, the performance of the database schemes was improved. In [40,41], the primary horizontal fragmentation of textually annotated multimedia data was addressed. In these works, the problem of identifying the semantic implications between textually based multimedia predicates was observed, and it was proposed to integrate knowledge bases as a framework to evaluate the semantic affinity between values of predicates and operators. The implementation consisted of varying the number of predicates and the number of concepts to obtain the execution time using the semantic base of predicates. Operator implications are identified using a specific knowledge-based operator, developed in [41]. In addition, a prototype was presented to test the approach used, which demonstrated that the proposed method has polynomial complexity. In [49], Fasolin et al. proposed an efficient approach to running conjunctive queries on complex big data along with conventional data. A horizontal fragmentation was performed according to the criteria frequently used in the query predicates. This strategy was applied in CoPhIR, a collection of more than 106 million images along with its related conventional data. The experimental results showed a considerable increase in performance with the proposed approach for queries with conventional and similarity-based predicates, compared to the use of a single metric index for the entire content of the database. Rodríguez-Mazahua et al. [55] developed a horizontal fragmentation method for multimedia databases which is based on an agglomerative hierarchical clustering algorithm. The main advantage of this method is that it does not use affinity to create a horizontal fragmentation scheme. In this work, the multimedia horizontal fragmentation algorithm was described using an example of equipment management in a machinery sales company, and the cost model was presented in detail. The algorithm evaluation was carried out and it was shown that it exceeds the performance to create the fragmentation scheme in most cases. Hakka culture is an important part of the culture of southern China. NoSQL technology has high availability and high scalability, and this provides new methods for data storage and management of Hakka culture. In [61], Wu, Chen & Jiang managed unstructured data of the Hakka culture using MongoDB.
Data from the Hakka culture of western Fujian were taken as an example and the prototype data management system was built. The authors concluded by mentioning that a new method was presented to improve the flexibility and efficiency of managing unstructured and heterogeneous data from multiple sources. Vertical Fragmentation for Multimedia Databases Fung, Leung & Li [24] described the development of a video eLearning database system. The video eLearning database provides a framework for temporal modeling to describe video eLearning data and supports the distribution of data by applying vertical class fragmentation techniques. Vertical class fragmentation techniques were applied over 4DIS (Four-Dimensional Information System) as a measure for efficient query execution. The dynamic acquisition of eLearning multimedia video on the internet was presented and a detailed cost model was developed for the execution of queries through vertical class fragmentation. The effectiveness of this approach was demonstrated in the context of the video eLearning database system using the 4DIS modeling framework. The authors in [26] presented a vertical fragmentation algorithm for distributed multimedia databases (MAVP, Adaptable Multimedia Vertical Partitioning) that takes into account the size of the multimedia objects to generate an optimal vertical fragmentation scheme. MAVP minimizes the number of irrelevant data accesses and the transportation cost of queries in distributed multimedia databases to achieve efficient retrieval of multimedia objects. This paper presented the development of a cost model that considers the cost of general query processing in a distributed multimedia environment. The experimental evaluation showed that it outperformed AVP [98], an algorithm with the same approach. The vast majority of vertical fragmentation algorithms are static, that is, they optimize vertical fragmentation schemes using a workload, but if it undergoes changes, the schema will also suffer, resulting in long query response times. In [32], Rodríguez-Mazahua et al. proposed a set of active rules to perform dynamic vertical fragmentation on multimedia databases. Active rules are implemented in DYMOND (DYnamic Multimedia ONline Distribution), which is a system based on active rules for dynamic vertical fragmentation of multimedia databases. In this work, a case study of a machinery sales company was presented, which has a multimedia database, this database contains a table called equipment and contains 100 tuples with alphanumeric and multimedia attributes. Vertical fragmentation was performed applying the active rules on this database and a shorter response time was obtained when the first fragmentation was performed and also when the database was refragmented. Hybrid Fragmentation for Multimedia Databases In [67], Chbeir and Laurent presented a formal approach to identify implications of predicates and queries to efficiently fragment multimedia data. The authors in this work focused on the fragmentation of databases with the objective of reducing access to irrelevant data by grouping frequently used data. The proposed approach uses combined fragmentation of multimedia data based on query comparison and query equivalence. Multimedia functional dependencies take into account specific characteristics of multimedia data and are axiomatized in the same way as standard functional dependencies. 
An example of a multimedia database is presented as an implementation that considers a table called "Albums" and contains the attributes name, birth, place, genre, image, song, and clip. Fragmentation is applied to that table. Due to the increase in multimedia applications, the use of fragmentation techniques to reduce the number of pages required to answer a query is very useful. In [73], Rodríguez-Mazahua et al. presented a hybrid fragmentation method for multimedia databases that takes into account the size of the attributes and the selectivity of the predicates to create hybrid fragmentation schemes. The proposed algorithm searches for the most suitable vertical partition scheme to later obtain the hybrid scheme taking into account the cost model described in the same work. To carry out the experiments, the same scenario was used as in [32]. As a result, the hybrid partition method for multimedia databases was obtained, which reduces access to irrelevant data, taking into account the size and selectivity of each attribute, and also presented a cost model for distributed multimedia databases. In [74], the authors created an index partition algorithm that addressed the specific properties of a distributed system: load balancing between nodes, redundancy in node failure, and efficient use of nodes under concurrent queries. The experiments focused on measuring the effectiveness of partition size balance and recovery quality. The results show that B-KSVD (Balanced K-means-based Single Value Decomposition) better balances partition sizes, compared to k-means and KSVD. In conclusion, it was mentioned that the requirements to create complete representations with redundant document indexing were formalized, where partitions contain overlapping data subsets. Vogt, Stiemer & Schuldt [77] presented Polypheny-DB's vision of a distributed polystore system that seamlessly combines replication and partitioning with local polystore and can dynamically adapt all parts of the system when workload changes. The basic components for both parts of the system were presented and the open challenges towards the implementation of the Polypheny-DB vision were shown. Different domains take care of managing massive data volumes and thousands of OLTP (OnLine Transaction Processing) transactions per second. Traditional relational databases cannot cope with these requirements. NewSQL is a new generation of databases that provides high scalability, availability, and support of ACID properties (Atomicity, Consistency, Isolation, Durability). Schreiner et al. [78] proposed a hybrid fragmentation approach for NewSQL databases that allows the user to define vertical and horizontal data partitions. The experimental evaluation compared the hybrid version of VoltDB with standard VoltDB. The results highlight that the strategy shown increased the number of transactions in a single site from 37% to 76%, maintaining the same response time. Other Types of Fragmentation for Multimedia Databases Different papers address other types of fragmentation for multimedia databases. In [82], the authors carry out hierarchical multimedia fragmentation for XML (Extensible Markup Language) documents. The results showed that the textual information of the elements must be different and returning multimedia fragments based on the hierarchical relationships between the common elements and the multimedia elements allows obtaining good results. 
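Because cost models recur throughout the horizontal, vertical, and hybrid methods surveyed above, the following toy sketch may help fix the idea. It is purely illustrative: the attribute sizes, query frequencies, and candidate schemes are hypothetical, and the real cost models in the cited works additionally weigh transport, storage, migration, and site topology.

```python
# Toy cost model: score candidate vertical fragmentation schemes by the amount
# of irrelevant data a query workload would read. All numbers are hypothetical.

attribute_size = {"type": 0.1, "price": 0.1, "photo": 900.0, "video": 4000.0}  # KB

# Each query lists the attributes it needs and how often it runs per day.
workload = [
    {"attrs": {"type", "price"}, "freq": 500},
    {"attrs": {"photo"},         "freq": 40},
    {"attrs": {"type", "video"}, "freq": 5},
]

def irrelevant_access_cost(scheme, workload, sizes):
    """Sum, over all queries, the size of attributes that are read only
    because they share a fragment with the attributes the query needs."""
    cost = 0.0
    for q in workload:
        for fragment in scheme:
            if q["attrs"] & set(fragment):               # fragment is touched
                irrelevant = set(fragment) - q["attrs"]  # read but not needed
                cost += q["freq"] * sum(sizes[a] for a in irrelevant)
    return cost

scheme_a = [["type", "price", "photo", "video"]]       # unfragmented relation
scheme_b = [["type", "price"], ["photo"], ["video"]]   # multimedia split off

print(irrelevant_access_cost(scheme_a, workload, attribute_size))
print(irrelevant_access_cost(scheme_b, workload, attribute_size))  # far lower
```

A dynamic method, in this framing, is simply one that re-evaluates such a score as the workload changes and triggers refragmentation when a cheaper scheme appears.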
Torjmen-Khemakhem, Pinel-Sauvagnat & Boughanem [84] studied the impact, in terms of effectiveness, of the position of the text in relation to the sought multimedia objects. The authors' general approach was based on two steps: first, it retrieves XML elements containing multimedia objects, and then it explores the surrounding information to retrieve relevant multimedia fragments. Different experiments were carried out in the context of the INEX framework (Initiative for the Evaluation of XML Retrieval), which contains more than 660,000 XML documents. The results showed that structural evidence is of great interest for adjusting the importance of the textual context in multimedia retrieval. In [86], Santos & Masala proposed a novel approach that combines a pattern fragmentation technique with a NoSQL database to organize and manage fragments. The experiments were carried out at different cloud service providers using three types of files (docx, jpg, and pdf), each 100 KB in size. The results showed that fragmentation with random patterns is faster than other approaches. Mourão & Magalhães [87] described how sparse hashes can help build an index in which partitions are based on the feature vector distribution in the original space, creating better distribution options for high-dimensional feature vectors. Different tests of the approach were performed on a commercial cloud service. It was tested on a billion-vector dataset, showing that this approach had low fragmentation overhead, achieved balanced document and query distribution, handled concurrent queries effectively, and showed little degradation when nodes failed. In [88], Mettes et al. proposed to fragment videos by recognizing events within them and, through this technique, improved the retrieval of sub-events. The experiments were carried out on a data set called THUMOS'14, which contained 1010 videos. The experimental evaluation showed the effectiveness of the encoding for event detection as well as its natural complementarity with the global aggregation of semantic concepts. Lu et al. [94] designed a new index structure called Dynamic Partition Forest (DPF) to hierarchically divide high-collision areas with dynamic hashing, so that the index adapts itself to various data distributions. The results of the experiment showed that DPF increases accuracy by 3% to 5% within the same period compared to DPF without the multi-step search. Experimental comparisons with two other leading-edge methods on three popular data sets show that DPF is 3.2 to 9 times faster to achieve the same precision, with a decrease in index space from 17% to 78%. The authors in [95] analyzed DICOM (Digital Imaging and Communications in Medicine) data to find the optimal hybrid data configuration. NSGA-G (Non-dominated Sorting Genetic Algorithm-Grid), based on grid fragmentation, was proposed to improve queries over hybrid DICOM data. Experiments on DICOM files on hybrid storage showed that NSGA-G provides the best processing time. The authors in [97] presented a cultural Big Data repository as an efficient way to store and retrieve cultural Big Data. The proposed repository is highly scalable and provides high-performance integrated methods for Big Data analysis of cultural heritage. The experimental results show that the proposed repository outperforms existing approaches in terms of space, as well as the storage and retrieval time of cultural Big Data.
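A recurring idea in the vertical-fragmentation works surveyed above is that attributes frequently accessed together by the same queries should be grouped into the same fragment so that queries read less irrelevant data. The following sketch makes that idea concrete with a toy attribute-affinity grouping in Python; the table, query workload, and threshold are hypothetical, and the code is not the algorithm of any specific work cited above (real methods typically also replicate the primary key in every fragment and rely on full cost models).

```python
from itertools import combinations

# Toy workload over a hypothetical "Albums"-like table: each query lists the
# attributes it touches and its access frequency (all values are invented).
queries = [
    ({"id", "name", "genre"}, 30),    # metadata browsing
    ({"name", "genre"}, 25),          # search by name and genre
    ({"image", "song", "clip"}, 12),  # media streaming
    ({"id", "image"}, 4),             # occasional query crossing both groups
]
attributes = sorted(set().union(*(attrs for attrs, _ in queries)))

# Affinity of two attributes = total frequency of the queries that use both.
affinity = {pair: 0 for pair in combinations(attributes, 2)}
for attrs, freq in queries:
    for pair in combinations(sorted(attrs), 2):
        affinity[pair] += freq

# Greedy grouping: an attribute joins an existing fragment if its affinity
# with any member of that fragment reaches a (hypothetical) threshold.
THRESHOLD = 10
fragments = []
for attr in attributes:
    for frag in fragments:
        if any(affinity[tuple(sorted((attr, other)))] >= THRESHOLD for other in frag):
            frag.add(attr)
            break
    else:
        fragments.append({attr})

print(fragments)
# e.g. [{'clip', 'image', 'song'}, {'genre', 'id', 'name'}] (set order may vary):
# the large multimedia attributes are separated from the small alphanumeric ones.
```

As the dynamic approaches reviewed next emphasize, the workload, and with it the affinities, changes over time, which is what motivates recomputing such groupings online.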
Dynamic Fragmentation Dynamic fragmentation, as detailed in Section 1, improves the performance of databases, solving different problems of a static approach. This section covers the works that focus on performing dynamic fragmentation and are not oriented to multimedia databases. Dynamic Horizontal Fragmentation Vazquez [2] presented a method of Dynamic Virtual Fragmentation. The proposed method was tested and implemented on a parallel database server running on a supercomputer with shared-nothing architecture. By implementing the proposed method, a performance improvement in information retrieval was achieved for queries whose selection criteria did not contain any attribute used by the horizontal fragmentation of the table. In [38], the authors addressed the problem of dynamic data reallocation in a distributed, fragmented database with changing access patterns. The algorithm was implemented on the database of the Federal Electricity Commission, which had changeable access patterns. As a result, excellent database performance was observed under the proposed approach. A decentralized approach to fragmentation and dynamic table allocation was proposed in [43] for distributed database systems, based on observation of site access patterns to tables. This approach, called DYFRAM, was evaluated in three stages. The first stage was to examine the results from running a simulator on four workloads involving two sites. In the second stage of the evaluation, the same simulator was used with two dynamic workloads involving more sites. The third stage of the evaluation consisted of implementing the experiments on a distributed database system. Simulation results proved that, for typical workloads, DYFRAM significantly reduces communication costs. Abdalla & Amer [45] proposed a synchronized model of horizontal fragmentation, replication, and allocation in the context of relational databases. The experiments were carried out using a table that contained information about different employees of a company. Four sites of a distributed system were considered, together with the costs between sites and the constraints of each one. This work significantly improved the performance of distributed database systems by reducing remote access and the high costs of data transfer between sites. In [46] and [51], the authors designed DynPart and DynPartGroup, two dynamic fragmentation algorithms for continuously growing databases. The solution was validated through experimentation on real data. The dataset was taken from the Sloan Digital Sky Survey catalog, Data Release 8 (DR8). The results show that, in the case of data sets in which there is a high correlation between the new data elements, the DynPartGroup algorithm maintains very good behavior. In [47], Bellatreche et al. proposed and experimentally evaluated an incremental approach to select data warehouse fragmentation schemes using genetic algorithms. The proposed approach was evaluated using the APB1 benchmark and a data warehouse with a star schema, which was populated with more than 24 million tuples and four dimension tables. The tests were carried out on a small scale and on a large scale, which showed that, of the three algorithms compared, the one proposed by the authors, called ISGA (Incremental Approach Based on Genetic Algorithms), outperformed the other two. Derrar, Nacer & Boussaid [48] developed an approach based on statistics of data access for dynamic data fragmentation in data warehouses.
Experimental studies were performed using Oracle 10 G with the APB1 benchmark to verify the adaptability of the proposed approach. Several tests were performed considering the evaluation criteria as a threshold for the response time of an OLAP query. The results obtained were encouraging both in terms of memory space and query execution time. Herrmann, Voigt & Lehner in [52] solved the problem of online fragmentation for irregularly structured data and presented Cinderella, an autonomous online algorithm for horizontal fragmentation of irregularly structured entities in universal tables. The evaluation was carried out with the TPC-H and DBpedia benchmarks. Cinderella was implemented in PostgreSQL. Kumar & Gupta [53] designed an algorithm called TTVDCA (Threshold, Time, Volume, and Distance Constraint Algorithm) for dynamic fragment allocation in non-replicated distributed database systems. Calculations on the hypothetical database supported that the algorithm is better than other algorithms previously developed in this category and showed an improvement in overall system performance. The innovation presented by Baron & Iacob in [54] consisted in the possibility of integrating the three specific fundamental concepts of distributed databases: fragmentation, replication, and fragment allocation in an unbalanced, fully decentralized, and fully automated dynamic system. The authors mentioned that they not only innovated but also improved performance with the proposed approach which is configurable and easy to manage. Fetai, Murezzan & Schuldt [56] presented Cumulus, an adaptive data fragmentation approach that can identify characteristic access patterns of transaction mixes, determine data partitions based on these patterns, and dynamically re-fragment data if access patterns change. The approach evaluation was performed with the TPC-C benchmark and it was shown that Cumulus significantly increased overall system performance in an OLTP configuration compared to static data partitioning approaches. Abdel et al. [58] developed an improved dynamic system of distributed databases on a cloud environment, which allows them to dynamically make fragmentation, allocation, and replication decisions at runtime. Experiments were performed on a table containing data on bank user accounts. Three sites were used to apply the fragment distribution. An efficient approach to fragmentation, allocation, and replication based on access history was presented. The objective of the presented technique is to maximize local access. Serafini et al. [60] presented a new online fragmentation approach, called Clay, that supported both tree-based schemas and more complex general schemas with arbitrary foreign key relationships. To evaluate the proposed approach, Clay was integrated into a distributed main memory DBMS and it was shown that it can generate partition schemes that allow the system to achieve up to 15 times better performance and 99% lower latency than existing approaches. Zar Lwin & Naing [65] proposed an approach for the dynamic allocation of non-redundant fragments in a distributed database system. The implementation was carried out on four fully connected cloud sites, in which 10,000 queries were executed. The proposed approach was used to fragment the database and migrate each fragment to the best site. The result of the experiment showed that the performance of the four sites after the migration is slightly better than the performance of the four sites before migration. Olma et al. 
[66] presented an online partitioning and indexing scheme, along with a partitioning and indexing tuner designed for in situ query engines. An in situ query engine called Slalom was created to show the impact of the proposed design. As a result of its lightweight nature, Slalom achieves efficient query processing on raw data with minimal memory consumption. The authors showed at the implementation stage that Slalom outperforms state-of-the-art in situ engines using microbenchmarks and actual workloads. Dynamic Vertical Fragmentation Rodriguez et al. [7] discussed the improvement of DYVEP (DYnamic VErtical Partitioning), which was developed as an active system with dynamic partitioning capacity. The implementation was performed on the PostgreSQL database and the TPC-H benchmark was used. The results showed the performance improvement obtained when DYVEP is applied and highlighted the advantages of the proposed approach. Pérez et al. [22] proposed an extension of the DFAR (Dynamic Fragmentation, Allocation, and Reallocation) mathematical optimization model, which unifies the fragmentation, allocation, and dynamic migration of data in distributed database systems. The extension consisted of adding a constraint that models the storage capacity of network sites. Initial experiments revealed that when site capacity is used almost to its limit, attribute deadlocks can occur, preventing the threshold acceptance algorithm from converging. Refs. [28,30] present the SMOPD (Self-Managing Online Partitioner for Databases) and SMOPD-C (Self-Managing Online Partitioner for Distributed Databases on Cluster Computers) algorithms, which can autonomously partition a distributed database vertically on a cluster, determine when a new fragmentation is needed, and repartition the database accordingly. The works present different experiments that were carried out to study the performance of both, using the TPC-H benchmark in a cluster of computers. The results of the experiments showed that SMOPD-C and SMOPD are able to perform dynamic fragmentation with high precision and to obtain a lower cost in the execution of queries compared to other approaches. Alagiannis, Idreos & Ailamaki in [29] presented the H2O system, which offers the flexibility to support multiple storage designs and data access patterns in a single engine. Moreover, it decides on the fly, i.e., during query processing, which design is best for a specific workload. A detailed H2O analysis was presented using the SDSS (Sloan Digital Sky Survey) benchmark. It was shown that while existing systems cannot achieve maximum performance across all workloads, H2O can always match best-case performance without requiring any prior tuning or workload knowledge. The authors in [33] extended the work on GridFormation, casting the partitioning task as an RL (Reinforcement Learning) task. The proposal was experimentally validated using a database and a workload from the TPC-H benchmark and the Google Dopamine framework for deep RL. Competitive execution times were obtained while increasing the number of attributes in a table, outperforming some cutting-edge algorithms. Sharify et al. in [35] addressed the different challenges present in unstructured data through a lightweight relational database engine prototype and a flexible vertical partition algorithm that used simple heuristics to tailor the data design to the workload.
Experimental evaluation using the Nobench dataset for JSON (JavaScript Object Notation) data showed that Argo and Hyrise, next-generation vertical partition algorithms, were outperformed by 24%. Furthermore, the proposed algorithm was able to achieve around 40% better cache utilization and 35% better Translation Lookaside Buffer (TLB) utilization. Schroeder et al. [37] presented an RDF data distribution method that overcomes the shortcomings of current approaches to scale RDF storage in both data volume and query processing. This approach was implemented in a summary view of data to avoid exhaustive analysis of large data sets. As a result, the fragmentation templates were derived from data elements in an RDF structure. Additionally, an approach was provided for inserting dynamic data, even if the new data does not conform to the original RDF structure. Dynamic Hybrid Fragmentation Wang et al. [69] addressed the problem of data distribution using a general triangle model called DaWN (Data, Workload, and Nodes). Based on data and workload analysis, it presents a novel strategy called ADDS (Automatic Data Distribution Strategy) for automatic data distribution in OLTP applications. The evaluation of this approach was carried out using the TPC-C benchmark and the MySQL database. The authors compared three different strategies: Hashing, Round-Robin, and ADDS. Based on the results of a series of experiments on TPC-C data sets and transactions, the proposed approach shows effective improvements. In [70], the authors proposed SOAP (System Framework for Scheduling Online Database Repartitioning), a framework of a system for scheduling refragmentation of online databases for OLTP workloads. SOAP serves the goal of minimizing the run time for refragmentation operations while ensuring the correctness and performance of the concurrent processing of normal transactions. PostgreSQL was prototyped and a comprehensive pilot study was conducted on Amazon EC2 (Elastic Compute Cloud) to validate the significant performance benefits of SOAP. Kulba & Somov [71] analyzed the dynamic fragment allocation in distributed data processing systems. A heuristic algorithm was presented to place fragments based on the parameters of each system over time. Two main methods of data fragmentation are used: horizontal and vertical fragmentation. The authors presented a method for the dynamic redistribution of table fragments between the nodes of a distributed system, taking into account the current values of the system parameters, which can change over time. Other Dynamic Fragmentations Jindal & Dittrich [68] presented AutoStore: an autotuning data store that monitors current workload and automatically splits data into control time intervals, without human intervention. This allowed AutoStore to adapt to workloads without stopping the system. The experimental results were obtained using the TPC-H benchmark data set and showed that AutoStore outperformed the row and column designs by up to a factor of 2. Sleit et al. [80] improved the ADRW (Adaptive Distributed Request Window) algorithm to achieve dynamic fragmentation and object allocation in distributed databases. The main result of the experiments carried out is that E-ADRW (Enhance ADRW) required less storage space compared to two other algorithms shown in the analysis stage. Hung & Huang [81] proposed a new Dynamic Fragment Allocation Algorithm in Partially Replicated Allocation Scenario (DFAPR). 
To evaluate the algorithm, the OptorSim simulator was used and different sites were considered in an MMDB (Main Memory Database) cluster. OptorSim performs 100 to 5000 operations with six types of jobs. The results of the simulation demonstrate that DFAPR is suitable for the MMDB cluster because it provides better response time and maximizes local processing. Chernishev [91] presented an in-depth analysis of the prospects for an adaptive distributed relational column store. The column storage approach was shown to be a breakthrough in building an efficient self-managed database. In conclusion, different physical design options for building such a store were presented, along with three alternatives for triggering and executing the reorganization of the database. Fragmentation for NoSQL DBMS The authors in [57] used data mining and cluster analysis on the database records to apply data fragmentation. They compared the average response times of three related algorithms in a simple web application using a cloud-based NoSQL database management system. The experimental study shows that the presented techniques improve the performance of web applications. Elghamrawy in [59,63] proposed an Adaptive Rendezvous Hashing Partitioning Module (ARHPM) for Cassandra NoSQL databases. To evaluate the proposed module, Cassandra was modified to incorporate the partitioning module, and a series of experiments was carried out to validate the load balancing of the proposed module using the Yahoo Cloud Serving Benchmark. The two experiments showed that the proposed algorithm fragments faster and obtains a better scheme in terms of performance. Oonhawat & Nupairoj [64] developed a new data distribution algorithm based on fragmentation-conscious tagging to minimize the effect of the access point problem, especially in systems with heavy write requirements. In conclusion, it was mentioned that the system improved because it is less likely that a given fragment is affected by the access point problem during queries. Heni & Gargouri [85] presented a new methodological approach to big data security based on data fragmentation. The proposed development has four phases. The first phase automatically groups Big Data in the NoSQL database. The second phase consists of identifying sensitive data using a neural network. The third phase provides a layer of security through fragmentation. The last phase is intended to rebuild the fragments. The tests were carried out on the MongoDB database; however, the data set used is not detailed. In [92], the problem of using SSD (Solid-State Drive) flash drives was addressed. The authors introduced a new flow mapping scheme based on unique MongoDB features. The results obtained when evaluating the approach showed performance improvements on two benchmarks, YCSB (Yahoo! Cloud Serving Benchmark) and LinkBench. Santos, Ghita & Masala [96] introduced an approach to data security in the cloud using a random pattern fragmentation algorithm and combining it with a distributed NoSQL database. The experiments were performed on a data set with four types of data. Each file was 100 KB in size and was stored on Cassandra, which was deployed on the Microsoft Azure infrastructure. The results showed a higher performance compared to its counterparts, which implies the usability of the proposed method in cloud computing, especially in scenarios with high-speed needs and limited resources.
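As a rough illustration of the random-pattern fragmentation idea behind security-oriented approaches such as [86,96], the sketch below splits a byte payload into fixed-size chunks and deals them to fragments according to a reproducible random pattern; storing each fragment on a different node (for example, a different Cassandra instance) then means that no single node holds a readable copy. This is a toy Python sketch of the general idea with invented parameters, not the algorithm published by those authors.

```python
import random

def fragment(data: bytes, n_fragments: int, chunk_size: int = 4, seed: int = 42):
    """Split data into chunks and deal each chunk to a fragment according to
    a reproducible random pattern (the pattern acts as the reassembly key)."""
    rng = random.Random(seed)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    pattern = [rng.randrange(n_fragments) for _ in chunks]
    fragments = [bytearray() for _ in range(n_fragments)]
    for chunk, target in zip(chunks, pattern):
        fragments[target] += chunk
    return pattern, [bytes(f) for f in fragments]

def reassemble(pattern, fragments, chunk_size: int = 4) -> bytes:
    """Rebuild the original payload; both the fragments and the pattern are needed."""
    cursors = [0] * len(fragments)
    out = bytearray()
    for target in pattern:
        start = cursors[target]
        out += fragments[target][start:start + chunk_size]
        cursors[target] += chunk_size
    return bytes(out)

pattern, frags = fragment(b"confidential multimedia payload", n_fragments=3)
assert reassemble(pattern, frags) == b"confidential multimedia payload"
```

In a cloud deployment, the design choice that yields the security benefit is that the fragments, and the pattern needed to reassemble them, are kept on separate nodes or services.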
Other Types of Fragmentation This section groups together all the works that are not included in the other classifications, i.e., that do not focus on multimedia databases, do not perform dynamic fragmentation, and are not developed for NoSQL databases. Other Types of Horizontal Fragmentation Castro et al. [3] presented an analysis of different methods of fragmentation, allocation, and replication of databases, together with a web application called FRAGMENT that adopted the design technique selected in the analysis stage; that technique was chosen because it presented a method of fragmentation and replication, was applicable to a cloud environment, was easy to implement, focused on improving the performance of the operations executed on the database, provided everything necessary for its implementation, and was based on a cost model. The experiments with the TPC-E benchmark demonstrated a lower response time for the queries executed against the distributed database generated by FRAGMENT compared to a centralized database. The authors in [42] addressed derived horizontal fragmentation and allocation simultaneously in the context of the complex data model. The results demonstrated that the presented heuristic approach for derived horizontal fragmentation improved system performance over other traditional fragmentation approaches. Bellatreche et al. [44] developed a combined algorithm that handles the dependency problem between fragmentation and allocation. A new genetic solution was developed to solve this hurdle. Experiments for the genetic solution and previous work were carried out using the SSB (Star Schema Benchmark), applying it in Teradata with TD 13.10 software. The results showed that the genetic solution is faster than the previous work by 38%. A detailed description of the implementation of the proposed approach was presented, specifying that 22 queries were used, from which 50 selection predicates were obtained. Lim [50] investigated the support for elastic data fragmentation in cloud-based parallel SQL processing systems. The author proposed different algorithms and associated data organization techniques that minimized the distribution of tuples and the movement of data between nodes. The experimental evaluation demonstrated the effectiveness of the proposed methods. Islam Khan [62] presented a technique called MMF (Matrix Based Fragmentation), which can be applied both in the initial stage and in the later stages of the design of distributed databases. To evaluate the approach, fragmentation was applied to a proposed scheme related to customer management. Through experiments, it was shown that the proposed technique achieved a high success rate. For this reason, the performance of a distributed database management system is significantly improved by avoiding frequent remote access and high data transfer between sites. Other Types of Vertical Fragmentation Fung, Karlapalem, and Li [23] presented the development of a comprehensive and analytical cost model for query processing in vertically fragmented object-oriented database classes. A set of results from analytical evaluations was presented to show the effect of vertical fragmentation and to study the relationship between the projection ratio and the selectivity factor vis-à-vis sequential and index-based access. Subsequently, the implementation of an experimental prototype that allows the vertical fragmentation of classes in a commercial object-oriented database was shown to validate the cost model.
A structure of classes related to employees was fragmented, the classes are "Emp", "Dept" and "Proj", with 1000, 200, and 1000 instances respectively. The authors of [23] developed in [25] a heuristic algorithm called HCHA (Hill-Climbing Heuristic Algorithm) that takes the solution given by an affinity-based algorithm and improves it, thus reducing the total number of disk accesses. Furthermore, a second cost-based algorithm was developed and HCHA was shown to be significantly more efficient than the cost-based approach. The experiments were carried out in an object-oriented database containing employee-related data. A new algorithm called CHAC (Column-oriented Hadoop based Attribute Clustering) was proposed in [27] to design an appropriate attribute grouping algorithm to achieve optimal data processing performance in the column-oriented Hadoop environment. To perform the tests, the TPC-H benchmark was used and the algorithm was evaluated in 16 nodes of which one of them acted as the master. The database contained 30 attributes and 20 GB in size. It was observed that the results generated by the cost model are closely related to the execution time of the queries in the mapping phases since their trend is consistent, which indicates the effectiveness of the proposed cost model. Zhao et al. [31] provided a linear mixed-integer programming optimization formulation that was shown to be NP-hard. A heuristic was designed with two stages that find a solution close to the optimal solution in a fraction of the time. Optimization formulation and heuristics were extended for linear raw data processing, a scenario in which access and data extraction are performed simultaneously. For the implementation, the SDSS benchmark was used to evaluate the system with real data and the 100 most popular queries from the photoPrimary table were selected. Costa, Costa, and Santos [34] evaluated the impact of partitioning and data storage in Hive-based systems, testing different data organization strategies and verifying the efficiency of these strategies on query performance. As a conclusion, it was mentioned that the implementation of strategies based on fragmentation brings benefits both in terms of storage and in terms of query processing. Amer [36] introduced a k-means heuristic approach to vertical fragmentation and allocation. This approach was primarily focused on the early stage of DDBS (Distributed Database Systems) design. A short but effective experimental study was carried out, both on artificially created and real data sets, to demonstrate the optimization of the proposed approach against its counterparts. The results obtained supported that the work shown by the author surpassed different works in the experimentation stage. Other Types of Hybrid Fragmentation Al-Kateb et al. [72] presented the main features of the Teradata approach and explained in detail a new approach to implementing row-column storage. Subsequently, a performance study was presented that demonstrates how different fragmentation options affect query performance, and different query optimization techniques are proposed specifically applicable to fragmented tables. The deployment took place on the Teradata 6650 Enterprise Data Warehouse. The TPC-H benchmark was used with a terabyte in size. Campero Durand et al. [75] considered the feasibility of a general machine learning solution to overcome the drawbacks of more common approaches to fragmentation. 
The work on GridFormation was extended, casting the partitioning task as an RL (Reinforcement Learning) task. The proposal was validated experimentally using a database and a workload from the TPC-H benchmark and the Google Dopamine framework for deep RL. Competitive runtimes were obtained while increasing the number of attributes in a table, outperforming some cutting-edge algorithms. Schreiner et al. [76] proposed an automated approach to hybrid data fragmentation that automatically reorganizes the data based on the current workload of NewSQL databases. The authors concluded by mentioning that the proposed work is unprecedented in the literature as it is the only research that proposes a hybrid fragmentation approach offering data storage and optimization based on the access workload. H2TAP (Heterogeneous Hybrid Transactional Analytical Processing) has been developed to match the requirements for low-latency analysis of real-time operational data. Pinnecke et al. [79] observed that different solutions to many of these challenges have been proposed in isolation, but a unified engine that combines these solutions to optimize performance has not yet been developed. The authors suggested a highly flexible and adaptable data structure called GRIDTABLE to physically organize sparse but structured records in the context of H2TAP. The experiments were carried out using the CUSTOMER and LINEITEM tables of the TPC-C benchmark. Other Types of Fragmentation Cuzzocrea et al. [83] introduced an algorithm based on K-means clustering for effective and efficient support of the fragmentation of large XML data stores, which at the same time controls the number of fragments produced through the configuration of an adequate value of the K parameter. To validate the approach, the fragmentation strategy was compared with two significant adaptations of the two most common fragmentation methods for storing relational data, the PC (Predicate Construction) and AB (Affinity-based) fragmentation techniques. The experimental results showed that the proposed approach outperforms both comparison techniques from certain perspectives of the experimental analysis. Turcu et al. [90] determined optimal fragmentation schemes, which greatly aid the design of schemes when dealing with non-trivial amounts of data. The development was implemented in the DTM (Distributed Transactional Memory) system and the tests were carried out under the TPC-C, TPC-W, AuctionMark, EPinions, and ReTwis benchmarks in a distributed system of 20 physical computers with the HBase database management system. To validate the development, five benchmarks were used and, in most cases, improvements were observed both in the proportion of distributed transactions and in transactional performance. Khan et al. [93] presented a robust, fault-tolerant, and scalable cluster-wide deduplication scheme that was able to eliminate duplicate copies across the entire cluster. The evaluation showed great savings in disk space with minimal performance degradation, as well as great robustness in the event of sudden server failure. The approach was implemented in Ceph, a cluster with shared-nothing architecture. The FIO (Flexible I/O Tester) benchmark was used with 500 GB of workload. Discussion In this work, an extensive review of the state of the art in data fragmentation was carried out, obtaining the most relevant approaches, classifying them, and describing each one of them.
It is observed that the fragmentation methods proposed in [32,56,73] present simple cost models for their implementation. The works in [32,56] carry out dynamic vertical and horizontal fragmentation, respectively. Rodríguez-Mazahua et al. [73] developed a hybrid fragmentation technique that stands out for containing a simple cost model focused on multimedia databases. The three mentioned works present, for vertical, horizontal, and hybrid fragmentation, excellent ways to carry out fragmentation and receive special attention in this work, since they stand out for several characteristics: ease of implementation, provision of a cost model, completeness (the authors include all the information required to reproduce their methods), consideration of multimedia data when obtaining the fragmentation scheme, and support for dynamic fragmentation. Conclusions and Future Work A large number of works are observed in which fragmentation is used to improve the performance of databases using different techniques and addressing various issues. In this article, the research works related to dynamic fragmentation in multimedia databases were reviewed and classified into four categories: (1) For multimedia databases; (2) For dynamic databases; (3) For NoSQL databases; and (4) Others. Some categories were sub-classified by the type of fragmentation that occurs in the works. It is concluded that dynamic fragmentation for multimedia databases is a topic of great interest in the area of databases since it achieves good performance results when applied in different ways. However, current information trends point to a large amount of multimedia data, and new ways to improve the response performance of such databases are required. An in-depth analysis of all the works obtained was carried out, and it was observed that the most used benchmark in the field of fragmentation was TPC-H; Oracle, MongoDB, and PostgreSQL were the database management systems utilized in the most works; the most frequently considered cost was the transportation cost; horizontal fragmentation was the most applied technique; Springer was the publisher with the highest number of articles; and 2019 was the year in which the most works were found. The importance of this research is that it can provide researchers and practitioners with an overview of the state of the art in database fragmentation. As future work, this research will lay the foundations for the development of a Web application based on the main features obtained throughout this work and focused on dynamic fragmentation and multimedia databases.
2020-12-03T09:02:46.980Z
2020-11-29T00:00:00.000
{ "year": 2020, "sha1": "c62c1807d8d5b7b54cc8893336b1d5ff163c56f5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/e22121352", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3eebaf78f6894ec302d2863c0913f65e8c6574f0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
248070642
pes2o/s2orc
v3-fos-license
Freshwater Macrophytes: A Potential Source of Minerals and Fatty Acids for Fish, Poultry, and Livestock The freshwater macrophytes are abundant in tropical and subtropical climates. These macrophytes may be used as feed ingredients for fish and other animals. The nutritional value of twelve freshwater-cultured macrophytes was evaluated in the present study. Significantly higher crude protein (36.94–36.65%) and lipid (8.13–7.62%) were found in Lemna minor and Spirodela polyrhiza; ash content was significantly higher in Hydrilla verticillata, Wolffia globosa, and Pistia stratiotes (20.69–21.00%) compared with others. The highest levels of sodium, magnesium, chromium, and iron levels were recorded in P. stratiotes. H. verticillata was a rich source of copper, manganese, cobalt, and zinc; the contents of calcium, magnesium, strontium, and nickel were highest in S. polyrhiza. Selenium and potassium contents were higher in Salvinia natans and W. globosa, respectively. The n-6 and n-3 polyunsaturated fatty acids (PUFAs) contents were significantly higher in W. globosa and Ipomoea aquatica, respectively compared with others. Linoleic and α-linolenic acids were dominant n-6 and n-3 PUFAs. The highest value (4.04) of n-3/n-6 was found in I. aquatica. The ratio ranged from 0.61 to 2.46 in other macrophytes. This study reveals that macrophytes are rich sources of minerals, n-6 and n-3 PUFAs. INTRODUCTION Freshwater macrophytes, the fastest growing aquatic plants, are abundant in tropical and subtropical countries. They grow profusely in nutrient-rich water. These macrophytes are broadly classified into four groups based on their occurrence in the water body: the surface floating (e. g. Azolla spp.), submerged (e. g. Hydrilla spp.), emergent (e. g. Potamogeton spp.), and marginal (e. g. Ipomoea spp.). The nutritional value of freshwater macrophytes has been recognized globally. The unchecked propagation of freshwater macrophytes creates problems in many water bodies. The judicial exploitation of these nutrient-rich plants may open a new avenue from a nutritional view point for humans and animals. The leaf protein extracted from freshwater macrophytes may be used for human or non-ruminant animals (1). Macrophytes are a rich source of protein, lipid, amino acids, fatty acids, and minerals (2). The amino acid and fatty acid profiles of duckweeds Lemna minor and Spirodela polyrhiza have been documented recently (3,4). The mineral composition of macrophytes is different from the usual terrestrial vegetation. Calcium (Ca), iron (Fe), and manganese (Mn) contents are higher in aquatic plants compared with the terrestrial ones (1). Minerals are important catalysts for various biochemical reactions. These are essential components for metabolism, growth, and development and help the animals to cope with the variable environmental conditions (5). There is an optimum dose for each mineral. Low/high concentrations may affect the physiology of the organisms. Toxic minerals like arsenic (As), mercury (Hg), antimony (Sb), cadmium (Cd) etc., are required by the body in little amounts, whereas excess levels of useful minerals like, sodium (Na), potassium (K), magnesium (Mg), Ca, Fe etc., may be harmful (5). Dietary inclusions of polyunsaturated fatty acids (PUFAs) have several health benefits for humans and other animals. The study of the profiles of fatty acids of feed ingredients ensures the quality of diets. 
Fish are unable to synthesize two essential fatty acids like n-6 (derived from linoleic acid, LA) and n-3 (derived from alpha-linolenic acid, ALA). So these fatty acids should be supplied to the diets of fishes (6). The evaluation of minerals and fatty acids' compositions of aquatic macrophytes is essential for their selection as potential feed ingredients for fish and other animals. Some of the commonly occurring freshwater macrophytes are: Azolla microphylla, A. pinnata, Enhydra fluctuans, Hydrilla verticillata, Ipomoea aquatica, Lemna minor, Marsilea quadrifolia, Pistia stratiotes, Salvinia molesta, S. natans, Spirodela polyrhiza, and Wolffia globosa. These macrophytes are distributed throughout the temperate, sub-tropical, and tropical regions of the world. Some of these macrophytes like, E. fluctuans, I. aquatica, and M. quadrifolia, are consumed as vegetables by humans in India and Bangladesh (7), and W. arrhiza has been consumed in Thailand (8). Most of these macrophytes, except H. verticillata (submerged plant), M. quadrifolia, and E. fluctuans (marginal plants) are surface floating macrophytes. All these macrophytes propagate through vegetative reproduction. Mosquito fern Azolla spp. (Azollaceae) are heterosporous free-floating ferns. It lives symbiotically with nitrogen-fixing blue-green algae Anabaena azollae. Watercress Enhydra fluctuans (Asteraceae) is a hydrophytic plant and it grows in canals and marshy places. Waterthyme Hydrilla verticillata (Hydrocharitaceae) is a submerged, rooted aquatic plant. It can grow in water up to a depth of 6 m, and in transparent water it can survive up to a depth of 12 m. The water spinach Ipomoea aquatica (Convolvulaceae) with hollow roots floats in water easily. Three members of the family Lemnaceae, namely Lemna spp., Spirodela spp., and Wolffia spp. are known as duckweeds. The plant consists of a single leaf or frond with one or more roots. Water clover Marsilea quadrifolia (Marsileaceae) is a deciduous, aquatic fern. Each green and thin stalk rises from the rhizome to the water surface; it contains a single shamrock-like leaf with four leaflets. Water cabbage Pistia stratiotes (Araceae) is a perennial monocotyledon with thick, soft, and light green leaves that form a rosette. It floats on the surface of the water and roots are hanging beneath the leaves. The short stolon connects both the mother and daughter plants. Water fern Salvinia spp. (Salviniaceae) is a perennial freefloating macrophyte. During the period of high growth, leaf size decreases and both leaves and stems fold, doubling and layering to cover more of the water surface. The nutritional value of macrophytes in terms of proteins, lipids, ash etc. varies greatly (2). The culture medium influences the mineral contents of the macrophytes (8). The extracts of seven freshwater macrophytes show no cytotoxic and anti-proliferative effects on human cell lines (9). Therefore, macrophytes should be considered as useful feed ingredients. Production of macrophytes using a standard technique may help to maintain the nutritional value of the plant and also maximize the health benefits. The aim of the present study is to evaluate the nutritional value, viz. proximate composition, minerals and fatty acids profiles of twelve cultured freshwater macrophytes. This study will help to evaluate the suitability of these macrophytes as feed ingredients for fish, poultry, and livestock. 
Culture of Macrophytes Freshwater macrophytes were collected from water bodies of Delhi, Uttar Pradesh, and West Bengal and then identified. Macrophytes were cultured in outdoor cemented tanks (1.2 × 0.35 m) with clean dechlorinated tap water (3). A 10-cm layer of soil was used for the culture of H. verticillata, M. quadrifolia, and E. fluctuans. All other macrophytes were cultured without a soil base. The depth of water was 30 cm in all culture tanks. A combination of organic manures, viz. cattle manure, poultry droppings, and mustard oil cake (1:1:1), was used at the rate of 1.052 kg/m³. All manures were decomposed for 5 days and then macrophytes were introduced individually in the outdoor cemented tanks. Three replicates were used for each macrophyte. For a steady supply of nutrients for the growth of macrophytes, the same combination of manures (at one-fourth the initial dose) was applied in the culture tanks. Manures were decomposed for 5 days in separate containers and then applied on day 6. This schedule was followed throughout the culture period. Culture tanks were monitored regularly and macrophytes were harvested when the whole surface of the tank was covered with plants. The freshly harvested macrophytes were washed twice with tap water and then with distilled water. After air drying, macrophytes were kept at 40 °C for 3 h. The dried material was then ground and sieved, and the fine powders were kept in air-tight containers and stored in a refrigerator at 4 °C for further assay. Proximate Composition Analysis The proximate composition of the macrophytes was analyzed (10). Three replicates were used for each assay. Moisture content was estimated after drying the sample at 105 °C for 24 h. The dried samples were kept in a muffle furnace at 550 °C for 8 h for the determination of ash contents. The crude protein contents were analyzed by measuring the nitrogen content (N × 6.25) with an automated micro-Kjeldahl apparatus (Pelican Instruments, Chennai, India). Crude lipid contents of the macrophytes were assayed gravimetrically (11). Carbohydrate contents were estimated by the subtraction method. Mineral Assay The mineral compositions of macrophytes were assayed using an Inductively Coupled Plasma Mass Spectrometer (ICP-MS, Agilent 7900, USA) following a standard protocol at the Instrumentation Facility of the Indian Institute of Technology, New Delhi. The powdered macrophyte sample (150 mg) was taken in a closed digestion vessel and 8 ml of suprapure 69% nitric acid (HNO₃, Merck, USA) was added to it. The sample was digested in a microwave digestion system (Multiwave PRO; Anton Paar, Austria). The digested sample was cooled to room temperature and transferred into a measuring cylinder; Milli-Q ultrapure water was added to make the volume 40 ml. The sample was then filtered through a 0.2 µm syringe filter (Thermo Scientific, USA) and collected in a glass vial. A 20 µl sample was injected through the autosampler into the ICP-MS. The standard solution for each mineral was supplied with the equipment (Agilent Technologies, USA). It was diluted with Milli-Q ultrapure water containing 1% HNO₃ to make concentrations of 20, 40, 60, 80, 100, 250, 500, and 1000 µg/l. The calibration (standard) curve was prepared. The blank was prepared with Milli-Q ultrapure water containing HNO₃ (1%). Minerals are divided into three major groups based on their concentrations in the mammal/human body, viz. macro, trace, and ultra-trace minerals (5).
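The calculations behind these two subsections are simple enough to state explicitly. The sketch below uses hypothetical placeholder numbers (not measurements from this study) to show (i) crude protein from Kjeldahl nitrogen (N × 6.25) and carbohydrate by the subtraction method, and (ii) the back-calculation of a mineral content per gram of dry sample from an ICP-MS reading on the diluted digest, using the 150 mg sample mass and 40 ml final volume stated above (the reading itself would come from the calibration curve of the 20–1000 µg/l standards).

```python
# Hypothetical example values -- not data from this study.
nitrogen_pct = 5.8      # % nitrogen from micro-Kjeldahl
moisture_pct = 7.5      # % moisture (105 °C, 24 h)
ash_pct = 18.0          # % ash (550 °C, 8 h)
lipid_pct = 7.0         # % crude lipid (gravimetric)

crude_protein_pct = nitrogen_pct * 6.25   # Kjeldahl conversion factor
carbohydrate_pct = 100.0 - (moisture_pct + ash_pct + crude_protein_pct + lipid_pct)
print(f"crude protein = {crude_protein_pct:.2f}%, carbohydrate = {carbohydrate_pct:.2f}%")

# ICP-MS back-calculation: 150 mg of sample digested and made up to 40 ml.
sample_mass_g = 0.150
final_volume_l = 0.040
reading_ug_per_l = 450.0   # hypothetical concentration in the diluted digest

content_ug_per_g = reading_ug_per_l * final_volume_l / sample_mass_g
print(f"mineral content = {content_ug_per_g:.1f} µg/g dry weight")   # 120.0 µg/g here
```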
Fatty Acid Analysis The fatty acid profiles of the macrophytes were analyzed using a Gas Chromatograph (GC) with Flame Ionization Detector (Clarus 580, Perkin Elmer, USA). The total lipid extracted from the plants (11) was used to prepare fatty acid methyl esters (FAME) by transesterification using sulfuric acid in methanol at 50 °C for 16 h (12). After extraction and purification of FAME (13), a 1 ml sample was kept in a glass vial in the GC autosampler. The sample was separated and quantified in a GC column (60 m × 0.32 mm i.d. × 0.25 µm ZB-wax, Phenomenex, UK). The data were collected using the pre-installed software (TotalChrom Workstation Ver6.3, Perkin Elmer). The FAME were identified with the help of standards (Supelco FAME 37 mix, Sigma-Aldrich, USA). Statistical Analysis The compositions of the twelve macrophytes are given as means ± standard error (SE). The differences in nutritional values of the various macrophytes were tested using one-way analysis of variance (ANOVA) and Duncan's multiple range test (14). Statistical analyses were performed using the SPSS program (version 25.0). Statistical significance was accepted at p < 0.05. Proximate Composition The moisture content was highest (11.86%) and lowest (6.26%) in E. fluctuans and W. globosa, respectively (Figure 1). Significantly higher crude protein contents were found in two duckweeds, namely L. minor and S. polyrhiza, compared with others. The highest lipid content was also recorded in L. minor, followed by S. polyrhiza. The lipid content was minimum in E. fluctuans. Ash content was significantly higher in H. verticillata, W. globosa, and P. stratiotes compared with other macrophytes. The ash content was minimum in M. quadrifolia. Carbohydrate levels were minimum and maximum in L. minor and M. quadrifolia, respectively. Macrominerals Among these twelve freshwater macrophytes, Na content was significantly higher in P. stratiotes compared with others (Figure 2A). This was followed by S. natans and E. fluctuans. A significantly higher K level was found in W. globosa compared with others. This was followed by L. minor, E. fluctuans, H. verticillata, and P. stratiotes. Ca content was highest in S. polyrhiza, followed by P. stratiotes. A significantly higher Mg level was found in P. stratiotes and S. polyrhiza compared with others. The Na, Ca, and Mg contents were minimum in M. quadrifolia compared with other macrophytes. These contents indicate the nutritional value of the macrophytes. Trace Minerals A total of nine trace minerals were found in these macrophytes (Figure 2B). Molybdenum (Mo) content was significantly higher in A. microphylla, A. pinnata, and P. stratiotes compared with others. Mn, zinc (Zn), copper (Cu), and Cd contents were significantly higher in H. verticillata compared with others. In P. stratiotes, significantly higher levels of Fe and aluminum (Al) were found compared with others. Among these macrophytes, A. pinnata ranked second for both Fe and Al. A. microphylla ranked third for Fe and fourth for Al contents. The maximum strontium (Sr) level was recorded in S. polyrhiza, followed by P. stratiotes. Lead (Pb) was found in all these macrophytes. Ultra-Trace Minerals Five ultra-trace minerals were found in these macrophytes (Figure 2C). A significantly higher level of selenium (Se) was found in S. natans compared with others. This plant was followed by H. verticillata and P. stratiotes. Se was absent in S. molesta, E. fluctuans, I. aquatica, and S. polyrhiza.
Chromium (Cr) content was significantly higher in P. stratiotes, A. microphylla, and A. pinnata compared with others. Cobalt (Co) content was significantly higher in H. verticillata compared with others. This was followed by P. stratiotes and S. natans. Nickel (Ni) and tin (Sn) levels were significantly higher in S. polyrhiza and A. microphylla, respectively, compared with others. Among these macrophytes, P. stratiotes ranked third for Ni content. Fatty Acid Profile The fatty acid profiles of twelve freshwater macrophytes were documented in the present study (Tables 1-3). The saturated fatty acid (SFA) content was significantly higher in W. globosa compared with others. This was followed by A. pinnata, L. minor, and I. aquatica. SFA content was minimum in P. stratiotes. Among SFAs, palmitic acid (C16:0) was the dominant one in all these plants. Monounsaturated fatty acid (MUFA) content was significantly higher in M. quadrifolia compared with others. Among various MUFAs, oleic acid (C18:1n-9) was present in most of the plants and its amount was also higher compared with the others (Supplementary Tables 1A-C). MUFA content was also minimum in P. stratiotes. Though in small amounts, two other monounsaturated fatty acids, palmitoleic acid (C16:1n-9) and nervonic acid (C24:1), were present in all macrophytes except E. fluctuans and A. pinnata. Another isomer of palmitoleic acid (C16:1n-7) was absent in the two species of Azolla and in S. natans. The n-6 PUFA content was significantly higher in W. globosa compared with others. This was followed by L. minor and A. pinnata. The minimum level was found in A. microphylla. Among n-6 PUFAs, LA (C18:2n-6) was the dominant one and was present in all macrophytes. Arachidonic acid (C20:4n-6) was the second dominant n-6 PUFA and was found in all macrophytes except L. minor. ALA (C18:3n-3) was the only member of n-3 PUFA present in all these macrophytes. ALA content was significantly higher in I. aquatica compared with others. This was followed by L. minor and W. globosa. The highest n-3/n-6 ratio (4.04) was found in I. aquatica (Supplementary Table 2). The ratio ranged from 0.61 (S. molesta) to 2.46 (L. minor) in the other macrophytes. DISCUSSION A wide variation in the composition of the freshwater macrophytes was recorded in the present study. The advantage of this study is that the plants were cultured in outdoor systems following a standard protocol (3). Therefore, almost the same quality of products is expected in future studies. There is scope for improvement in the nutritional value, as the quality of the culture medium influences the composition of the plants. In the present study, crude protein levels in the three members of the Lemnaceae family and E. fluctuans were above 30%, and the protein contents of the other macrophytes (except P. stratiotes, S. natans, and M. quadrifolia) were above 20%. The present study confirms the previous finding that macrophytes are rich sources of protein. The protein contents of L. minor and S. polyrhiza were 36.07 and 35.82%, respectively (3,4). A previous study in Bangladesh reported that the protein contents of E. fluctuans and I. aquatica were 16.69 and 21.45%, respectively; those macrophytes were collected from natural water bodies (15). In the present study, the protein contents of E. fluctuans and I. aquatica were 16.35 and 7.51% higher compared with the same macrophytes studied in Bangladesh. Lipid contents of I. aquatica, S. polyrhiza, and L. minor ranged from 7.16 to 8.13% in the present study. The lipid contents of E. fluctuans and I.
aquatica were 1.90 and 3.82% higher in the present study compared with the previous study (15). Ash contents of these two macrophytes were also higher in the present study compared with the previous one. The higher ash contents of H. verticillata, W. globosa, and P. stratiotes compared with other macrophytes enhance the nutritional value of these plants as feed ingredients for fish, poultry, and livestock. In the present study, lower levels of carbohydrates were observed in the macrophytes compared with plants harvested from the wild (15). Culture of macrophytes with organic manures enhanced the nutritional value of the plants. Among these macrophytes, the highest levels of the macrominerals Na and Mg were found in P. stratiotes. K and Ca were highest in W. globosa and S. polyrhiza, respectively. In the present study, among the various macrophytes, P. stratiotes ranked second and fifth for Ca and K, respectively. A previous study reported the highest Ca level in Hydrilla sp., followed by P. stratiotes and E. crassipes; there was no variation in Mg level among these three macrophytes (16). The macromineral profile of leaves and roots of P. stratiotes collected from a natural water body of Nigeria has been documented (17). That study showed that Na, K, Ca, and Mg contents were 3.73, 32.83, 2.30, and 3.70 g/kg of leaves, respectively. In the present study, Na, Ca, and Mg contents were 47, 20, and 30% higher, respectively, in P. stratiotes compared with the plants studied in Nigeria. K content was almost the same in the plants grown under the two different conditions. The Na, Ca, Zn, and Cu contents were higher in I. aquatica grown in Bangladesh compared with the macrophytes assayed in the present study (15). Mg, K, and Fe contents were higher in the I. aquatica assayed in the present study compared with the plants studied in Bangladesh, and Na, Mg, and K contents were higher in the E. fluctuans evaluated in the present study compared with the previous study in Bangladesh. The Na, K, Mg, and Ca contents were higher in A. filiculoides and S. molesta grown in swine lagoons compared with the present study (18). In A. filiculoides, Na, K, Mg, and Ca contents were 2.77, 22.5, 5.04, and 9.3 g/kg (dry matter), respectively. In S. molesta, Na, K, Mg, and Ca contents were 4.44, 34.7, 5.18, and 10.6 g/kg (dry matter), respectively. In the present study, Na to K ratio ranged from 0.038 (M. verticillata, A. microphylla, S. molesta, P. stratiotes, and S. natans, respectively. In all these macrophytes, the ratio of Na to K is less than the WHO/FAO-recommended ratio for an adult human, i.e., <0.49 (19). Various studies have shown the effect of the culture medium on the mineral profile of macrophytes (8,20,21). In different species of duckweeds, Na:K varied from 0.027 to 1.49 (K:Na = 0.67-37). In Wolffia, the ratio was 0.025 (K:Na = 40) and in another species, W. microscopica, it was 0.003 (K:Na = 276). In the present study, the Mg:Ca ratio varied from 1.20 (L. minor) to 4.65 (E. fluctuans). Ca serves as the main structural mineral and helps in metabolism; it also acts as a signal for vital physiological processes. Mg, the fourth most abundant cation in the body, is a co-factor for 350 cellular enzymes, most of which are involved in energy metabolism (22); hence, the Mg:Ca ratio should be maintained. The Mg:Ca ratio was 0.4 in duckweed (21) and 0.5 in another species, W. microscopica (8).
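These ratio comparisons reduce to element-wise arithmetic on the measured macromineral contents. The short sketch below illustrates the Na:K and Mg:Ca calculations and the WHO/FAO Na:K check; the contents used are hypothetical placeholders (g/kg dry weight), not values from this study.

```python
# Hypothetical macromineral contents (g/kg dry weight) -- placeholders, not study data.
minerals = {
    "P. stratiotes": {"Na": 5.5, "K": 30.0, "Ca": 2.8, "Mg": 4.6},
    "W. globosa":    {"Na": 1.5, "K": 39.0, "Ca": 3.1, "Mg": 4.0},
}

WHO_FAO_NA_K_MAX = 0.49   # recommended upper limit of the Na:K ratio for adults (19)

for plant, m in minerals.items():
    na_k = m["Na"] / m["K"]
    mg_ca = m["Mg"] / m["Ca"]
    status = "within" if na_k < WHO_FAO_NA_K_MAX else "above"
    print(f"{plant}: Na:K = {na_k:.3f} ({status} the WHO/FAO limit), Mg:Ca = {mg_ca:.2f}")
```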
In the present study, the Mg:Ca ratio was 1.28 for W. globosa. The trace mineral analysis showed that, among these macrophytes, P. stratiotes was a rich source of Mo, Fe, and Al. This macrophyte also has considerable amounts of Mn, Zn, and Sr. A. microphylla and A. pinnata were also rich sources of Fe and Mo. In a different strain of W. arrhiza, Fe contents ranged from 0.16 to 0.29 µg/g of freeze-dried sample (23). The Fe content of W. globosa was 254.12 µg/g in the present study. Higher levels of Zn and Cu were found in I. aquatica grown in Bangladesh compared with the present study; Fe content was higher in the present study compared with the previous one (15). The Fe content of E. fluctuans grown in the two different environments was the same. Zn and Cu contents were lower in the plants assayed in the present study compared with the plants studied in Bangladesh. The Cu content of S. molesta grown in swine lagoons was 13 g/kg, dry weight (18). In the present study, the Cu content of S. molesta was lower compared with the previous study. In the present study, the highest level of the ultra-trace mineral Se was found in S. natans. This important mineral was also present in H. verticillata and P. stratiotes. It was interesting to record that Se was absent in S. molesta, S. polyrhiza, E. fluctuans, and I. aquatica. The Se content of freeze-dried W. arrhiza was <0.03 µg/g (23). In the present study, the Se content of W. globosa was higher compared with the previous study. Significantly higher Cr levels were found in P. stratiotes, A. microphylla, and A. pinnata compared with other macrophytes. A significantly higher Co level was found in H. verticillata compared with the others. This macrophyte was followed by P. stratiotes and S. natans. Co content in all these macrophytes was >0.50 µg/g (dry weight). Among these macrophytes, the highest Ni content was found in S. polyrhiza, and this macrophyte was followed by H. verticillata and P. stratiotes. In the present study, the contents of heavy metals, viz. Cd, Cu, Pb, and Sn, of the macrophytes were within the permissible limits (Cd: 0.2, Cu: 73.3, Pb: 0.3, Sn: 250, Zn: 99.40 mg/kg of wet weight) of WHO/FAO (24). In the present study, the mineral composition was evaluated in the dry sample. Therefore, the moisture contents (minimum 90%) of the samples should be considered at the time of comparison with the permissible limit of WHO/FAO for humans (where fresh plants were considered). In seaweeds, there is no regulation on the maximum heavy metal contents (25). Various studies have shown the dietary requirements of different macro, trace, and ultra-trace minerals for different animals (Supplementary Tables 3A,B). Na requirements of grass carp (Ctenopharyngodon idella), poultry, cattle, and humans are 2, 0.012-0.200, and 0.96 g/kg diet, and 2.4 g/day, respectively. Among various fishes and prawns (Penaeus indicus), Mg requirements vary from 0.4 to 0.946 g/kg of diet. K requirements recorded for common carp (Cyprinus carpio), grass carp, and Nile tilapia (Oreochromis niloticus) are as follows: 0.9-12.4, 4.6, and 2.1-3.3 g/kg diet, respectively. K requirements for poultry, cattle, and humans are 0.3 and 2.4 g/kg diet and 3.5 g/day, respectively. Among different groups of fishes, rohu (Labeo rohita), common carp, grass carp, catla (Catla catla), and Nile tilapia require 1.9, 0.1, 2, 1.9, and 7 g Ca/kg diet, respectively. Ca requirements for poultry, cattle, and humans are 8 and 5.12 g/kg diet and 1.0 g/day, respectively.
Among various fishes, Mn, Fe, Zn, and Co requirements vary from 12-25, 30-200, 15-79, and 0.01-0.5 mg/kg diet, respectively. Nile tilapia requires Se and Cr at the rate of 0.4 and 139.6 mg/kg diet, respectively. Fe, Zn, Cu, Se, Cr, and Co requirements are also evaluated for poultry, cattle, and humans. In channel catfish Ictaluraus punctatus, Fe, Cu, Mn, Zn, Se, and Co requirements were 30, 5, 25, 200, 0.1, and 0.05 mg/kg feed, respectively (26). In fish, Fe deficiency causes hypochromic microcytic anemia, Co and Mn deficiencies result in poor growth; Zn deficiency causes growth depression, cataract, and caudal fin and skin erosion; Se deficiency results in muscular dystrophy. In fish nutrition, Co plays a significant role. In common carp, the addition of cobalt chloride/cobalt nitrate enhanced the growth and hemoglobin formation (27). Therefore, supplementation of freshwater macrophytes may help to overcome the mineral deficiency in fish and other animals without showing any negative impact (9). The fatty acid compositions of the two duckweeds L. minor and S. polyrhiza showed similarity with the previous study (3,4). In the present study, palmitic acid and oleic acid were the dominant SFA and MUFA, respectively. Similar results were also found in four duckweeds, Landoltia, Lemna, Wolffiella, and Wolffia (8). The fatty acid compositions of four aquatic plants S. cuculata, Trapa natans, L. minor, and I. reptans showed that cis-15 tetracosenoic acid and 9-hexadecenoic acid were the dominant fatty acids, and highly unsaturated fatty acids contents were higher compared with the saturated fatty acids (28). In the present study, LA was the major contributor for n-6 PUFA in all plants, and except in L. minor, arachidonic acid was also found in all macrophytes. ALA was the only member of n-3 PUFA present in these macrophytes. The presence of LA and ALA were recorded in duckweeds (8). The freshwater teleosts are capable of converting ALA to long-chain polyunsaturated fatty acids (LC-PUFA) like eicosapentaenoic acid (EPA; 20:5n-3) and docosahexaenoic acid (DHA; 22:6n-3) (29-31). Therefore, the feeding of fish with freshwater macrophytes-based diets helps to fulfill the LC-PUFA requirements of cultured fish (32,33). The n-6/n-3 PUFA was always <1; it ranged from 0.48-0.94 in different Wolffia species (23). A similar result was also found in the present study, except in two species of Salvania, where the ratio was >1.0. CONCLUSION Among these macrophytes, Na, Mg, Cr, and Fe contents were maximum in P. stratiotes; this macrophyte ranked second for Co, Sr, and Ca. H. verticillata was the richest source for Cu, Mn, Co, and Zn, and it ranked second for Se. Ca, Mg, Sr, and Ni contents were higher in S. polyrhiza compared with the others. S. natans and W. globosa were rich sources for Se and K, respectively. All these macrophytes were rich sources of n-6 and n-3 fatty acids. This study shows that macrophytes have an immense potential to be used as rich sources of minerals, as well as n-6 and n-3 PUFA for fish, poultry, and livestock. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
2022-04-11T13:30:51.697Z
2022-04-11T00:00:00.000
{ "year": 2022, "sha1": "037243a558733c1c0f9816c343d9515720cf6f15", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "037243a558733c1c0f9816c343d9515720cf6f15", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
249234559
pes2o/s2orc
v3-fos-license
GC/MS analysis and potential synergistic effect of mandarin and marjoram oils on Helicobacter pylori Abstract Helicobacter pylori can cause chronic gastritis, peptic ulcer, and gastric carcinoma. This study compares chemical composition and anti-H. pylori activity of mandarin leaves and marjoram herb essential oils, and their combined oil. GC/MS analysis of mandarin oil revealed six compounds (100% identified), mainly methyl-N-methyl anthranilate (89.93%), and 13 compounds (93.52% identified) of marjoram oil, mainly trans-sabinene hydrate (36.11%), terpinen-4-ol (17.97%), linalyl acetate (9.18%), and caryophyllene oxide (8.25%)). Marjoram oil (MIC = 11.40 µg/mL) demonstrated higher activity than mandarin oil (MIC = 31.25 µg/mL). The combined oil showed a synergistic effect at MIC of 1.95 µg/mL (same as clarithromycin). In-silico molecular docking on H. pylori urease, CagA, pharmacokinetic and toxicity studies were performed on major compounds from both oils. The best scores were for caryophyllene oxide then linalyl acetate and methyl-N-methyl anthranilate. Compounds revealed high safety and desirable properties. The combined oil can be an excellent candidate to manage H. pylori. Introduction Despite enormous progress in medicinal strategies for the treatment of many human health problems, infectious diseases continue to pose a significant threat to public health 1 . Helicobacter pylori is an extracellular gram-negative spiral bacterium, that is now recognised as a major cause of gastroduodenal diseases such as chronic gastritis, which affects nearly everyone and leads to peptic ulcers or gastric adenocarcinoma, the second most common cause of cancer death worldwide 2,3 . Standard medications can cure the infection in more than 80% of H. pylori-infected patients. Patient compliance, antibiotic resistance, and recurring infections, on the other hand, are all the major concerns that limit the use of antibiotics in the treatment of H. pylori infection that needs to be addressed 4,5 . Natural products are reported to demonstrate various biological activities [6][7][8][9][10] and as promising antimicrobials 11,12 . The importance of plant-based products for disease treatment is growing exponentially due to the increased incidence of adverse drug reactions 13 and the development of microbial resistance to the available antimicrobial drugs 14 . Essential oils derived from aromatic and medicinal plants have recently gained popularity and great scientific interest as they are a part of traditional medicine predominating all over the globe for the alleviation of various health problems. Essential oils have been shown to possess potential antibacterial, antifungal, antiviral, anticancer, and antioxidant properties such as cinnamon, orange, lemon, pepper, thyme, and Schinus [15][16][17][18][19][20] . Besides, they act as an important milestone in alternative medicine as well as natural therapies 1 . Therefore, it is reasonable to expect that a variety of plant compounds in these oils have antimicrobial effects. Among several essential oils that may be useful as antimicrobial agents, marjoram oil (Origanum majorana L., Lamiaceae) is an aromatic medicinal plant with the greatest potential for industrial applications because it shows different biological activities, including antibacterial, antifungal, antihypertensive, anti-inflammatory, and antioxidant properties [21][22][23] . 
Origanum majorana leaves and essential oil have been claimed to be useful for the treatment of respiratory and gastrointestinal problems 24 . It is one of the most popular spices used in cooking, arousing interest not only in the use of its leaves but also in its essential oil for therapeutic purposes 25 . On the other hand, the genus Citrus (Rutaceae) has been one of the most popular and commercially important crops for thousands of years. Citrus fruits are known for their nutritional values as an excellent source of vitamin C, their unique flavour, and their medicinal properties 26 . Interestingly, essential oil (EO) is the most vital by-product of citrus processing. Petitgrain mandarin essential oil is extracted from Citrus reticulata leaves. It could relieve stress and digestive problems while helping with flatulence, diarrhoea, and constipation. It is mostly used to increase circulation to the skin, reducing fluid retention and helping prevent stretch marks. Mandarin oil is used to calm the nervous system and has a tonic effect 27,28 . Moreover, it showed broad-spectrum antibacterial and antifungal agents. It inhibited the growth of several bacterial and fungal strains [29][30][31] . Furthermore, petitgrain mandarin essential oil showed potential antioxidant, anticancer (HL-60 and NB4), and radical scavenging activities 32 . The increasing emergence of H. pylori infections worldwide as well as the emerging tolerance against most currently available antibiotics has necessitated the urgent need to discover novel and highly effective antimicrobial regimens due to the lack of therapies available to control H. pylori infections. Meanwhile, it has been noticed that a few plants have been investigated recently for their H. pylori bactericidal activity. Antibacterial drug interactions can change the efficacy and either synergistic or antagonistic action, interaction between different compounds can lead to the reduction of the inhibitory activity 33 . This has driven our interest to assess the constituents of essential oils of marjoram (Origanum majorana L.) and mandarin leaves by using GC-MS, as well as evaluate the synergistic anti-H. pylori activity in-vitro of these oils, as compared to clarithromycin. An in-silico study was performed, where molecular docking was carried out on the major compounds identified from both oils on H. pylori virulent factors domains such as urease and CagA. Further in-silico pharmacokinetic and toxicity studies were performed on these major components to determine their safety margins and properties. Essential oils The whole herb of Origanum majorana was subjected to steam distillation for 5 h. The oil produced has a pale yellow colour and herbaceous sweet odour. Citrus reticulata oil was prepared by water distillation of the leaves using the Clevenger apparatus for 5 h. Its colour is pale yellow and of intensely sweet and fresh scent. Both oils were purchased from Somitt Aromatic Company that were kept in dark bottles. GC/FID analysis The GC/FID analyses were carried out on a Varian 3400 apparatus (Varian GmbH, Darmstadt, Germany) equipped with an FID detector and an Rtx-5MS fused-bonded silica column (30 m x 0.25 mm i.d., film thickness 0.25 mm; Ohio Valley, Ohio, USA); the operating conditions were: The initial column temperature was kept at 45 C for 2 min (isothermal), and then programmed rising at a rate of 5 C/ min to 300 C and held for 5 min. Detector and injector temperatures were 300 C and 250 C, respectively. 
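The oven programme described above fixes the total analysis time per injection; a one-line arithmetic check, using only the temperatures, ramp rate, and hold times stated in the text, is sketched below.

# Sketch: total oven-program time for the GC run described above
# (45 C held for 2 min, then 5 C/min up to 300 C, held for 5 min).
initial_temp_c = 45.0
final_temp_c = 300.0
ramp_c_per_min = 5.0
initial_hold_min = 2.0
final_hold_min = 5.0

ramp_time_min = (final_temp_c - initial_temp_c) / ramp_c_per_min
total_run_min = initial_hold_min + ramp_time_min + final_hold_min
print(f"Temperature ramp: {ramp_time_min:.0f} min; total run time: {total_run_min:.0f} min")
# -> Temperature ramp: 51 min; total run time: 58 min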
The injected sample volume was 0.03 mL, and the helium carrier gas flow rate was 2 mL/min. The Peak Simple 2000 chromatography data system (SRI Instruments, Torrance, USA) was used for recording and integrating the chromatograms. GC/MS analysis The analyses were carried out on a Hewlett Packard gas chromatograph (GC HP 5890 II; Hewlett Packard GmbH, Bad Homburg, Germany) equipped with the same column and conditions as for the GC/FID. The capillary column was directly coupled to a quadrupole mass spectrometer (SSQ 7000; Thermo-Finnigan, Bremen, Germany). The injector temperature was 250 °C and the helium carrier gas flow rate was 2 mL/min. All the mass spectra were recorded with the following analytical conditions: filament emission current, 60 mA; electron energy, 70 eV; ion source temperature, 200 °C; scan range, 40 to 400 amu. The diluted samples (0.5% v/v in n-hexane) were injected in split mode (split ratio, 1:15). Compounds were identified by comparison of their mass spectral data and retention indices with the Wiley Registry of Mass Spectral Data, 8th edition, and the NIST Mass Spectral Library (December 2005). The identification was further confirmed by calculation of the retention indices (RI) relative to a homologous series of n-alkanes (C6-C22), run under identical experimental conditions, as well as by matching with the literature 34-37 . Determination of the minimal inhibitory concentration (MIC) The micro-well dilution method was used to evaluate the antibacterial activity of the marjoram and mandarin oils against Helicobacter pylori (ATCC 43504, the reference strain obtained from the American Type Culture Collection), adopting the NCCLS guidelines (1998) and as previously described by Cerda et al. 38 . 100 mg of the tested samples were combined with 100 µL of 20% (v/v) bacterial suspension (OD at 600 nm = 1.0) in a flat-bottomed 96-well microplate. Serial two-fold dilutions of the oils and the standard were prepared directly in a sterile 96-well microtiter plate. Deionised water was used as a negative control, while clarithromycin was used as a positive control. The reaction mixture, prepared in Mueller-Hinton broth, was incubated at 37 °C for 3 days under microaerophilic conditions (10% CO2 and 80% humidity). Then, 25 µL of 10 mM 3-(4,5-dimethyl-thiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT), freshly prepared in water, were added to each well (final volume, 225 µL) and incubated for 30 min. The developed purple colour was measured at 550 nm using a microplate reader 1 . All tests were performed in triplicate. Inhibition (%) was calculated as follows: [(initial control absorbance - final absorbance)/(initial control absorbance)] × 100. The agar dilution checkerboard method was used to evaluate the synergistic action of both essential oils. The MIC90, the concentration of sample giving 90% inhibition, was calculated from dose-response curves. The MIC values were assessed in triplicate, using an automatic ELISA microplate reader.
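The two calculations embedded in this assay, the two-fold dilution series and the percent inhibition derived from the MTT absorbances, are compact enough to sketch. The starting concentration and the absorbance readings below are hypothetical placeholders, not data from the study.

# Sketch: serial two-fold dilution series and percent inhibition from MTT absorbances.
def twofold_dilutions(start_conc_ug_per_ml, n_wells):
    """Concentrations obtained by serial two-fold dilution across a row of wells."""
    return [start_conc_ug_per_ml / 2**i for i in range(n_wells)]

def percent_inhibition(initial_control_abs, final_abs):
    """[(initial control absorbance - final absorbance) / initial control absorbance] x 100."""
    return (initial_control_abs - final_abs) / initial_control_abs * 100.0

concs = twofold_dilutions(500.0, 10)            # hypothetical series: 500, 250, ..., ~0.98 ug/mL
control_abs = 0.82                              # hypothetical OD550 of the growth control
sample_abs = [0.05, 0.06, 0.07, 0.10, 0.21, 0.40, 0.62, 0.75, 0.80, 0.81]  # hypothetical readings

for c, a in zip(concs, sample_abs):
    print(f"{c:8.2f} ug/mL -> inhibition {percent_inhibition(control_abs, a):5.1f} %")

# The MIC90 is then read off as the lowest concentration giving at least 90 % inhibition.
mic90 = min(c for c, a in zip(concs, sample_abs) if percent_inhibition(control_abs, a) >= 90.0)
print("MIC90 (hypothetical data):", mic90, "ug/mL")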
Molecular docking The X-ray 3D structures of H. pylori urease and CagA oncogenic proteins were downloaded from the Protein Data Bank using the following IDs: 1e9y and 4dvy, respectively. All the docking studies were conducted using MOE 2019 39 , which was also used to generate the 2D and 3D interaction diagrams between the docked ligands and their potential targets. First, the two enzymes and the seven isolated major compounds were prepared using the default parameters. The active site of each target was then determined, and the seven isolated compounds were saved into a single file with the MDB extension. Finally, the MDB file containing the seven compounds was docked into the active site of each of the two enzymes. Determination of anti-Helicobacter pylori activity This study's focal objective is to compare the antimicrobial activities of the essential oils of mandarin leaves and marjoram against Helicobacter pylori. The MIC results for the two tested oils separately and combined, with clarithromycin as a positive control, are shown in Figure 3(A-D), respectively. The results revealed that marjoram oil showed higher antibacterial activity against H. pylori, with a MIC of 11.4 µg/mL (Figure 3B), relative to mandarin essential oil, which exhibited a MIC value of 31.25 µg/mL (Figure 3A). This may be attributed to the high content of oxygenated compounds identified in marjoram oil (Table 2), including trans-sabinene hydrate (36.11%), terpinen-4-ol (17.97%), linalyl acetate (9.18%), caryophyllene oxide (8.25%), and α-terpineol (6.17%). Methyl-N-methyl anthranilate (89.93%) was identified as the major constituent of mandarin leaf oil, followed by γ-terpinene (6.25%), as shown in Table 1. A combined mixture of both oils exhibited a potentially synergistic inhibitory effect against H. pylori at a MIC of 1.95 µg/mL, yielding higher inhibition (Figure 3C) than marjoram and mandarin oils separately. Furthermore, clarithromycin demonstrated the same MIC value (1.95 µg/mL). Docking study A molecular docking study was carried out on the major compounds identified from both oils against H. pylori virulence factor domains, namely urease and CagA. The results showed that caryophyllene oxide had the best fitting scores, followed by linalyl acetate and methyl-N-methyl anthranilate, as demonstrated in Table 3. The binding affinities of caryophyllene oxide, linalyl acetate and methyl-N-methyl anthranilate to urease and CagA are further demonstrated in Figures 4 and 5, respectively. Caryophyllene oxide showed the best binding affinity to the urease enzyme, with 4 hydrogen bonds (H-bonds) and a solvent interaction, while it interacted with CagA through only 2 H-bonds. Linalyl acetate exhibited 5 H-bonds and 2 metal (nickel) coordinations with urease and 3 H-bonds with CagA. Methyl-N-methyl anthranilate showed 3 H-bonds and a solvent interaction with urease, while it demonstrated 2 H-bonds and 2 hydrophobic interactions with the active site of CagA. Figures 4G and 5G show the concomitant interactions of the seven major compounds identified in both oils with the active sites of urease and CagA, respectively, further illustrating the synergistic effect of these components as anti-H. pylori agents. In-silico toxicity study As demonstrated in Table 4, all the compounds have high margins of safety and were predicted to have no potential toxicity. Pharmacokinetics study It is important for therapeutic candidates to have both acceptable pharmacokinetic and pharmacodynamic profiles. Accordingly, the pharmacokinetic profiles of the seven major compounds were computed using the online SwissADME server. As depicted in Table 2, all the compounds were predicted to have high GIT absorption, making them excellent oral candidates against H. pylori.
This high bioavailability of the seven compounds is attributed to their desired physicochemical properties including FLEX (Flexibility), LIPO (Lipophilicity), INSATU (Saturation), INSOLU (Solubility), SIZE and POLAR (Polarity) as demonstrated in Table 5 and Figure 6. In addition, all compounds showed no or minimal interaction with microsomal cytochromes and then could be taken concurrently with other medications. Most importantly, no compound was found to be a substrate for the p-glycoprotein known to be one of the resistance mechanisms of H. pylori for existing antibiotics 40 . A worthy note, the seven compounds were aligned with all of Lipinski's rules, besides none of them had any reported Pan Assay Interference (PAINS). Discussion The continuous evolution of many drawbacks with the current therapies for H. pylori, such as the prevalence of antibiotic-resistant, drug interventions, side effects, and poor satisfaction, all highlight the search for safe and effective non-antibiotic alternative medicines 5 . This has driven our interest in evaluating the bactericidal activity of marjoram and mandarin oils against H. pylori. At present, interest in essential oils has increased because of their bactericidal activity against several bacteria without the marked toxic effects of synthetic drugs. Bactericidal activity is a well-known property of volatile oils, particularly those of marjoram and mandarin. Numerous studies have confirmed the antimicrobial activity of marjoram essential oils 21,22,25,41,42 and mandarin leaf essential oils 26,[43][44][45][46] . This study investigates the bactericidal activity of the hydro-distilled essential oils of the leaves of mandarin and marjoram. The results of this study revealed that marjoram oil showed a higher effect against H. pylori than mandarin essential oil. This may be attributed to the presence of high content of oxygenated compounds identified in marjoram oil, including trans-sabinene hydrate (36.11%), terpinen-4-ol (17.97%), linalyl acetate (9.18%), caryophyllene oxide (8.25%), and a-terpineol (6.17%). While methyl-N-methyl anthranilate (89.93%) was identified as the major constituent of mandarin leaf oil, followed by c-terpinene (6.25%). Many studies of in vitro antimicrobial activity of marjoram and mandarin oils in the literature may be probably due to the action of the major compounds which have been previously tested for their bactericidal activity, such as terpinen-4-ol, a-terpineol, and c-terpinene were found as the predominant components of the essential oils obtained from the aerial parts of Origanum scabrum and Origanum microphyllum, both endemic species in Greece, exhibited a very interesting antimicrobial profile after they were tested against six Gram-negative and Gram-positive bacteria and three pathogenic fungi 47 . Furthermore, the acetone crude extract of the stem bark of Sclerocarya birrea is a promising source for anti-H pylori compounds, with terpinen-4-ol, an essential oxygenated monoterpene oil, being the most abundant agent (35.83%), and it was reported as a major mediator of the anti-H pylori activity 48 . The inhibitory activity of terpinen-4-ol in this study was similar to that of amoxicillin, one of the most effective drugs used in the eradication of H. pylori infections worldwide 49 . Additionally, trans-sabinene hydrate, terpinen-4-ol, a-terpineol, and c-terpinene have been reported as major components of marjoram oil, which exhibited antibacterial activity against food-related bacteria like E. 
coli, Salmonella cholraesius, and S. aureus in fresh sausage. Because of their antimicrobial activity against foodborne bacteria EOs could be added to food products to extend their shelf life, but changes in the taste, as well as formulation problems, could represent a problem there in 21 . Linalool (8.5%), a-terpineol (4.4%), and linalyl acetate (4.2%) are considered the most important components of Myrtle oil that showed significant antimicrobial activities against Salmonella typhimorium, Lactobacillus spp., Yersinia enterocolitica, Helicobacter pylori 50 , and significant antifungal activity when combined with amphotericin B 51 . Moreover, the antimicrobial activity of the essential oil of Thymus capitatus was tested using the broth dilution method. c-Terpinene in Thymus capitatus essential oil (10%) induced strong bactericidal activity against H. pylori strains 52 . b-Caryophyllene, is a natural bicyclic sesquiterpene, which is extracted from clove and tested for the eradication of H. pylori in a mouse model, and its effects on the inflammation of the gastric mucosa. Interestingly, b-caryophyllene showed potent antimicrobial activity against H. pylori by direct killing action. In addition, it improved the inflammation of the gastric mucosa by decreasing H. pylori number 53 . The use of compounds with natural origin has gained popularity in scientific research focussed on drug innovation against H. pylori because of their broad flexibility and low toxicity. For example, monoterpenes limonene and b-pinene resulted in MICs against H. pylori of 75 mg/mL and 500 mg/mL respectively 54 . Regarding the biological properties, we know that essential oils are complex mixtures of numerous constituents. As a result, their biological effects can be the result of the synergism of all their constituents, thus components are working together. In this case, the effect of the mixture would be greater than the pure sum of its single parts 55 . The essential oils could be used on their own, as well as in combination with other oils or synthetic active agents since synergy was observed by combining these substances. Various studies showed that the extent of antimicrobial activity and the mode of action is dependent on the additive, synergistic, or even antagonistic effects of the individual constituents. Results of this study clearly showed the synergistic effect of both marjoram and petitgrain mandarin oils on their anti-H. pylori activity. The combined oil sample showed the highest inhibitory effect against H. pylori at MIC 1.95 mg/mL. Clarithromycin, the used reference drug, demonstrated the same MIC value as the combined oil, at the same concentration used. Thus, it should be noted that the combined oils' effects are comparable to clarithromycin. An in-silico study was carried out to further verify the observed results. Docking studies are performed to reveal the binding affinity of the major components to the target enzymes 56,57 , where caryophyllene oxide showed the best fitting scores followed by linalyl acetate and methyl-N-methyl anthranilate. Furthermore, all the tested compounds showed high margins of safety in-silico, they were predicted to have no potential toxicity and were aligned with all Lipinski's rules. Additionally, all the tested compounds showed no or minimal interaction with microsomal cytochromes, thus, they could be taken concomitantly with other medications. 
Moreover, none of the tested compounds was found to be a substrate for p-glycoprotein (one of the resistance mechanisms of H. pylori against existing antibiotics) 40 or had any reported Pan Assay Interference (PAINS). From this perspective, the two oil extracts are considered promising inhibitors of both sensitive and resistant strains of H. pylori, with a notable safety margin and desirable pharmacokinetic properties. Conclusion Marjoram and mandarin oils are widely available and highly consumed by humans owing to their nutritional and medicinal value and very low toxic effects. The current study revealed the promising synergistic effect of the volatile constituents from marjoram and mandarin leaves against Helicobacter pylori, offering a potential candidate for the management of H. pylori infection. Disclosure statement The authors declare that they do not have any conflict of financial interests or personal relationships that could influence the reported work.
2022-06-02T06:22:55.427Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "baa48790417c13603765e04c59cf7d9a37d7fd72", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14756366.2022.2081846?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41d6a7a3c13494a1626d933d5b6a8c0d4c348c78", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
924902
pes2o/s2orc
v3-fos-license
Roughness of Crack Interfaces in Two-Dimensional Beam Lattices The roughness of crack interfaces is reported in quasistatic fracture, using an elastic network of beams with random breaking thresholds. For strong disorders we obtain 0.86(3) for the roughness exponent, a result which is very different from the minimum energy surface exponent, i.e., the value 2/3. A cross-over to lower values is observed as the disorder is reduced, the exponent in these cases being strongly dependent on the disorder. Phenomena associated with fracture are a central theme in materials science, with great importance in a wide range of technological applications. In recent years it has also been the subject of much attention in the statistical physics community, resulting in methods which model materials in terms of disordered rather than continuous media [1]. This approach has drawn attention to certain features apparently sharing a common basis with other, seemingly unrelated, problems showing critical behaviour. For instance, the scaling of interfaces which characterize deposition and growth processes, or propagation in substrates with a random structure, have been found to obey non-trivial laws [2]. This is also the case of certain equilibrium phenomena, where an interface is obtained as a result of interactions with a surrounding random medium. In the directed polymer problem, for instance, ζ = 2 3 is obtained for the roughness exponent of the minimum energy path in a two-dimensional embedding medium [3]. Furthermore, in numerical simulations with the random fuse model [4], the roughness exponent of the interface which characterizes electrical breakdown of a conducting network is found to be ζ = 0.70 (7). This is close to the value for the minimum energy path, ζ = 2 3 [3], and there have been speculations that they indeed are equal [5]. However, in three dimensions the fuse model exponent seems higher than the minimum energy surface exponent [6]. The two dimensional fuse network result [7] also agrees with experimental results in two dimensions [8]. However, experimental results in three dimensions suggest a much higher value, ζ = 0.8, than the fuse model gives, ζ = 0.62(5) [6]. A question related to whether or not brittle fracture falls within this class of problems, however, is how well the random fuse model actually describes fracture processes. In the fuse model each element has a single degree of freedom, i.e., the voltage difference between neighbouring nodes. The interface obtained, which is characteristic of electrical breakdown rather than a physical crack, nevertheless provides valuable information on the interplay between quenched disorder and the current distribution. As opposed to vector fracture, where the elastic elements each have three degrees of freedom, the random fuse model is thus referred to as describing scalar fracture. It is due to the analogy between Ohm's law and Hooke's law that the electrical problem has been regarded as similar to its elastic counterpart. In this Letter, we report the results of computer simulations using the elastic beam model [10,11] which has previously been used to study the scaling properties of forces and displacements in brittle fracture. Furthermore, we address the universality issue by using two different types of distribution with a wide range of disorders. 
The beam model in two dimensions may be defined as a regular square lattice of size L×L, where the spacing is unity, and each node in the horizontal and vertical in-plane directions is connected to its nearest neighbours by elastic beams. A given beam is then soldered to other beams in such a way that, upon subsequent displacement of neighbouring nodes, the angle between beams remains the same as in the original underlying square lattice. The three possible degrees of freedom, i.e., translations in the horizontal and vertical directions and rotations about the axis perpendicular to the plane, thus allow for bending moments as well as axial elongation and compression. The beam is also imagined as having a certain thickness, providing shear elasticity. The forces between neighbouring nodes may be derived by considering a concentrated end load on an elastic beam with no end restraints [12]. We define, for notational convenience, which entails an anti-clockwise labeling beginning with the beam to the right of i. With δz = z j −z i denoting the displacements, we obtain at node i, due to the beam which connects i with j, , for the contributions in moment, shear and strain, respectively. Prefactors characteristic of the material and its dimensions in Eq. (2) depend on where E is Young's modulus, A and I the area of the beam section and its moment of inertia about the centroidal axis, respectively, and G the shear modulus. For the sum of forces and moments on each node, we then have the lattice being in equilibrium when, at any point in the fracture, Σ ix = Σ iy = Σ iθ = 0. Such a configuration is realized when the elastic energy, i.e., is at its minimum. This minimum we obtain via relaxation, using the conjugate gradient method with a tolerance in the residual error of ǫ = 10 −12 . For a brittle material we assume that each beam is linearly elastic up to the breaking threshold. Using t S and t M for the strain and bending thresholds respectively, the breaking criterion [11], inspired from Tresca's formula, is given by where |M| = max(|M i |, |M j |) is the largest of the bending moments at the two beam ends i and j. The fracture process is initiated by imposing on the lattice an external vertical displacement of unit magnitude, i.e., a displacement which at the top row corresponds to one beam in length. In its initial state, the lattice now consists of horizontally undeformed beams and beams which in the vertical direction are stretched lengthwise by an amount 1/L. With an extra row at the top there are L(L − 1) inner nodes, for which any neighbouring beam may be broken, and L nodes each at the top and bottom, the positions of which are held fixed. This defines the vertical boundary conditions. As for the horizontal direction, previous results obtained with the random fuse model have relied on the use of periodic boundary conditions. This is a good strategy to avoid edge-effects, especially in a situation where numerical resources are limited to small system sizes. However, when considering fracture in a periodic system, the topology is essentially that of a plane intersecting a cylinder. We thus need to address the problem of how the trace of a sine curve affects results obtained for the roughness. To avoid this, we instead use open boundary conditions, i.e., we adopt the procedure used in Ref. [13] of subtracting the average vertical drift of the crack as it traverses the width of the lattice. 
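A minimal sketch of how the interface width and the roughness exponent can then be extracted is given below. It assumes the final crack is stored as a single-valued height profile y(x), takes the "average vertical drift" mentioned above as a linear trend across the lattice, and uses synthetic random-walk profiles as stand-ins for the simulated cracks; it is an illustration of the analysis, not output of the beam model.

# Sketch: interface width W of a crack profile y(x) after removing the average
# vertical drift (here, a linear trend), and the roughness exponent zeta from
# a log-log fit of W against the system size L.
import numpy as np

def interface_width(y):
    """RMS fluctuation of the crack height about its linear drift across the lattice."""
    x = np.arange(len(y))
    drift = np.polyval(np.polyfit(x, y, 1), x)
    return np.sqrt(np.mean((y - drift) ** 2))

rng = np.random.default_rng(0)
sizes = np.array([8, 16, 32, 64, 128])
widths = []
for L in sizes:
    # synthetic stand-in profiles: cumulative sums of random steps, averaged over samples
    samples = [interface_width(np.cumsum(rng.normal(size=L))) for _ in range(200)]
    widths.append(np.mean(samples))

zeta, _ = np.polyfit(np.log(sizes), np.log(widths), 1)
print("interface widths:", np.round(widths, 3))
print("roughness exponent zeta from W ~ L^zeta:", round(zeta, 2))  # ~0.5 for a random walk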
The first beam to break is that for which the sum of the two ratios is largest, this being the vertically oriented beam which has the lowest value of t S . If all threshold values are approximately the same, the next beam to break will be one of the nearest lateral neighbours since these now carry a larger load than other beams in the lattice. The case of no disorder is thus one in which the crack propagates horizontally from the initial damage, taking the shortest possible path to break the lattice apart. This results in a smooth interface. Introducing disorder in the threshold values, material strength is no longer uniformly distributed throughout the lattice and consequently the crack will not necessarily develop from the initial damage point. Instead microcracks and voids form wherever the stress concentration most exceeds the local strength, i.e., wherever Eq. (6) dictates that the next beam should be broken. Towards the end of the process some of these merge into a macroscopic crack, forming a sinuous, or rough, interface which is characteristic of the disorder in the system. Hence we have a highly correlated process in which the quenched disorder and the nonuniform stress distribution combine to determine where the next break will occur while, simultaneously, the stress distribution itself continually changes as the damage spreads. To study this, we generate a random number r on the unit interval [0, 1] and let this represent the cumulative threshold distribution. Assigning the threshold values according to the threshold distribution approaches that of no disorder when |D| → 0. In the fuse model [7,14], several types of distribution have been used for the threshold values. Although at present we restrict ourselves to Eq. (7), the two cases D < 0 and D > 0 represent widely different distributions, i.e., for D > 0 the distribution is a power law with a tail which extends towards weak beams whereas for D < 0 the tail of the distribution extends towards strong beams. In both cases we use a wide range of disorders between |D| = 1 12 and |D| = 4. The roughness is now obtained for a large number of lattices, each of size L, the thresholds being re-cast according to Eq. (7) each time a new sample is broken. Generally the number of samples depend on L as well as, to a lesser degree, on the disorder D. Presently lattices of all sizes from L = 4 up to L = 20 were studied, with sample sizes ranging from N = 250000 in the the former case to about N = 1000 in the latter case. For the larger systems we studied, typical sample sample sizes are shown in Table 1. Fig. 1 shows a log-log plot of W as a function of L for a range of disorders with D > 0. For all L, the interface is seen to become more rough with increasing disorder. Each curve also has a characteristic crossover, beyond which the asymptotic relationship is that of a straight line, i.e., where W ∼ L ζ . This feature is seen to be disorder dependent, with the onset of asymptotic behaviour being deferred to larger L as D increases. At some point, the crossover becomes difficult to define before it again reduces to a point well within the range of the system sizes presently studied. Closely associated with this behaviour is an even more striking feature, i.e., the dependency of ζ upon the disorder. Specifically, with the enumeration scheme used for the disorders in Fig. 1, we obtain (f) ζ = 0. 16 The behaviour of ζ as a function of the disorder D is shown in Fig. 3. Here, estimates for ζ which are difficult to define are also included. 
Hence, corresponding to the open circles in Fig. 1 we use (d) ζ = 0.89, based on the data for L = 27 to L = 100 and (e) ζ = 0.31, based on the four uppermost data points. In Fig. 2 the corresponding estimates are (c) ζ = 0.88, based on data for L = 19 up to L = 63, and (d) ζ = 0.43, again based on the four uppermost data points. As |D| → 0, the interface becomes sufficiently smooth to frequently avoid detection by the course-graining of the lattice. Hence an accurate estimate for ζ now depends on the relative occurrence of those samples which are unusually rough for the given disorder, implying an excessively large amount of samples for each L. To obtain an estimate nonetheless, we note that the two sets of exponents for the six lowest values of |D| corresponding to D < 0 and D > 0, respectively, each very nearly lie on a straight line. The intersection between the two lines is D ≈ 0.04, with a limiting value of ζ ≈ 0.65 for the exponent. Although this is very close to the two-thirds value frequently referred to in connection with scalar fracture, the result obtained for D = 0.08 does not significantly alter the D = 0.1 result, which is ζ = 0.60. Hence, the lines may taper off at this value, the limit |D| → 0 possibly representing a Laplacian random walk [15] whereby crack advancement is governed by local conditions surrounding the crack tip. Recently the role of propagating stress waves during brittle fracture has been investigated [16]. In our model the elastic wave emitted from a breaking beam would then result in stresses exceeding those due to the elastic deformations only, the stress enhancement being especially important in the case of an imminent burst of failures. Although this feature is not included in the present quasistatic approach, the comparison with experimental results for ζ in two dimensions [8] should remain valid, i.e., Poirer et al. obtained ζ = 0.73 ± 0.07 by considering a two dimensional stacking of parallel collapsible cylinders (drinking straws) while Kertesz et al. and Engøy et al. obtained ζ ≈ 0.73 and ζ = 0.68 ±0.04 by studying tear lines in (wet) paper and fractures in thin wood plates, respectively, none of which should generate stress waves significant enough to modify the result. To summarize, the main feature of our results is the dependency of ζ upon the disorder, apparently contradicting a universal value. Whereas values obtained at low disorders vary considerably, however, the more or less constant ζ obtained at moderate and strong disorders nevertheless seems to be consistent with a universal value of ζ ∼ 0.86. While thus being similar to the experimental results in three dimensions, our results are different from other two dimensional results. . Labels and symbols refer to the enumeration scheme used in Fig. 1 and Fig. 2, with (⋆) referring to the extrapolated value for D ≈ 0, i.e., ζ = 0.65.
2014-10-01T00:00:00.000Z
2000-12-19T00:00:00.000
{ "year": 2000, "sha1": "572b9fd8b1683af1fa8cfaa39eda2d673281a411", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0012344", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d05a3a97a30a20019c514527dc7a87187a7b7eb8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
228102038
pes2o/s2orc
v3-fos-license
Dynamics of proteins with different molecular structures under solution condition Incoherent quasielastic neutron scattering (iQENS) is a fascinating technique for investigating the internal dynamics of protein. However, low flux of neutron beam, low signal to noise ratio of QENS spectrometers and unavailability of well-established analyzing method have been obstacles for studying internal dynamics under physiological condition (in solution). The recent progress of neutron source and spectrometer provide the fine iQENS profile with high statistics and as well the progress of computational technique enable us to quantitatively reveal the internal dynamic from the obtained iQENS profile. The internal dynamics of two proteins, globular domain protein (GDP) and intrinsically disordered protein (IDP) in solution, were measured with the state-of-the art QENS spectrometer and then revealed with the newly developed analyzing method. It was clarified that the average relaxation rate of IDP was larger than that of GDP and the fraction of mobile H atoms of IDP was also much higher than that of GDP. Combined with the structural analysis and the calculation of solvent accessible surface area of amino acid residue, it was concluded that the internal dynamics were related to the highly solvent exposed amino acid residues depending upon protein’s structure. Scientific Reports | (2020) 10:21678 | https://doi.org/10.1038/s41598-020-78311-4 www.nature.com/scientificreports/ profile hinders quantitative characterization of the remaining motion, internal dynamics. To overcome such a problematic situation, double Lorentzian function, which takes into account for the contribution of both rigid body motion and internal dynamics, was developed by Perez et al. 12 Through the optimum selection of energy resolution and energy window of QENS spectrometer, the internal dynamics were successfully decoupled from the observed iQENS profile with this function [12][13][14] . In order to extend the further usability of this function, we developed a new analyzing method that provides the precise contributions of translational and rotational diffusions to the observed iQENS profile explicitly with the aid of computational technique. We then apply this newly developed method for studying the internal dynamics of two proteins, GDP and IDP. We investigated MurD 15 as a typical GDP, and the intrinsically disordered region (IDR) of Hef (helicaseassociated endonuclease for fork-structured DNA) (Hef-IDR) as a typical IDP 16 . These proteins have similar translation and rotational diffusion constants, thus, offering an advantage for studying the effects of different molecular structures on internal dynamics. Here, we characterized and elucidated origin of internal dynamics. Solution structures of MurD and Hef-IDR. With the usage of recent state-of-the art software 17,18 , it is possible to reconstruct low-resolution three-dimensional structure from one-dimensional SAXS profile. Our aim for reconstruction of low-resolution three-dimensional structure from SAXS measurements is to compute the translational and rotational diffusion constants, which are used to calculate the contribution of rigid body motion to iQENS profile. We then performed SAXS measurements prior to iQENS measurements. The crystal structure of MurD (PDB code: 1e0d.pdb) deviated from that calculated from the SAXS profile (χ 2 = 90.9) 19 . We then conducted normal mode analysis (NMA) of the crystal structure to determine the structure of MurD in solution 19 . 
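The χ² values quoted here and below are the usual goodness-of-fit score between a model scattering curve and the measured I(Q). A minimal sketch is given below; the curves are hypothetical Guinier-like placeholders, and a single least-squares scale factor is fitted to the model, which is a common convention rather than necessarily the exact procedure used in the study.

# Sketch: reduced chi-square between an experimental SAXS curve and a model curve,
# with an optimal overall scale factor applied to the model.
import numpy as np

def reduced_chi2(i_exp, sigma, i_model):
    """Reduced chi-square with a weighted least-squares scale factor on the model intensity."""
    w = 1.0 / sigma**2
    c = np.sum(w * i_exp * i_model) / np.sum(w * i_model**2)
    return np.mean(((i_exp - c * i_model) / sigma) ** 2)

q = np.linspace(0.02, 0.3, 50)                   # 1/Angstrom
i_exp = np.exp(-(q * 28.0) ** 2 / 3.0)           # hypothetical Guinier-like data, Rg = 28 A
sigma = 0.02 * i_exp + 1e-4                      # hypothetical uncertainties
i_good = np.exp(-(q * 27.5) ** 2 / 3.0)          # model close to the data
i_poor = np.exp(-(q * 20.0) ** 2 / 3.0)          # model far from the data

print("chi2 (close model):", round(reduced_chi2(i_exp, sigma, i_good), 1))
print("chi2 (poor model): ", round(reduced_chi2(i_exp, sigma, i_poor), 1))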
The SAXS profile of the NMA-deformed structure reproduced the experimental SAXS profile of MurD (χ 2 = 5.4) as shown in, Fig. 1a. Contrary to GDP, IDP has largely different configurations 20 . Following this notion, we searched the configurations of Hef-IDR with MultiFoXS 18 , which enables structural modeling and generated the representative structures 1-5. Considering that the populations of structures 1-5 ). By averaging the two-dimensional S(Q, ω) over each region, one dimensional S(Q, ω) at six values were gained for MurD and Hef-IDR (Fig. S2). We then analyzed their S(Q, ω) and Fig. 2 shows S(Q, ω) at = 0.80 Å −1 , as a representative. Compared with the resolution function (orange broken lines), the spectra broadened in both samples, indicating that the motion of the proteins was anharmonic. Since the observed S(Q, ω) consists of three dynamics, translational diffusion, rotational diffusion and internal dynamics 22 , we had to decompose the observed S(Q, ω) into them. The sum of translational and rotational diffusions 12 named as a rigid body motion and then its contribution to S(Q, ω), S(Q, ω) rb is given by following functions: www.nature.com/scientificreports/ S(Q, ω) rb is the S(Q, ω)of rigid body motion, S(Q, ω) trans is the S(Q, ω)of translational diffusion, S(Q, ω) rot is the S(Q, ω)of rotational diffusion, Res(Q, ω) is the resolution function, S(Q, ω) rb,ex is the S(Q, ω) rb convoluted with a resolution function,D t is the translational diffusion constant, H t is the hydordynamic function to translational diffusion constant, R h is the hydrodynamic radius, r is the the distance from the center of hard sphere where an isotropic diffusion was assumed, j l is the lth order sperical Bessel function, ⊗ is the convolution operator. The D t and D r values of MurD and Hef-IDR were computed using "HYDROPRO" 23 . The D t and D r values of MurD determined using the single structure resolved by the SAXS measurements were 6.42 × 10 -7 cm 2 /s and 5.34 × 10 6 s −1 , respectively. As described above, the SAXS profile of Hef-IDR reproduced the ensemble-averaged profile over five structures. For Hef-IDR, the D t values of structures 1-5 of Hef-IDR were calculated to 6.98 × 10 -7 cm 2 /s, 6.47 × 10 -7 cm 2 /s, 6.68 × 10 -7 cm 2 /s, 7.05 × 10 -7 cm 2 /s and 7.23 × 10 -7 cm 2 /s, respectively. The D r values of structures 1-5 of Hef-IDR were calculated to 5.73 × 10 6 s −1 , 5.42 × 10 6 s −1 , 5.03 × 10 6 s −1 , 5.85 × 10 6 s −1 and 6.54 × 10 6 s −1 , respectively. Five sets of separately calculated D t and D r values were averaged depending on their populations in the ensemble-averaged profile, and the averaged D t and D r values were 6.71 × 10 -7 cm 2 /s and 5.51 × 10 6 s −1 , respectively, for Hef-IDR. It was confirmed that the diffusion constants of MurD and Hef-IDR were almost the same. Considering the concentration of MurD and Hef-IDR used for iQENS measurements, we also calculated H t for both samples. A modified function was then considered to reproduce the observed S(Q, ω). Sarter et al. 14 reported that observed S(Q, ω) could be expressed as a convolution of two dynamic scattering functions S(Q, ω) rb and S(Q, ω) int , which describes the internal dynamics given by Eq. (2): where δ(ω) and A(Q) correspond to the delta function and elastic incoherent structure factor, respectively. For the simplification of calculation, it is assumed that the S(Q, ω) int is described by a single Lorentz function as follows. where Γ indicates the relaxation rate of internal dynamics. 
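Before the complete fit function is assembled in the next step, it is worth noting how the diffusion constants above translate into spectral widths. Assuming simple Fickian translational diffusion, the translational part of the rigid-body term is a Lorentzian whose half-width at half-maximum is ħD_tQ²; the sketch below converts that into the spectrometer's energy units using the HYDROPRO values quoted above and a few representative Q values within the measured range.

# Sketch: HWHM of the translational-diffusion Lorentzian, hbar * D_t * Q^2, in micro-eV.
HBAR_UEV_S = 6.582e-10            # hbar in micro-eV * s

def hwhm_ueV(d_t_cm2_per_s, q_inv_angstrom):
    d_t_A2_per_s = d_t_cm2_per_s * 1e16          # 1 cm^2 = 1e16 A^2
    return HBAR_UEV_S * d_t_A2_per_s * q_inv_angstrom**2

for label, d_t in [("MurD", 6.42e-7), ("Hef-IDR (ensemble average)", 6.71e-7)]:
    for q in (0.4, 0.8, 1.6):
        print(f"{label}: Q = {q:.1f} 1/A -> HWHM = {hwhm_ueV(d_t, q):.1f} micro-eV")

At Q = 0.80 Å⁻¹ this gives a translational broadening of roughly 3 µeV, i.e. well below the 12 µeV energy resolution quoted later, which is consistent with the strategy of decoupling the slow rigid-body motion from the faster internal dynamics.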
By substituting S(Q, ω) rb into Eqs. (2) and (4) is obtained as follows: Taking into consideration of fast dynamics such as the rotation of methyl groups 24 in the modified fit function, we also introduced the contribution of a flat background (B(Q)). Finally, the following modified fit function was obtained: where S(Q, ω) mod,ex corresponds to S(Q, ω) mod convoluted with a resolution function. The pink and blue lines in Fig. 2 show the results of fits with Eq. (5) for MurD and Hef-IDR, respectively, and both S(Q, ω)s were appropriately described by this modified function. Figure 3a shows the Q 2 dependence of Γ values from both samples. The Γ values were larger for Hef-IDR than MurD, meaning that the averaged internal dynamics were faster for Hef-IDR than MurD. Namely, we succeeded to exhibit the difference of internal dynamics between GDP and IDP quantitatively through the application of newly developed analyzing method to the observed S(Q, ω) profiles. Prior to the further detailed analysis of internal dynamics, we briefly explain the observable H atoms of proteins in iQENS measurements. Both MurD To consider the origin of the difference in the internal dynamics between GDP and IDP, we focused on the mobility of H atoms embedded on the peptide chains. We then analyzed the Q dependence of the elastic incoherent structure factor (A(Q)) ( Fig. 3b) because the mean square displacement (< u 2 >) of mobile H atoms within a protein can be determined using the following equation 14,25 : where the value of p corresponds to the fraction of mobile H atoms. The calculated values for < u 2 > and p were 2.1 ± 0.4 Å 2 and 0.33 ± 0.07, respectively, for MurD, and 2.1 ± 0.2 Å 2 and 0.85 ± 0.05 respectively, for Hef-IDR. Although < u 2 > was not affected by the different molecular structures, the fraction of mobile H atoms was higher in Hef-IDR than that in MurD. It should be an origin of difference of internal dynamics between them. Discussion We considered the difference in p values between MurD and Hef-IDR. It is considered that the H atoms with high mobility are considered to be located at the surface of protein based on theoretical shell model 26 . In consistent with this idea, Zanotti et al. 27 also reported that peripheral water-protein interaction affected the internal dynamics of protein through the combination of QNES and 13 C-NMR. To clarify H atoms in MurD and Hef-IDR that were exposed to solvent, we obtained their solution scattering data using SAXS. The mean solvent accessible surface areas of the amino acid residues of MurD and Hef-IDR with their solution structures determined by GETAREA 28 (probe particle radius of 1.4 Å), were 44.1 and 117.2 Å 2 , respectively. It implies that the mean value of SASA of Hef-IDR was higher than that of MurD. From the normal mode analysis for MurD, it was revealed that higher SASA possessed higher mobility from NMA (refer to Fig. S3). It is considered that there exist the www.nature.com/scientificreports/ relationship between internal dynamics and SASA. Then, we adopted the idea that amino acid residues exposed to a solvent could affect the internal dynamics. In the following, we explain our idea in more detail. 1. Under the assumption that a shape of amino acid residue is sphere, the entire surface area (S whole ) of each amino acid residue was calculated from its volume. 2. The number of H nex in the entire protein, N whole , was calculated 29 . 3. 
The solvent accessible surface area (S solvent ) of each amino residue was calculated for both MurD and Hef-IDR as shown in Fig. 4. 4. The fractions of solvent exposed surface area to the entire surface area (S solvent /S whole ) were defined as f. and f values were calculated for all the constituting amino acid residues for both MurD and Hef-IDR. 5. To judge whether a given amino acid residue is located at the solvent or not, we set the threshold f value (f* t ) as the quantitative criteria: Here, the amino acid residue with f values exceeding f* t (f > f* t ) is regarded to be exposed to the solvent and named as a exp . 6. For each setting f* t value (0.0 ~ 0.8), the entire amino acid resides were classified and then the number of non-exchangeable atoms (H nex ) in the a exp (N surface (f* t )) was calculated. 7. The number ratio r H (f* t ) (= N surface (f* t )/N whole ) was calculated. Pink and blue lines in Fig. 5 indicate the results for MurD and Hef-IDR, respectively. 8. Because exchangeable H atoms of protein in D 2 O were replaced with deuterium atoms, the iQENS of the protein was dominated by mobile non-exchangeable H atoms that are mainly located in amino acid residues exposed to solvent. This means that the p values should agree with the r H value. In this procedure, we should find the optimum f* t value that reproduce the r H (p) values from both MurD and Hef-IDR simultaneously. For this purpose, we firstly calculated the following χ 2 values against r H (p) for MurD (χ 2 m (f* t )) and Hef-IDR (χ 2 h (f* t )), respectively. ) from Hef-IDR, r H (p) value from Hef-IDR, and error value of r H (p) value from Hef-IDR, respectively. As a next step, we named the sum of χ 2 m (f* t ) and χ 2 h (f* t ) as a total χ 2 (χ 2 tot (f* t )). It is considered that optimum f* t value could be determined by finding the condition where χ 2 tot (f* t ) exhibited the lowest value. We then plotted χ 2 tot (f* t ) in Figs. 1-3 and χ 2 tot (f* t ) exhibited the smallest value at f* t = 0.6. We then concluded that f* t value of 0.6 satisfied the value calculated with SAXS and that observed using iQENS. All steps are schematically summarized in Fig. S5. Figure 6 shows a schema of amino acid residues in both MurD and Hef-IDR located in surface area that can access the solvent under conditions of f* t = 0.6. Such amino acid residues were notably segregated only on the surface of MurD, but these were distributed and chained within the entire structure of Hef-IDR. These findings indicated that the amino acid residues at a surface with access to a solvent is responsible for the internal dynamics of proteins depending on their molecular structures. Thanks to the application of the newly developed analyzing method, we could discuss the difference of internal dynamics of GDP and IDP quantitatively. Furthermore, we could reach the present conclusion that can interpret the internal dynamics of GDP and IDP without inconsistency. Summary The internal dynamics of two proteins, globular domain protein (GDP) and intrinsically disordered protein (IDP) in solution, were studied by measuring incoherent quasielastic neutron scattering (iQENS) with state-of-the art spectrometer QENS spectrometer and analyzing them with a newly developed method assisted by computational technique. It was clarified that the average relaxation rate of internal dynamics in IDP was larger than that of GDP quantitatively. From the further detailed analyzes, the fraction of mobile hydrogen (H) atoms of IDP was higher than that of GDP. 
Calculation of the solvent accessible surface areas per amino acid residues revealed that the fraction of highly solvent exposed H atoms was related to the fraction of mobile H atoms. Then, present iQENS studies clarified that non-exchangeable H atoms that are mainly located in amino acid residues exposed to solvent was relevant to the internal dynamics depending upon protein's structures. It is strongly expected Small-angle X-ray scattering (SAXS) measurements. SAXS measurements of Hef-IDR (3.4 mg/mL) were performed with a BioSAXS 1000 system mounted on a MicroMax007HF X-ray generator (Rigaku, Tokyo, Japan) at 25 °C and a PILATUS100K detector (DECTRIS, Baden-Dättwil, Switzerland) located 485 mm from the sample. The X-ray wavelength was 1.542 Å. One-dimensional scattering data (I(Q)) were obtained by radial averaging. Scattered intensity was converted into absolute scatter intensity, and calibrated based on the scatter (b) (a) 90°F igure 6. Schematic view of solvent-exposed amino acid residues for MurD and Hef-IDR when f* t = 0.6. (a) In the case of f* t = 0.6, the amino acid residues, which are located in solvent accessible surface area, were depicted for MurD by purple spheres. The domain 1, 2, 3 were depicted by green, red and blue sticks, respectively. (b) In the case of f* t = 0.6, the amino acid residues, which are located in solvent accessible surface area, were depicted for Hef-IDR by blue spheres. This figure is prepared by the usage of Adobe Illustrator CC 2015. www.nature.com/scientificreports/ intensity of water (I(Q) water = 1.632 × 10 -2 cm −1 ). Data were processed using SAXSLab (Rigaku) and the ATSAS package 17,31 . SAXS measurements of MurD (3.0 mg/mL) at 25 °C were performed with a NANOPIX (Rigaku, Tokyo, Japan). X-rays emanating from a high-brilliance point-focused X-ray generator (MicroMAX-007HF, Rigaku, Tokyo, Japan) were focused using a confocal mirror (OptiSAXS) and collimated with a confocal multilayer mirror and a two-pinhole collimation system with lower parasitic scattering. The scattered X-rays were detected using a two-dimensional (2D) HyPix-6000 semiconductor detector (Rigaku, Tokyo, Japan). We covered the Q range (0.015-0.5 Å −1 ) by measuring SAXS profiles at sample-to-detector distances (SDD) of 1320 and 300 mm. One-dimensional I(Q) values were obtained by radial averaging the 2D scattering patterns. The scatter intensity from the protein was converted to an absolute scale by comparison with the scatter intensity of water. All data were reduced and processed using SAngler 32 . iQENS measurement. We measured iQENS measurements using an inverted geometry time-of-flight spectrometer (BL02 DNA) installed 21 at the Materials and Life Science Experimental Facility (MLF) in J-PARC, Tokai, Japan. The magnitude of the scattering vector Q (Q = 4πsinθ/λ f , where 2θ and λ f = 6.26 Å are the scattering angle and the wavelength of the analyzed neutron, respectively) ranged from 0.12 to 1.78 Å −1 . Samples were loaded into double-cylindrical aluminum cells (outer diameter: 14 mm, inner diameter: 13 mm, height: 45 mm) under a helium atmosphere. The resolution function was determined from the measurement of vanadium at 298 K and the calculated energy resolution (δE) was 12 µeV. Solutions of Hef-IDR and MurD (8.0 and 52.0 mg/ mL, respectively) were measured at 25 °C. Dynamic scattering laws from D 2 O buffer were subtracted from those of protein solutions based on their volume fractions to obtain the protein dynamics. 
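Two small calculations underlie the measurement description above: the momentum transfer Q obtained from the scattering angle and the analyzed wavelength, and the buffer subtraction weighted by volume fraction. The sketch below illustrates both; the scattering angles are chosen only so that they roughly reproduce the quoted 0.12-1.78 Å⁻¹ range, and the volume-fraction weighting shown is one common convention, since the exact weighting is not spelled out beyond "based on their volume fractions". The protein volume fraction used is a hypothetical placeholder.

# Sketch: momentum transfer from the scattering geometry and volume-fraction-weighted
# D2O buffer subtraction.
import math

def momentum_transfer(two_theta_deg, wavelength_A=6.26):
    """Q = 4*pi*sin(theta)/lambda, with 2*theta the scattering angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength_A

def subtract_buffer(s_solution, s_buffer, protein_volume_fraction):
    """One common convention: scale the buffer spectrum by the solvent volume fraction."""
    return s_solution - (1.0 - protein_volume_fraction) * s_buffer

# angles chosen to roughly reproduce the quoted Q range of 0.12-1.78 1/A
print(round(momentum_transfer(7.0), 2), round(momentum_transfer(125.0), 2))
print(round(subtract_buffer(1.00, 0.60, 0.04), 3))   # hypothetical intensities and volume fraction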
Data availability
The datasets generated and analyzed during the current study are available from the corresponding authors on reasonable request.
2020-12-12T14:08:02.774Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "38f71d28bad0706cdb19013194a3d882181b3466", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-78311-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16706ac151a32b6b03c88cbd0d259cdc5399a2dd", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
251749667
pes2o/s2orc
v3-fos-license
Economic impact of gastrointestinal nematodes in Morada Nova sheep in Brazil This study evaluated the economic impact of gastrointestinal nematode (GIN) infection in Morada Nova lambs under different parasite chemical control conditions. For this, 246 lambs, in the rainy and dry season, were randomized into groups according to their anthelmintic treatment with levamisole: control (CT: no treatment); routine treatment (RT: treated every 42 days); and targeted selective treatment (TST: treated according to the average daily weight gain, DWG). From 63 days of age (D63) to D210, the lambs were weighed and monitored for GIN infection parameters. Spending on anthelmintics in the production system was 1.3% of the total economic result. The economic result per animal (R$ 5.00 = US$ 1.00) was higher in the RT group, amounting to US$ 6.60 in the rainy and US$ 5.69 in the dry season, due to higher DWG. Thus, RT presented economic results 14.4% and 10.9% higher than CT, and 7.2% and 1.9% higher than TST, in the rainy and dry season, respectively. However, fast development of resistance made RT unfeasible. Here, the economic impact of GIN infection on a national scale is discussed, demonstrating its importance and the impossibility of profitable and sustainable sheep production without adequate control. Introduction Brazil has 19.7 million sheep, and the national flock increased by 4.05% in 2018 (IBGE, 2019). The northeastern region has 13.5 million head, equivalent to 71.05% of the national flock, followed by the southern, central-western and southeastern regions, with 3.95 million, 1.0 million and 600,000 head, respectively, corresponding to 20.53%, 5.26% and 3.16% of the total Brazilian sheep flock (IBGE, 2019;Magalhães et al., 2020). However, due to lack of organization in the production chain, sheep production still does not supply the domestic market with efficiency and quality. One of the biggest problems relates to the deficit in constancy of supply, which makes it difficult to structure the sector, including the setting of slaughter scales. Consequently, importing of sheep meat becomes necessary: in 2020, about 3,219 tons were imported (Zen et al., 2014;FAO, 2020). Parasitic diseases caused by gastrointestinal nematodes (GIN) are among the factors that limit the production of small ruminants worldwide. They are responsible for high economic losses, due to slow growth, weight loss, reduced food consumption, reduced milk production, diminished fertility and, in cases of massive infection, high mortality rates. These clinical signs are caused by lesions in the gastrointestinal mucosa, which disturb nutrient absorption; and by hematophagous spoliation, which leads to subclinical infection. In some cases, these effects are associated with reduced weight gain and anemia and, consequently, with low body condition and carcass yield. Haemonchus contortus is the most clinically and economically important nematode in small ruminant farming in Brazil (Cavalcante et al., 2009;Chagas et al., 2013). Females can release between 5,000 and 10,000 eggs daily (Romero & Boero, 2001). Their pre-patent period, i.e. the period between ingestion of the infective larvae (L 3 ) by the host and elimination of eggs in feces, is 18 to 22 days . H. contortus settles in the abomasum and can ingest 0.05 to 0.08 mL of blood/day. Therefore, when animals have high parasite loads, they present anemia and submandibular edema and may die (Amarante, 2014). GIN control has been largely carried out through use of anthelmintics. 
However, excessive dependence on these substances has led to the development of anthelmintic resistance (Muchiut et al., 2018;George et al., 2021;Viana et al., 2021). It is important to understand the extrinsic factors that favor the development of anthelmintic resistance, as these have a direct impact on flock productivity (Waller, 2006). In this way, the development of modeling approaches should allow scaling-up from studies concerning expenses for ineffective anthelmintics, to enable predictions about farmers' profits. In addition to the worldwide resistance, studies of the potential environmental consequences of excessive anthelmintic administration to sheep indicate that this approach is not sustainable, so its impact should be considered. The period of maximum residue excretion is generally more transient in sheep than cattle dung, but low-level excretion may continue for longer, giving the potential for extended sub-lethal effects (Beynon, 2012). This has a huge impact on refugia in pastures and on the future efficacy of the chemical groups in flocks. According to Boxall et al. (2007), anthelmintics administered to sheep enter into the environment more through feces than urine, but the wash-off of topically applied compounds, spillage during application and inappropriate disposal of compounds also provide other important environmental entry points. Thus, beyond the negative impact on non-target species in soil and dung, contamination also reaches other ecosystems, such as in groundwater, surface water bodies and watercourses. Since the complexity of the drug-dung-fauna system is challenging to observe and quantify in vivo and is difficult to fully represent under controlled laboratory conditions, modeling techniques are also alternatives to address these issues (Cooke et al., 2017). Concerning the economic impact, the annual costs for GIN control among small ruminants outweigh all other costs for endemic disease control in developed countries. In the three largest sheep producing countries (Australia, South Africa and Uruguay), losses due to helminth infection are around US$ 222 million, US$ 45 million and US$ 42 million, respectively (Waller, 2006). In Brazil, on the other hand, studies of the economic impact of GIN infection in ruminants are more abundant in relation to cattle, mainly regarding reduced weight gain (Bianchin et al., 1995;Grisi et al., 2014;Heckler et al., 2016;Oliveira et al., 2021). In sheep, a survey of parasitic diseases diagnosed in animals in a region of the state of Rio Grande do Sul, Brazil, indicated that 42.7% of the cases diagnosed consisted of GIN infection. These gave rise to mortality of approximately 16,800 animals per year, which resulted in an economic impact estimated at US$ 400,000 (Oliveira et al., 2017). Thus, the objective of this study was to evaluate the economic impact of GIN infection among Morada Nova lambs under different conditions of parasite chemical control. Experimental groups This experiment were approved by the local Ethic Committee on Animal Experimentation (process no. CEUA 01/2020), and are in accordance with national and international ethical principles and guidelines for animal experimentation. During the rainy season, ewes were fed exclusively on pasture, and in the dry season, they received corn silage supplementation (gradually increasing as the pasture supply decreased) and feed concentrate at a proportion of 1% of the live weight. Water and mineral salt were kept available ad libitum. 
Their lambs, of both sexes, were reared together with the dams until the mean age of weaning, at 150 days of age, when the ewes were removed from the pasture area. The lambs then remained on the pasture until the end of the experiment at 210 days of age (D210), on average. Thus, 144 and 102 Morada Nova lambs were evaluated in the rainy season of 2019 and dry season of 2020, respectively. At 63 days of age (D63), all the animals received anthelmintic treatment (Ripercol® L -150F; injectable levamisole 18.8%, 9.4 mg/kg), to start the comparative experimental treatments. From D63 to D210, every 21 days, blood and feces samples were collected for hematocrit measurement (packed cell volume, PCV), individual eggs per gram of feces (EPG) counts (Ueno & Gonçalves, 1998) and fecal culture per group (Roberts & O'Sullivan, 1950). All the animals were weighed every 21 days from birth to D210. On D63 of each season, the lambs were then divided into three groups (with homogeneous means for birth weight, type of birth (single or twin), sex, EPG and PVC), according to the anthelmintic treatment (injectable levamisole 18.8%, 9.4 mg/kg) proposed: control (CT): no treatment; routine treatment (RT): treatment of all lambs every 42 days, from D105 to D189; and targeted selective treatment (TST): treated when the average daily weight gain of the lamb (DWG) was ≤ mean DWG of the TST group -(standard deviation of the TST group's DWG * 0.5) (adapted from Cintra et al., 2018), every 21 days, from D105 to D189. Lambs that presented PCV ≤ 21% received anthelmintic treatment and the most debilitated animals were supplemented with vitamin B12 in order to accelerate recovery and avoid deaths. B12 administration occurred only in CT. The average cost of time spent on labor, to feed animals, collect feces, make blood and weight measurements and do other activities, was considered similar in the three treatments during the experimental period. The cost of handling the herd was estimated at US$ 0.10/animal/day. Economic result The economic analysis (R$ 5.00 = US$ 1.00) was conducted after obtaining data from the lambs born and weaned in the rainy season of 2019 and in the dry season of 2020, when all parameters had been measured and collected. The experimental treatments were then compared regarding their economic results, which were obtained through Equations 1, 2 and 3. The gross economic result was the variable used for comparison. The gross revenue from the sale of animals (GR) was obtained through Equation 1: number of animals per treatment (NA) x average weight at 210 days (W) x market price paid in dollars (US$/kg of body weight). The gross economic result (GE) was obtained through Equation 2: GR -operating cost per treatment (OPC). The GE per animal was obtained by dividing the GE by the number of animals per treatment. The OPC (Equation 3) was obtained as the sum of the operational cost per treatment (anthelmintic -ahC; vitamin B12 -vitC; supplies such as syringes, needles etc. -supC; and labor for handling herd -lhC). Pasture maintenance costs, food supplements, depreciation and land costs were not considered, as these remained constant for all treatments. Results The results regarding GIN infection (PCV, EPG and fecal cultures) and its impact on weight gain are described more fully in a study of the consequences of different anthelmintic treatments (CT, RT and TST) on parasite control in the rainy and dry season (Santos et al., 2022). The main results are summarized in Table 1. 
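As an illustration of how Equations 1-3 combine, the sketch below implements the gross revenue, operating cost and gross economic result calculations in Python. It is only a sketch under stated assumptions: the function and field names are invented here, and the numeric example mixes values quoted in the text (38 animals, 23.92 kg) with placeholder costs and price, so its output is illustrative rather than a reproduction of Table 2.

```python
from dataclasses import dataclass

@dataclass
class TreatmentGroup:
    n_animals: int            # NA
    mean_weight_d210: float   # W, kg at 210 days
    ah_cost: float            # ahC: anthelmintic
    vit_cost: float           # vitC: vitamin B12
    sup_cost: float           # supC: syringes, needles, etc.
    labor_cost: float         # lhC: labor for handling the herd

def gross_revenue(g: TreatmentGroup, price_per_kg: float) -> float:
    """Equation 1: GR = NA x W x market price (US$/kg of body weight)."""
    return g.n_animals * g.mean_weight_d210 * price_per_kg

def operating_cost(g: TreatmentGroup) -> float:
    """Equation 3: OPC = ahC + vitC + supC + lhC."""
    return g.ah_cost + g.vit_cost + g.sup_cost + g.labor_cost

def gross_economic_result(g: TreatmentGroup, price_per_kg: float) -> float:
    """Equation 2: GE = GR - OPC; divide by NA for the per-animal result."""
    return gross_revenue(g, price_per_kg) - operating_cost(g)

# Hypothetical example (costs and price are placeholders, not the study's data):
rt = TreatmentGroup(n_animals=38, mean_weight_d210=23.92,
                    ah_cost=25.72, vit_cost=0.0, sup_cost=5.0, labor_cost=550.0)
ge = gross_economic_result(rt, price_per_kg=2.10)
print(f"GE = US$ {ge:.2f}, per animal = US$ {ge / rt.n_animals:.2f}")
```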
There was a statistical difference in the overall mean EPG between CT and RT in the rainy season and among the three treatment groups in the dry season. Regarding the PCV, RT was statistically similar to TST only in the dry season. On D210, the mean weight in RT exceeded those of the other treatments just in the rainy season. RT presented statistically higher mean DWG than the other treatments in the rainy season, but it was not different from TST in the dry season. In comparison with RT (100% anthelmintic treatment), the TST presented an average percentage of 27.2% of lambs dewormed in the rainy season and 32.8% in the dry season, while the CT presented 10.5% and 4.4%, respectively, when PCV ≤ 21%. Levamisole had the lowest efficacy in the FECRT and the highest LCs in the RESISTA-Test© for RT, in both seasons. The PCR indicated that the polymorphism was under selective pressure (p ≤ 0.05) on D210 for all treatments and seasons. In the present study, the RT animals reached D210 heavier (23.92 kg) than the TST (22.28 kg) and CT (20.94 kg), thus generating higher gross revenue (US$ 1,946.92) than the TST (US$ 1,813.42) and CT (US$ 1,750.76), in the rainy season. This difference in weight gain is believed to have been triggered by anthelmintic treatments, which were more frequent in RT than in TST or CT, as mentioned above. The same pattern was observed in the dry season for RT (US$ 1,995.80), TST (US$ 1,899.79) and CT (US$ 1,798.94) ( Table 2). Table 1. Means of eggs per gram of feces (EPG) counts, packed cell volume (PCV), average live weight on D210 (LW), daily weight gain (DWG), and number (percentage) of anthelmintic treatments in control (CT), routine (RT) and targeted selective (TST) treatments, in Morada Nova lambs, during the rainy (R) and dry (D) seasons (S). Also data concerning resistance to levamisole on D210: lethal concentrations (LC 50 , µg.mL -1 ) obtained by the RESISTA-Test©, percentages of anthelmintic efficacy in FECRT and percentages of Haemonchus contortus with resistant genotypes by PCR. During the rainy season there were 12 anthelmintic treatments in the CT, 62 in the TST and 114 in the RT (38 animals x 3 treatments). In the dry season, there were 6, 67 and 102 treatments (34 animals x 3 treatments), respectively. From these data, the total expenditure on anthelmintic treatments per experimental group and per season was calculated ( Table 2). The total cost of anthelmintic application, considering materials and labor, was US$ 17.88, US$ 25.72 and US$ 21.88 for the CT, RT and TST treatments, respectively, in the rainy season. CT had lower expenditure on application of anthelmintic due to the small number of animals (n = 12) that presented PCV ≤ 21%. However, due to vitamin B12 supplementation for debilitated animals, the total cost of the CT increased by US$ 8.64, to total US$ 26.52, which was higher than the cost of the other treatments. Spending on anthelmintics represented 1.5%, 1.3% and 1.2% of the total economic results for CT, RT and TST, respectively, in the rainy season. The RT treatment consumed a greater amount of anthelmintics, syringes and needles, which were applied every 42 days, but despite this, it presented an economic gain 14.4% higher than the CT and 7.2% higher than the TST. Due to the higher DWG obtained between D63 and D210 for this group, the economic gain per animal was US$ 51.92, while for the TST and CT it was US$ 48.42 and US$ 45.32, respectively ( Table 2). 
In the dry season, the total costs of anthelmintic application were US$ 17.14, US$ 24.83 and US$ 22.08 for the CT, RT and TST treatments, respectively. The CT had the lowest cost, as only six animals had PCV ≤ 21%. However, with vitamin B12 supplementation, US$ 4.32 was added, to total US$ 21.46, which was lower than the cost of the other treatments. The total expenditures on anthelmintics represented 1.2%, 1.3% and 1.2% of the total economic result for CT, RT and TST, respectively. Even with the lower cost, the CT showed a lower economic result, due to the lower weight gain, while the RT was 10.9% higher than the CT and 1.9% higher than the TST. The economic results per animal for the RT, TST and CT were US$ 57.97, US$ 56.90 and US$ 52.28, respectively (Table 2).

Table 3 shows the net differences in economic result per head between the treatments for the two experimental periods. In the rainy season, the economic result per head was significantly better in the RT, by US$ 6.60/animal, compared with the CT; and by US$ 3.48, compared with the TST. In the dry season, the result for the RT was better than for the CT, by US$ 5.69. However, in relation to the TST, the result for the RT was better only by US$ 1.07, a much smaller difference than what was observed in the rainy season.

Discussion

The most intense treatment of the entire flock with anthelmintic, done every 42 days (RT), proved to be the most interesting one in economic terms. In general, this strategy made it possible to obtain a higher DWG among the animals, while favoring a lower level of GIN infection (EPG) and positively impacting the weight performance of the animals. The present study also indicated that using anthelmintic treatment only when an animal is already intensely anemic may, at first sight, seem less expensive; nevertheless, the potential for deaths or for expenditure on vitamins or other support medicines contradicts this hypothesis. On the other hand, although the cost of intense anthelmintic treatment was relatively higher (albeit modestly, since levamisole is inexpensive), a relatively higher final live weight of animals was obtained in the RT, such that better economic results per head were attained under these experimental conditions, mainly in the rainy season. However, studies carried out in Brazil have shown that anthelmintic resistance develops quickly under conditions of intense and nonselective parasite control. This type of management (RT) should not be implemented as a practice on farms, as it will make sustainability of the production chain unfeasible. Parasites from sheep intensely treated with monepantel in Brazil showed resistance to this drug within three months (Albuquerque et al., 2017). Faster establishment of resistance was also detected among Morada Nova lambs in the state of São Paulo after the third treatment with levamisole, performed every 42 days (dos Santos et al., 2022). Conversely, the TST approach allowed good weight performance, very close to the RT, but promoted a reduction in the use of anthelmintics. The number of treatments in TST was almost half of the number in RT, and on average, 30% of the TST lambs received anthelmintics. Studies have shown that when TST is adopted, the interval between anthelmintic use and the development of resistance is longer. This also results in less impact on the environment and fewer drug residues in animal products (dos Santos et al., 2022).
In flocks in which TST was performed using EPG as a parameter, it took around 4.5 years for anthelmintic resistance to ivermectin to become established (Echevarria & Trindade, 1989). On farms in the southern region of Brazil on which lambs received less than three treatments of different classes of anthelmintics per year, the anthelmintic resistance rate detected was 6.7%; while when they received four to six treatments per year, it was 44.9%; and when they received seven or more treatments, it was 48.3% (Echevarria et al., 1996). In the Netherlands, the first report of resistance to monepantel occurred on a farm where this drug was administered more carefully for two years, twice a year for ewes and sires, and on average three times a year for lambs (van den Brom et al., 2015). Anthelmintics have been the main tool adopted for parasite controls and have usually positively impacted the welfare and health of domestic and production animals (Pasiani et al., 2012). However, in a scenario of highly resistant parasites (Raschia et al., 2021), refugia preservation, favored by the TST approach, has become essential for maintaining the efficacy of anthelmintics, which must be used carefully (George et al., 2021). As highlighted by Sauermann et al. (2020), any parasite in a pasture is only in refugia if it successfully develops to the adult stage and produces viable offspring. Otherwise, it does not contribute to the population genetics and must be disregarded in terms of refugia. Thus, there is a tendency to a shift away from whole-flock treatments to the TST approach, which will decrease the negative impacts of treatments on dung fauna populations by providing population refugia. This provides novel evidence for the benefits of TST regimens to local food webs (Cooke et al., 2017). Another point concerning the overuse of anthelmintics due to resistance is the presence of residues in food and in the environment. The rise in global temperature is leading to increased occurrence and alterations in the distribution of many infectious diseases. Recent reports by the Intergovernmental Panel on Climate Change and the Food and Agriculture Organization of the United Nations identified an increase in livestock diseases, including parasite infections, as a result of changing climate, with a negative impact on food security (Sauermann et al., 2020). A study carried out in in the state of Ceará, Brazil, indicated that helminthiasis accounted for 81.9% of the diseases diagnosed in goats and sheep (Pinheiro et al., 2002). In the state of Paraná, a study on postmortem diagnoses among 177 sheep showed that haemonchosis was the main disease diagnosed, affecting 53 (20.87%) animals (Sprenger et al., 2015). In the central region of Rio Grande do Sul, it was found that parasitosis accounted for 24.3% of all diagnoses and that 62.5% of these parasitoses consisted of haemonchosis (Rissi et al., 2010). In a survey of the most frequent parasitic diseases in sheep in the southern region of Rio Grande do Sul, covering the period from 1978 to 2014, Oliveira et al. (2017) found that 33.6% of the diagnoses made were of parasitic infections and that mixed gastrointestinal parasites (42.7%) and haemonchosis (35.4%) together accounted for 78.1% of the diagnoses among sheep of all ages. 
Based on the estimated mortality rate of 5% for the species (Rio Grande do Sul, 2010) and the percentage of sheep diagnosed with parasitic infection (33.6%), the estimated annual economic losses were US$ 400,000 in the flocks of the southern region of Rio Grande do Sul alone, which total around one million sheep (Oliveira et al., 2017). Indirect losses such as the decrease in production (meat, milk and wool) and expenditure on medicines and veterinary care were not considered. Based on the aforementioned data, it was then possible to estimate the losses due to parasitic infections in all Brazilian flocks (Table 4), considering an average percentage parasitic morbidity of 30.0% (range of values between 24.3% and 33.6%), with 5.0% mortality and 78.1% parasitosis, which have been observed in sheep flocks (Rissi et al., 2010;Oliveira et al., 2017). The current average value of US$ 44.00 per lamb was also considered. Thus, we calculated that these assumed values added up to estimated total losses of US$ 107.52 million per year. These losses may be even higher if the expenditure on medicines and veterinary assistance for prevention of parasitic diseases are added ( Table 4). From the data of the present study, if we consider a national mortality rate of 295,500 sheep per year due to parasitic diseases, the losses due to deaths in the Brazilian sheep industry can be estimated as US$ 13 million per year. The biggest losses are caused by the reduction in weight gain and these reach US$ 94.5 million per year (Table 4). This was obtained by multiplying the number of animals affected by parasitic diseases by the average value (US$ 6.14) of the difference in the gain through the TST in relation to CT, between the rainy season (US$ 6.60) and dry season (US$ 5.69) ( Table 3). Given that information on production losses in economic studies is usually obtained from control animals (which are kept untreated or do not have GIN infection), estimates of economic losses represent potential losses expected in the absence of parasite control measures (Grisi et al., 2014). On the other hand, considering that the current scenario of parasite resistance results in frequent anthelmintic treatment, generally using commercial products without efficacy against GIN, this increases the damage to production systems by including the costs of ineffective control. Therefore, a need exists for further studies to evaluate the contribution of anthelmintic resistance to the economic damage resulting from parasitic diseases and the impact of anthelmintics on the environment and human health. The reduction in weight gain is a consequence of parasitism, as also are occurrences of deaths. The latter was avoided in the present study due to animal welfare issues, but it represents 12.1% (US$ 13 million) of the total annual losses caused in sheep farming. The most significant losses for Brazil occur in the northeastern region, which has the largest number of small ruminants. Regarding weight loss, a study on Dorper lambs in Brazil was carried out with the following groups: infected-supplemented (G1), control-supplemented (G2), infected-basal diet (G3) and control-basal diet (G4). The control groups received anthelmintic treatment every 14 days to minimize GIN infection. The lambs that received anthelmintic treatment (G2 and G4) showed 17.1% and 26.7% greater daily weight gain, respectively. For the supplemented lambs, the difference was smaller, as their control group was also supplemented (Starling et al., 2019). 
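As a cross-check of the national-scale figures discussed above, the arithmetic behind Table 4 can be reproduced with the values quoted in the text (19.7 million sheep, 30% parasitic morbidity, 5% mortality, 78.1% GIN among parasitoses, US$ 44.00 per animal lost and US$ 6.14 of reduced gain per affected animal per year). This is only a sketch of the calculation as we read it, not the authors' worksheet.

```python
# National-scale loss estimate for GIN in Brazilian sheep (inputs from the text/Table 4).
national_flock = 19_700_000   # head (IBGE, 2019)
parasitic_morbidity = 0.30    # share of disease diagnoses that are parasitic
mortality_rate = 0.05         # mortality among those cases
gin_share = 0.781             # share of parasitoses that are gastrointestinal
value_per_animal = 44.00      # US$ per lamb
loss_per_affected = 6.14      # US$ reduced weight gain per affected animal per year

deaths = national_flock * parasitic_morbidity * mortality_rate
death_losses = deaths * value_per_animal

affected = national_flock * gin_share
weight_gain_losses = affected * loss_per_affected

print(f"deaths per year:          {deaths:,.0f}")                                  # ~295,500
print(f"losses from deaths:       US$ {death_losses / 1e6:.1f} million")           # ~13.0
print(f"losses from weight gain:  US$ {weight_gain_losses / 1e6:.1f} million")     # ~94.5
print(f"total:                    US$ {(death_losses + weight_gain_losses) / 1e6:.1f} million")  # ~107.5
```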
The data from the present study showed that the final weight (D210), in relation to the initial weight (D63), was 28.8% and 15.5% higher in the rainy and dry seasons, respectively. These percentages were close to those of the above study. In addition, the present study, carried out for 147 days, showed DWGs that were much lower than those of Starling et al. (2019), thus indicating the relevance of breed and food supplementation level in economic estimates. As is well known, supply of adequate food according to animal category, especially in relation to crude protein, is of great importance for parasitism control and for reducing the need for anthelmintic treatment, thereby contributing to the sustainability of the chain (Bricarello et al., 2005; Cériac et al., 2019). The values estimated here would need to be better evaluated at the regional level. Parasite occurrence depends, for example, on elements such as temperature, rainfall, soil, topography, pasture type and management, species, breed, age, physiological and nutritional status and animal management (Ruas & Berne, 2001). Morada Nova lambs are resilient to GIN and adults are resistant (Issakowicz et al., 2016; Toscano et al., 2019; Haehling et al., 2021; Okino et al., 2021). Thus, we believe that the values calculated here are underestimates in relation to more susceptible breeds, such as Suffolk and Ile de France (Amarante et al., 2004). In addition, the anthelmintic adopted here has a lower cost than monepantel-based products, for example. The data from the literature were reliable for our calculations. Thus, we expect that this model framework can be adapted to any system, anywhere, given workable parameter estimates. Data can be adjusted with local information for the variables in Table 2, which will feed the regional estimates presented in Table 4. Regardless of the limitations of studies that are used to develop economic estimates, especially when extrapolated from local situations to a national scale, the picture obtained here demonstrates the magnitude and importance of parasitism in Brazil and the impossibility of profitable livestock-rearing without adequate control of GIN (Grisi et al., 2014).

Notes to Table 4: *considering that 30% of disease diagnoses in sheep are parasitic diseases and that the mortality rate is 5%; **considering that the parasite occurrence rate in the flock is 78.1%; ***estimated loss of US$ 6.14 per animal per year; ****considering US$ 44.00 per animal.

In addition, fast establishment of anthelmintic resistance, which currently occurs in the majority of the national flocks, has an important impact that is difficult to incorporate into the calculation of the economic damage resulting from parasitic diseases. Associated with that, there is a lack of knowledge or understanding of the functional consequences of eco-toxic residues' effects. Thus, an integrated approach between ecologists and economists should be further explored, in order to increase understanding of the economic importance of maintaining functional pasture systems, by attaching an economic variable to ecosystem functions (Beynon, 2012). Future studies should also include economic analysis, in order to balance short-term production gains with longer-term environmental impacts (Cooke et al., 2017).

Conclusion

The animals in the RT reached 210 days of age heavier than those in the CT, thus generating higher gross revenue in both seasons.
In relation to the CT, it was possible to verify that use of RT provided a significantly better economic result in both seasons, and the expenditure on anthelmintics had a low impact on that result. However, under conditions of intense use of anthelmintics (i.e. in the RT group), resistance is quickly established in parasites, thus making the production system unfeasible. The TST approach has potential long-term economic benefits and can play an important role in reducing environmental impacts. The panorama discussed here demonstrates the importance of parasitism by GIN in Morada Nova flocks and the impossibility of profitable livestock-rearing without rational parasite control that includes approaches aimed at delaying the development of anthelmintic resistance.
2022-08-24T15:12:50.980Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "6ecbcb4c8e805cb55471b988f61d0929cba31654", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbpv/a/Ldv5LHX96qSW9vwQTFXHK7y/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d93f5f96c99f8d8491a868990862894152c9565e", "s2fieldsofstudy": [ "Economics", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
9931213
pes2o/s2orc
v3-fos-license
Immunotherapy alone vs no maintenance treatment in acute myelogenous leukaemia.

Forty-one adult patients with acute myelogenous leukaemia entered remission induced by daunorubicin and cytosine arabinoside, and subsequently received 6 weeks' consolidation therapy with cyclophosphamide plus 6-thioguanine. They were then randomized to either immunotherapy consisting of intradermal BCG plus allogeneic cells or to "no maintenance". Patients receiving immunotherapy had significantly longer remission (P = 0.039) and survival from remission (P = 0.044) as assessed by the log-rank test. The median duration of first remission for 21 patients receiving immunotherapy was 35.14 weeks, compared with 19.71 weeks for 20 patients on no maintenance, and the median survival from remission was doubled in patients receiving immunotherapy. The value of adequate consolidation chemotherapy is confirmed by the comparatively long first remissions in both groups compared with our previous trials, whilst avoidance of maintenance chemotherapy possibly allowed frequent second remissions and similar post-relapse survival in patients from both treatment arms.

Following Mathé's (1969) encouraging results with immunotherapy in acute lymphoblastic leukaemia, and a similar potential later shown in acute myelogenous leukaemia (Powles et al., 1971), we initiated a pilot study of active immunotherapy used alone during remission in adult patients with acute myelogenous leukaemia (AML). This study, which showed easy reinduction with consequent prolongation of survival after relapse (Freeman et al., 1973), was later followed in Manchester by a randomized trial under the aegis of the MRC which compared immunotherapy with a combination of immunotherapy and chemotherapy (Harris et al., 1978a). This trial again suggested that immunotherapy (when given without maintenance chemotherapy) improved post-relapse survival. However, a halving of first remission length compared with the pilot study was attributed to the omission of cytoreduction from the MRC protocol. It was also unclear whether immunotherapy itself was therapeutically beneficial or whether its apparent advantages were due to the avoidance of drug resistance induced by maintenance chemotherapy. We designed our present trial to remove these uncertainties. Consolidation chemotherapy was reintroduced following identically induced remission. Patients were then randomized to either immunotherapy alone or a "no-maintenance" arm.
This trial protocol allowed us for the first time to assess the value of immunotherapy uncomplicated by simultaneous maintenance chemotherapy.

PATIENTS AND METHODS

From 1 January 1975 to 31 July 1978, 41 patients who entered complete and consolidated remission were randomized to receive either immunotherapy alone (RI, 21 patients) or "no maintenance" treatment (RO, 20 patients). The follow-up of both groups of patients is complete to 15 May 1979. All patients were seen at weekly intervals for clinical assessment and blood counts, whilst marrow examinations were done at monthly intervals. Marrows were reported on by a number of different individuals, the majority of whom were not aware of the treatment arm to which the patient had been randomized. Details of induction, criteria for remission and relapse and administration of immunotherapy are described elsewhere (Freeman et al., 1973; Harris et al., 1978a).

Statistical methods.-Although conventional median values are given, Kaplan-Meier life tables and log-rank analyses were used to test the statistical significance of differences in remission length and survival, using exact variance calculations without continuity corrections (Peto et al., 1977). Two-tailed P values are quoted since this provides a more rigorous test, making no prior assumptions in favour of immunotherapy. Data were analysed using a version of computer programme SURV-C.

RESULTS

Data for each patient randomized are given in detail in the Appendix. Four different measures of outcome were examined. Corresponding life tables are shown in Figs 1, 2 and 3, except for survival from date of treatment, which is similar in shape to that from first remission.

Duration of first remission.-The median remission length of 35.14 weeks in immunotherapy patients was 15.43 weeks (78%) longer than in the "no maintenance" arm; this difference is statistically significant (P = 0.039).

Duration of survival from remission.-The median survival of 90.9 weeks in our RI patients was approximately double that of the RO patients; the difference is statistically significant (P = 0.044), with a death rate ratio (0.48) in favour of the RI patients.

Duration of survival after first relapse.-The difference between RI and RO patients is not significant (χ² = 0.89).

DISCUSSION

When controls have received immunotherapy and simultaneous chemotherapy, the effects of these forms of treatment cannot be separated (Powles et al., 1979; Freeman et al., 1973). In our second trial (Harris et al., 1978a) we deleted consolidation chemotherapy in accordance with the MRC protocol (MRC, 1978) and so reduced first-remission length that interpretation was difficult. A further complication was the randomization (according to the MRC (1978) protocol) to remission maintenance with immunotherapy alone or immunotherapy with simultaneous chemotherapy, which we now believe interferes with the effects of immunotherapy. We designed our third trial so as to overcome these problems. Firstly, we reintroduced a consolidation phase after induction chemotherapy, in an attempt to further reduce leukaemic cell mass. We then randomized patients to one of 2 therapeutic arms: immunotherapy alone (RI) or "no maintenance" (RO). It was then possible to assess the value of immunotherapy in patients in remission with minimum leukaemic cell mass and uncomplicated by simultaneous chemotherapy.
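The log-rank comparison described under "Statistical methods" above can be illustrated with modern tools. The sketch below is not the SURV-C program used in the trial; it uses the Python lifelines library, and the remission durations shown are invented numbers included purely to show the shape of such an analysis.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical remission durations (weeks) and event indicators
# (1 = relapse observed, 0 = censored); these are not the trial data.
ri_weeks = np.array([12, 20, 35, 36, 40, 52, 60, 80, 104, 120])
ri_event = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
ro_weeks = np.array([8, 10, 15, 18, 20, 22, 30, 34, 40, 60])
ro_event = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])

# Kaplan-Meier life tables for each arm.
km_ri = KaplanMeierFitter().fit(ri_weeks, event_observed=ri_event, label="RI")
km_ro = KaplanMeierFitter().fit(ro_weeks, event_observed=ro_event, label="RO")
print("median remission (RI, RO):",
      km_ri.median_survival_time_, km_ro.median_survival_time_)

# Two-sided log-rank test of the difference in remission length.
result = logrank_test(ri_weeks, ro_weeks,
                      event_observed_A=ri_event, event_observed_B=ro_event)
print(f"log-rank p-value: {result.p_value:.3f}")
```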
Over a follow-up period varying from 10 months to 4 years, immunotherapy patients in this trial have had significantly longer remissions and survival than patients receiving no maintenance treatment. It is also noteworthy that there was no overt CNS involvement in patients on immunotherapy, compared with 3 RO patients with leukaemic CNS disease, although Peto et al. (1977) have indicated the difficulties in the analysis of CNS involvement. Our immunotherapy patients fared as well as those of Powles et al. (1977b), who used a "superior" form of immunotherapy (BCG and cells mixed together), both in terms of length of first remission and in the proportion remaining in remission for more than 2 years. It is particularly encouraging that the significant differences between our RI and RO patients appear to be genuine, and not due to unusually poor remission lengths or durations of survival in the controls. For example, the RO median remission length of almost 20 weeks is comparable with chemotherapy medians in other studies (Reizenstein et al., 1978; MRC, 1978), while the median of 22 weeks for survival after relapse in the RO group is similar to that reported by the MRC (1978) for patients receiving immunotherapy plus maintenance chemotherapy, and is better than chemotherapy medians (MRC, 1978, 1979). In this trial second-remission rates and post-relapse survival are similar in RO and RI patients, confirming our original suggestion (Freeman et al., 1973) that the poor post-relapse performance of RI plus chemotherapy (referred to as I + C) compared with RI may have been partly due to maintenance chemotherapy. Indeed, the results of our present (third) trial suggest that maintenance chemotherapy may worsen the outlook for patients who relapse and should, unless otherwise indicated, be omitted. Thus, although RI patients had significantly longer first remissions and survival than RO patients, there was no significant difference between RI and RO in terms of post-relapse survival or second-remission rates, whilst both groups of patients have done better than would be expected from the published data on post-relapse performance of patients receiving maintenance chemotherapy (Powles et al., 1977a; Whittaker & Slater, 1977; Gale & Cline, 1977; MRC, 1978, 1979). We suggest that chemotherapy seems unnecessary for maintenance if adequate induction and consolidation treatment has been given, and is better reserved for reinduction after first relapse, detected early by monthly marrow examination whilst the leukaemic cell mass is still small (Harris et al., 1978a). In our opinion, based on 8 years' experience of immunotherapy in AML, there is no ethical objection to the omission of maintenance chemotherapy, so long as no form of treatment is available which will selectively ablate all leukaemic cells. The value of consolidation chemotherapy emerges from a comparison of this with our earlier trials. Thus, first-remission length was reduced to 11.5 weeks in the immunotherapy-alone arm of our second trial (Harris et al., 1978a), in which consolidation chemotherapy was not used, and should be compared with our trials which did include consolidation, notably the superior results of 23 weeks in the first trial (Freeman et al., 1973) and 35.14 weeks in the present trial. It has been emphasized (MRC, 1978) that rapid changes may occur in small trials as patients relapse or die.
However, this tendency decreases the longer patients remain in remission (Freirich et al., 1978), and our results, taken with those of others, confirm that immunotherapy does prolong first remission and survival. However, it may fairly be asked whether the definite but modest improvements attributable to immunotherapy justify the considerable logistic problems involved. We have no doubt of the heuristic value of these trials, which justifies further work to identify and explain the underlying immunopathological mechanism. In this connection we agree with Murphy & Hersh (1978), who emphasize the need for better forms of immunotherapy, and our studies of genetic markers in AML (Harris et al., 1977, 1978b) convince us that certain categories of AML patients will respond better than others. As a result of our trials, we further suggest that maintenance chemotherapy as currently used may actually worsen prognosis, as well as rendering unacceptable the quality of life of many AML patients.
2014-10-01T00:00:00.000Z
1980-03-01T00:00:00.000
{ "year": 1980, "sha1": "c5e4361e9e41c434f22d48eaed5cdcd04d517c94", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc2010236?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c5e4361e9e41c434f22d48eaed5cdcd04d517c94", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }